Abstract

A memory module system with a global shared context. The memory module system can include a plurality of memory modules and at least one processor, which can implement the global shared context. The memory modules of the system can provide the global shared context at least in part by providing an address space shared between the modules and the applications running on them. The address space sharing can be achieved by making the logical addresses global to the modules, with each logical address associated with a certain physical address of a specific module.
Claims

1. A system comprising: a plurality of memory modules connected to provide physical memory, a memory module of the plurality of memory modules comprising: a plurality of partitions of the physical memory, wherein a partition of the plurality of partitions is associated with at least one physical memory address; and a processor configured to: execute code, wherein the code, when executed, causes the processor to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.

2. The system of claim 1, wherein the shared memory address comprises a sequence of bits, and wherein the mapping of the shared memory address to the physical memory address is based at least in part on values of predetermined bits in the sequence of bits.

3. The system of claim 2, wherein the predetermined bits comprise two or more bit groups, wherein a first group of the predetermined bits provides a mapping to a partition of the plurality of partitions, and a second group of the predetermined bits provides a mapping to a data location within the partition.

4. The system of claim 3, wherein the predetermined bits comprise four bit groups, wherein a third group of the predetermined bits provides a mapping to a cache set comprising one or more partitions of the plurality of partitions, and values of a fourth group of the predetermined bits at least partially provide tag information for the corresponding cache set.

5. The system of claim 1, wherein the mapping of the virtual memory address to the shared memory address is based on a page table, wherein an entry in the page table provides a virtual-memory-address-to-shared-memory-address mapping, wherein the page table can be read and modified by the processor, and wherein the page table is stored in the plurality of partitions.

6. The system of claim 5, wherein each memory module of the plurality of memory modules maintains a corresponding portion of the page table, and wherein the corresponding portion of the page table of a memory module of the system provides mappings for the physical memory addresses of that memory module.

7. The system of claim 6, wherein modification of the page table is performed via a modifying device that communicates a message to the plurality of memory modules, and wherein at least one of the memory modules is configured to send to the modifying device an acknowledgment that the communicated message has been received and the corresponding modification has been made.

8. The system of claim 5, wherein one or more memory modules of the plurality of memory modules maintain page tables for themselves and the other memory modules of the system, wherein modification of the page tables is performed via a modifying device broadcasting a message to the plurality of memory modules, and wherein one or more of the plurality of memory modules performs the modification on its copy of the page table.

9. The system of claim 1, wherein a virtual memory address of the system comprises a first bit sequence, wherein a shared memory address of the system comprises a second bit sequence, and wherein the mapping of the virtual memory address to the shared memory address is based at least in part on mapping the first bit sequence to the second bit sequence and mapping the second bit sequence to the first bit sequence.

10. The system of claim 9, wherein the first bit sequence of the virtual memory address is at least partially offset from the second bit sequence of the shared memory address.

11. The system of claim 9, wherein the second bit sequence of the shared memory address is used as a cache address of a cache, wherein the cache comprises a group of partitions of the plurality of partitions.

12. The system of claim 11, wherein the cache is a set-associative cache.

13. The system of claim 1, wherein a virtual address memory space of a plurality of applications comprises shared memory addresses of forked processes and merged processes of the plurality of applications, and wherein the plurality of memory modules comprise a plurality of synchronization primitives for synchronizing memory access operations of the plurality of applications.

14. A method comprising: executing code, wherein the code causes a processor to access a virtual memory address; mapping the virtual memory address to a shared memory address in a logical memory space shared among a plurality of memory modules; and mapping the shared memory address to a physical memory address of at least one partition of the plurality of memory modules.

15. The method of claim 14, wherein the mapping of the shared memory address is used by an operating system of a device.

16. The method of claim 15, wherein the mapping of the shared memory address is modified based at least on user interaction with the device.

17. An apparatus comprising: a plurality of memory modules, a memory module of the plurality of memory modules comprising: a plurality of partitions of physical memory, a partition of the plurality of partitions being associated with at least one physical memory address; and a processor configured to: execute code to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.

18. The apparatus of claim 17, wherein at least one memory module of the plurality of memory modules comprises a part of a graphics processing pipeline distributed among the plurality of memory modules.

19. The apparatus of claim 17, wherein the shared memory address comprises a sequence of bits, and wherein the mapping of the shared memory address to the physical memory address is based at least in part on values of predetermined bits in the sequence of bits.

20. The apparatus of claim 17, wherein the mapping of the virtual memory address to the shared memory address is based on a page table, wherein an entry in the page table provides a virtual-memory-address-to-shared-memory-address mapping, wherein the page table can be read and modified by the processor, and wherein the page table is stored in the plurality of partitions.
Memory module system using global shared context

Technical field

At least some embodiments disclosed herein relate to a memory module system that utilizes a global shared context.

Background

Some conventional examples of memory modules include single in-line memory modules (SIMMs) and dual in-line memory modules (DIMMs). A SIMM differs from a DIMM in that the contacts on a SIMM are redundant on both sides of the module, whereas a DIMM has separate electrical contacts on each side of the module. DIMMs are generally used in current computers that are large enough to house one or more of them, and a DIMM may include multiple dynamic random access memory (DRAM) integrated circuits. Smaller computers, such as notebook computers, usually use small-outline dual in-line memory modules (SO-DIMMs).

In addition, memory components can be integrated on a system-on-a-chip (SoC). A SoC is an integrated circuit (IC) that integrates computer components in a single chip. Common components in a SoC include a central processing unit (CPU), memory, input/output ports, and auxiliary storage. All the components of a SoC can be on a single substrate or microchip. A SoC may include various signal processing functions and may include dedicated processors or co-processors, such as a graphics processing unit (GPU). Through tight integration, a SoC can consume less power than a conventional multi-chip system with equivalent functionality. This makes SoCs beneficial for mobile computing devices, such as smartphones and tablet computers. SoCs can also be used in embedded systems and the Internet of Things.

Summary of the invention

In one aspect, the present application relates to a system including: a plurality of memory modules connected to provide physical memory, the memory modules of the plurality of memory modules including a plurality of partitions of the physical memory, wherein a partition of the plurality of partitions is associated with at least one physical memory address; and a processor configured to: execute code, wherein the code, when executed, causes the processor to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.

In another aspect, the present application relates to a method including: executing code, wherein the code causes a processor to access a virtual memory address; mapping the virtual memory address to a shared memory address in a logical memory space shared among multiple memory modules; and mapping the shared memory address to a physical memory address of at least one partition of the plurality of memory modules.

In another aspect, the present application relates to a device that includes: a plurality of memory modules, the memory modules of the plurality of memory modules including a plurality of partitions of a physical memory, the partitions of the plurality of partitions being associated with at least one physical memory address; and a processor configured to: execute code to access a virtual memory address; map the virtual memory address to a shared memory address in a logical memory space shared among the plurality of memory modules; and map the shared memory address to a physical memory address of a partition of the plurality of partitions.
Description of the drawings

The present disclosure will be more fully understood from the detailed description given below and the accompanying drawings of various embodiments of the present disclosure.

Figures 1 and 3 illustrate example memory module systems according to some embodiments of the present disclosure. Figure 2 illustrates an example memory module according to some embodiments of the present disclosure. Figure 4 illustrates an example networked system including a computing device according to some embodiments of the present disclosure. Figures 5 to 7 illustrate flowcharts of example operations that may be performed by aspects of the memory modules depicted in Figures 1 to 4 according to some embodiments of the present disclosure. Figures 8 and 9 illustrate example physical memory partitions and shared memory address bit groups mapped to at least one partition and at least one data location within the partition according to some embodiments of the present disclosure.

Detailed description

At least some embodiments disclosed herein relate to a memory module system that utilizes a global shared context. The memory module system may include multiple memory modules, where each module is coupled to at least one processor, and the memory modules may implement a global shared context. The memory module system may be, include, or be part of a SoC, or the memory modules of the memory module system may be, include, or be part of SoCs. In some embodiments, the SoC in these instances may include a central processing unit (CPU), a GPU, and/or a neural processing unit (NPU). For example, some of the memory components described herein may be integrated on the SoC or PCB of a device, a computer cluster, a PC, a mobile device, or an embedded device; equivalently, the SoC or PCB of such a device may integrate some of the memory components described herein.

The memory modules of the system may provide an address space shared between the modules and the applications running on the modules and/or the coupled processors. Address space sharing can be achieved by making the logical addresses global to the modules, with each logical address associated with a certain physical address of a specific module. In some embodiments, the size of the logical address space may equal the sum of the physical address spaces of the modules in the memory module system. For example, if there are eight modules, the logical-to-physical association (or mapping) can be realized by a predetermined first group of 3 bits at a predetermined position in the address (3 bits provide 2^3, or eight, numbers, with each of the eight modules corresponding to a number). The remainder of the logical address bits, or a part thereof (for example, a second bit group), can be mapped to a specific physical address within each module using a second mapping scheme. These (first and second) bit groups need not be adjacent (for example, adjacent bits in an address), and may change dynamically or as needed, depending on decisions made by the system (for example, the operating system) and/or the user. The second mapping scheme can be as simple as a one-to-one mapping, or it may be more complicated, such as round-robin scheduling among the banks of each memory device in the module, a modulus or interleaving on the module ID, and so on.
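As a concrete illustration of the bit-group scheme above, the following sketch decodes a logical address into a module number and an intra-module physical address. It assumes, purely for illustration, a 32-bit logical address whose top 3 bits select one of eight modules and whose remaining bits map one-to-one within the module; as noted, the disclosure allows the bit groups to sit elsewhere and to change dynamically.

```python
# Minimal sketch of the logical-to-physical mapping described above.
# Assumptions (illustrative, not from the disclosure): 8 modules selected
# by the top 3 bits of a 32-bit logical address; the remaining bits map
# one-to-one to a physical address within the selected module.

MODULE_BITS = 3   # 2**3 = 8 modules
ADDR_BITS = 32

def decode_logical(logical_addr: int) -> tuple[int, int]:
    """Split a logical address into (module_id, intra-module physical address)."""
    module_id = logical_addr >> (ADDR_BITS - MODULE_BITS)             # first bit group
    physical = logical_addr & ((1 << (ADDR_BITS - MODULE_BITS)) - 1)  # remaining bits
    return module_id, physical

module_id, physical = decode_logical(0xA001_2345)
print(module_id, hex(physical))  # -> 5 0x12345 (module 5, one-to-one offset)
```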
Applications running on an embodiment of the memory module system may have their own virtual address spaces. In some embodiments, the association between the virtual spaces of the various applications and the logical address space can be implemented through page tables. Such a table can provide virtual-to-logical addressing and can further associate each logical address with the physical address at a module. The page tables can be read and modified by the processors of the memory modules, and the page tables can be stored in the modules. Alternatively, a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical and/or physical addresses can be used. Instances of such an architecture may include set associativity, such as the set associativity used by a set-associative cache. In addition, the association between the virtual spaces of the various applications and the logical address space can be implemented through a page table, a predetermined architecture and/or algorithm, or a combination thereof.
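The page-table association described above can be pictured with a minimal sketch. The dictionary layout, the 4 KiB page granularity, and all entry values below are illustrative assumptions, not the disclosed layout:

```python
# Simplified sketch of the virtual-to-logical (shared) page table described
# above. The structure and 4 KiB page granularity are illustrative assumptions.

PAGE_SHIFT = 12  # 4 KiB pages (assumed)

# Per-application page table: virtual page number -> logical (shared) page number.
page_table: dict[int, int] = {
    0x00400: 0x1F003,  # example entries; the values are arbitrary
    0x00401: 0x1F004,
}

def virtual_to_shared(vaddr: int) -> int:
    """Translate a virtual address to a shared (logical) address via the page table."""
    vpn, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
    lpn = page_table[vpn]  # a missing entry (KeyError) plays the role of a page fault
    return (lpn << PAGE_SHIFT) | offset

print(hex(virtual_to_shared(0x00400123)))  # -> 0x1f003123
```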
In some embodiments, in order to support the association between the virtual spaces of the various applications and the logical address space, the system may use synchronization primitives and/or semantics. The memory module system may also use messaging (for example, point-to-point, broadcast, multicast, or targeted by certain IDs) and/or atomic operations for critical data. Such functionality can be implemented via a corresponding hardware mailbox at each module. For example, a mailbox can be implemented at each memory module processor of the system or at each memory bank of a module.

In some embodiments, because the global shared context is in effect, a large amount of sharing can take place, and the applications using the system can be composed of various components, shared libraries, and the like. This is especially true in instances where applications have the same origin or root process. Therefore, when a process is forked, instead of copying the context, the context can be extended by preserving sharing in the logical address space that supports the global context. Since the context is global, the processors associated with the memory modules do not need to perform context switching among many applications. The virtual contexts can be cached and maintained in the memory module system (in contrast to context switching by a centralized processor architecture). The virtual contexts can be executed by the multiple processors of the memory modules in the logical space with the aid of synchronization primitives and addressing. Even if a single application context is distributed among several memory modules, it is possible to execute the application context synchronously via such a mechanism.

In some embodiments, graphics pipelines (for example, graphics pipelines for geometry, projection, lighting, clipping, rasterization, shading, screen streaming, and/or other functions) can be distributed among several memory modules of the system. In some embodiments, since each memory module may include a SoC with a GPU, the pipeline may be executed using single instruction multiple data (SIMD) operations and/or via data exchange over high-bandwidth wired and/or wireless interconnections between the modules.

In some embodiments, in order to efficiently execute task-level parallelism (for example, multiple applications), each processor on each memory module may only move between contexts cached in its memory; therefore, each processor can continuously run the byte code of applications in the logical space. In this sense, the operating system (OS) of the device and the running applications can be combined to represent a global shared context. The value of the shared context lies in it being kept in memory, especially in non-volatile memory, where it continuously evolves and is maintained as the user uses the device or the system that includes the memory modules.

Figures 1 and 3 illustrate example memory module systems 100 and 300 according to some embodiments of the present disclosure. Figure 2 illustrates an example memory module 202 according to some embodiments of the present disclosure. The memory module 202 may be a module of the system 100 or the system 300. Figures 1 and 2 also illustrate example memory modules 102a, 102b, and 202 according to some embodiments of the present disclosure, and such modules may be part of the system 100 or the system 300.

Figure 1 shows a memory module system 100 having a plurality of memory modules (see, for example, memory modules 102a and 102b), which may at least partially implement the global shared context 101 (for example, at least via the processors of the memory module system; see, for example, processors 106a, 106b, 106c, and 106d). In addition, FIG. 1 shows that each of the memory modules of the system 100 has multiple physical memory partitions (see, for example, physical memory partitions 104a, 104b, 104c, 104d, 104e, and 104f). Each memory module of the system 100 also has at least one processor (see, for example, processors 106a, 106b, 106c, and 106d). As shown, different embodiments of the memory module system 100 may have memory modules with one processor (e.g., processor 106a), two processors (e.g., processors 106a and 106b), or more than two processors. It should be understood that the dashed boxes represent optional components. In addition, it should be understood that embodiments of the memory modules in the memory module system 100 may have two physical memory partitions or more than two physical memory partitions.

Each memory partition can be made up of elements of a memory subsystem or architecture, such as memory dies, banks and ranks, memory chips, memory arrays and sub-arrays, memory rows and columns, memory decks, and stacks.

Each memory module of the system 100 is also shown as having a bus (see, for example, buses 110a and 110b, where each bus may include multiple buses) that connects the multiple physical memory partitions of the memory module (see, for example, physical memory partitions 104a to 104c and physical memory partitions 104d to 104f) and the processors of the module (see, for example, processors 106a to 106b and processors 106c to 106d). The bus of a memory module (see, for example, buses 110a and 110b) may be part of the bus of the memory module system 100 (see, for example, the one or more buses 116). One or more buses 116 may connect each memory module of the memory module system 100 to each other and to other parts of the memory module system.
The one or more buses 116 may also connect the memory module system 100 and parts of the memory module system to other parts of the host system hosting the memory module system. In some examples, the memory module system 100 may be part of, and installed in, the host system. In addition, one or more of the processors of each memory module of the memory module system 100 (see, for example, processors 106a to 106b and 106c to 106d) can arbitrate data communicated via the buses of the system 100 (see, for example, buses 110a, 110b, and 116).

In some embodiments, as shown in FIGS. 1 to 3, the memory module system (see, for example, memory module systems 100 and 300) includes a plurality of memory modules (see, for example, memory modules 102a to 102b and memory modules 302a, 302b, and 302c), and each of the multiple memory modules (see, for example, memory modules 102a, 102b, and 202) includes multiple physical memory partitions (see, for example, partitions 104a to 104c, partitions 104d to 104f, and partitions 205a, 205b, 205c, 205d, 205e, 205f, 205g, 205h, and 205i). Each of the plurality of physical memory partitions may be associated with at least one physical memory address. Additionally, in such embodiments, the memory module system includes at least one processor (see, for example, processors 106a to 106b, 106c to 106d, and 206a to 206b). Each processor of the memory module system may be associated with at least one physical memory partition among the plurality of physical memory partitions.

In such embodiments and others, each processor of the memory module system (see, for example, processors 106a to 106b and 106c to 106d of the system 100) may be configured to execute code and to access the physical memory of the system (for example, the physical partitions of the system memory) based on the virtual memory addresses decoded from the code in relation to the memory accesses, where the code can be part of a program, application, software module, library, operating system (OS), and so on. Each processor of the system (see, for example, processors 106a to 106b and 106c to 106d of the system 100) may also be configured to map each of the virtual memory addresses to a shared memory address that is associated with the physical memory of the multiple memory modules and shared among the multiple memory modules. In some examples, each processor of the system may be configured to map each of the virtual memory addresses to a shared memory address associated with at least one partition of the physical memory of the multiple memory modules (see, for example, partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i) and shared among the multiple memory modules. The global shared context (see, for example, global shared context 101) may include any of the aforementioned mappings performed by the processors of the memory module system.

In such and other embodiments, each processor of the memory module system (see, for example, processors 106a to 106b of the system 100) may be configured to receive, from other processors and memory modules of the memory module system (see, for example, processors 106c to 106d of the system 100), shared memory addresses and the data associated with the received shared memory addresses. Each processor of the memory module system may also be configured to map the received shared memory addresses to the corresponding physical memory addresses of the physical memory partitions associated with that processor.
The global shared context (see, for example, global shared context 101) may include the mapping of received shared memory addresses to the corresponding physical memory addresses of the physical memory partitions associated with a processor of the system. Each processor of the memory module system (see, for example, processors 106a to 106b of the system 100) may also be configured to send shared memory addresses, and the data associated with the sent shared memory addresses, to other processors of the system (see, for example, processors 106c to 106d of the system 100), based at least in part on the mapping of the sent shared memory addresses to the corresponding physical memory addresses of the system. The global shared context (see, for example, global shared context 101) may include the mapping of the sent shared memory addresses to the corresponding physical memory addresses of the system associated with a processor (for example, the corresponding physical memory partitions of the system associated with the processor).

In such embodiments and others, each shared memory address of the system (see, for example, memory module systems 100 and 300) may include a bit sequence, and the mapping of shared memory addresses to physical memory addresses may be based at least in part on the values of predetermined bits in the bit sequence (for example, the bit sequence in a mapping scheme). For example, the memory modules of the memory module system may provide an address space shared between the modules and the applications running on the modules and/or the coupled processors, and this shared address space can be part of the global shared context (see, for example, global shared context 101). Address space sharing can be achieved by making the logical addresses global to all modules, where each logical address is associated with a certain physical address of a specific module. Therefore, in some embodiments, the size of the logical address space may equal the sum of the physical address spaces of the modules in the memory module system. For example, if there are eight modules, the logical-to-physical association (or mapping) can be realized by a predetermined first 3-bit group at a predetermined position of the logical address bits, and of the shared memory address associated with the virtual memory address decoded from the code (3 bits provide 2^3, or eight, numbers, with each of the eight modules corresponding to a number). The remainder of the logical and shared address bits, or a portion thereof (for example, a second bit group), can be mapped to a specific physical address within each module using a second mapping scheme. The second mapping scheme may be as simple as a one-to-one mapping, or may be a more complex scheme, such as round-robin scheduling among the banks of each memory device in the module, or interleaving.
In some embodiments, the predetermined bits in the shared address bit sequence may include two or more bit groups (see, for example, FIG. 8). The bit sequence may be part of a mapping scheme, which may in turn be part of a global shared context (see, for example, global shared context 101). A first group of the predetermined bits may provide a mapping to a physical memory partition among the multiple physical memory partitions of the multiple memory modules (see, for example, partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i, and see the first bit group 804 of FIG. 8, which is mapped to the partition 802b), and a second group of the predetermined bits can provide a mapping to a data location within the physical memory partition (see, for example, the second bit group 806 in FIG. 8, which is mapped to a data location in partition 802b). The data location in the partition can be a specific bank, rank, memory array, row, column, cache line, byte, byte sequence, or a combination thereof. In these and other examples, the predetermined bits may include four bit groups (see, for example, FIG. 9). A third group of the predetermined bits may provide a mapping to a cache set that includes one or more physical memory partitions among the multiple physical memory partitions of the multiple memory modules (see, for example, the third bit group 808 in FIG. 9, which is mapped to at least the cache set divided among the partitions 802b and 802c), and the values of a fourth group of the predetermined bits can at least partially provide tag information for the corresponding cache set (see, for example, the fourth bit group 810 in FIG. 9, which provides tag information for the cache set divided at least among the partitions 802b and 802c).
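A minimal sketch of extracting the four predetermined bit groups follows. The field positions and widths are assumptions chosen for illustration; as noted above, the groups need not be adjacent and may change dynamically:

```python
# Sketch of the four predetermined bit groups of a shared memory address,
# as described above. Field positions and widths are illustrative assumptions.

from typing import NamedTuple

class DecodedAddress(NamedTuple):
    partition: int   # first group: selects a physical memory partition
    location: int    # second group: data location within the partition
    cache_set: int   # third group: selects a cache set
    tag: int         # fourth group: tag information for the cache set

def decode(shared_addr: int) -> DecodedAddress:
    return DecodedAddress(
        partition=(shared_addr >> 28) & 0xF,      # assumed 4-bit field
        location=(shared_addr >> 12) & 0xFFFF,    # assumed 16-bit field
        cache_set=(shared_addr >> 6) & 0x3F,      # assumed 6-bit field
        tag=shared_addr & 0x3F,                   # assumed 6-bit field
    )

print(decode(0xA0012345))
```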
In some embodiments, the mapping of the virtual memory addresses of the system (see, for example, systems 100 and 300) to the shared memory addresses of the system is based on a page table. The page table may be part of the global shared context (see, for example, global shared context 101). Each entry in the page table can provide a mapping of a virtual memory address to a shared memory address. The page table can be read and modified by the processors of the system (see, for example, processors 106a to 106b, 106c to 106d, and 206a to 206b), and the page table can be stored in the multiple physical memory partitions of the multiple memory modules (see, for example, partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i). The page table may be at least partially cached by a processor in order to access the most recently or most frequently used page table entries more quickly. In some embodiments, the page table may be implemented as a database (for example, SQL or a custom database). Access to the database entries can be implemented by accelerated hardware, which can be part of a memory controller. The physical memory used to store such databases may be different or separate from the physical memory allocated to the global shared context.

In such embodiments and others, each of the plurality of memory modules (see, for example, memory modules 102a to 102b, memory module 202, and memory modules 302a to 302c) may maintain a corresponding portion of the page table, and the corresponding portion of the page table of a given memory module provides mappings for the physical memory addresses of that memory module. Modification of the page table may be performed via a modifying device that communicates messages to the multiple memory modules (for example, the modifying device may be at least one external controller, such as the external controllers 306a to 306b shown in FIG. 3, or it may be another memory module, a processor of a memory module associated with at least one memory partition of that module, or any other device using the global shared context), and the message may contain the modification. The message may be communicated to the memory modules holding the corresponding portion of the page table to be modified, and each of those memory modules may be configured to send to the modifying device an acknowledgment that the communicated message has been received and that the corresponding modification has been made, or that the modification has been rejected, along with the reason for the rejection. In other examples, silence in response to the modification message may signify agreement, and a receiving module only sends a message on rejection.

Alternatively, each memory module of the plurality of memory modules (see, for example, memory modules 102a to 102b, memory module 202, and memory modules 302a to 302c) may maintain page tables for itself and the other memory modules of the memory module system. In such instances, modification of the page tables may be performed via a modifying device that broadcasts a message to the multiple memory modules (for example, the modifying device may be at least one external controller, such as the controllers 306a to 306b shown in FIG. 3, or it may be another memory module, a processor of a memory module associated with at least one memory partition of that module, or any other device using the global shared context), and each of the multiple memory modules can perform the modification on its copy of the page table. Thus, at least some of the time, the changes a modifying device makes to its own page tables will later be adopted by mutual agreement, so there are fewer conflicts. In the case of a conflict, any device can respond to the message with a rejection and its reason, or with a request for further negotiation.

Applications running on an embodiment of the memory module system (see, for example, memory module systems 100 and 300) may have their own virtual address spaces (for example, virtual address spaces contained in the global shared context; see, for example, global shared context 101). The association between the virtual spaces of the various applications and the logical address space can be implemented through a page table (such as the page tables described herein). Simply put, such tables can provide virtual-to-logical (shared) addressing (for example, through the associated physical address at each module). Also, the page tables can be read and modified by the processors of the memory modules, and the tables can be stored in the modules. Alternatively, a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical (shared) and/or physical addresses can be used. Instances of such an architecture may include set associativity, such as the set associativity used by a set-associative cache. In addition, the association between the virtual spaces of the various applications and the logical (shared) address space can be implemented through a page table, a predetermined architecture and/or algorithm, or a combination thereof.

In an embodiment using a page table, access via a module can be handled by each module that holds a portion of the page table, such that the portion provides only the mapping between the physical addresses of that module and the associated logical addresses of that module.
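The modify-and-acknowledge flow for such a distributed page table can be sketched as follows. The message fields, class names, and ownership check are illustrative assumptions about one possible realization:

```python
# Sketch of the page-table modification protocol described above: a modifying
# device sends a message naming the entry to change, and the module that owns
# that portion of the distributed page table acknowledges or rejects it.
# All names and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ModifyRequest:
    vpn: int       # virtual page number to remap
    new_lpn: int   # new logical (shared) page number

@dataclass
class Reply:
    ok: bool
    reason: str = ""

class MemoryModule:
    def __init__(self, owned_vpns: set[int]):
        self.page_table: dict[int, int] = {}
        self.owned_vpns = owned_vpns  # the portion of the page table this module maintains

    def on_modify(self, req: ModifyRequest) -> Reply:
        if req.vpn not in self.owned_vpns:
            return Reply(ok=False, reason="entry not owned by this module")
        self.page_table[req.vpn] = req.new_lpn
        return Reply(ok=True)  # acknowledgment sent back to the modifying device

module = MemoryModule(owned_vpns={0x00400, 0x00401})
print(module.on_modify(ModifyRequest(vpn=0x00400, new_lpn=0x1F0A0)))  # Reply(ok=True)
```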
Modifications to such a distributed page table can be carried out by broadcasting a message containing the modification from the modifying device (or host processor) to all modules, with each module that holds a portion of the page table responsible for maintaining that portion of the table. Modifications to such a distributed page table can also be made by sending a direct message to the responsible module. In such instances, after the modification is made, or when the update is rejected, a confirmation can be provided to the requesting party by the updated memory module.

Alternatively, each module may have page tables for itself and all other modules in the memory module system. Modifications to this type of global (and always synchronized) page table are carried out by broadcasting the message containing the modification from the modifying module (or host processor) to all other modules of the system, and all modules apply the modification to their own corresponding page tables. In such instances, each module has a copy of the page tables of all the other modules of the memory module system. In such embodiments, confirmation messaging is not used, because the page tables are updated by mutual agreement among the system modules. In some instances, such as when there is an error, an auxiliary message from a module may notify the other modules of the error, and the other modules can then reject the erroneous update synchronously. In the case of an error, the modules can act by mutual agreement to roll back the modification. In instances where it is impossible to roll back the modification, the system may run a shootdown subroutine, such as a translation lookaside buffer (TLB) shootdown.

In some embodiments using a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical and/or physical addresses, each virtual memory address of the memory module system may include a first bit sequence, and each shared memory address of the system may include a second bit sequence. The mapping of the virtual memory addresses of the system to the shared memory addresses of the system may be based at least in part on mapping the first bit sequence to the second bit sequence and mapping the second bit sequence to the first bit sequence. In such instances, the first bit sequence of a virtual memory address of the system is at least partially offset from the second bit sequence of a shared memory address of the system. In addition, the second bit sequence of the shared memory address may be used as the cache address of a cache, and the cache may include a group of physical memory partitions among the multiple physical memory partitions of the multiple memory modules. Furthermore, in some embodiments, the cache is a set-associative cache.

The arrangements of the first bit sequence and the second bit sequence may be offset from each other, or a formula containing an offset may be used. This makes it possible to map the address ranges of shared applications, or of modules shared among many applications. For example, an application or module whose address range is shared may be fixed in the global shared context, but may appear at different addresses in each virtual space in which it is used via sharing; the difference is the offset, or a formula containing the offset. For example, if, in the global shared context, a shared module is mapped to the address range 30-40 and is used by two applications, then using the shared module in the virtual address space of each application maps it through an offset: for the first application, the offset is +100 (yielding 130-140), and for the second application, the offset is +1000 (yielding 1030-1040). In this example, an application using the global shared context can map any range to an available virtual address space range of the application through a simple offset or a formula containing an offset. Since the virtual address space is flexible, the application can find a free mapping range. The application's compiler, interpreter, or hypervisor can provide semantics for integrating offset-based mapping into the application framework.
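The worked offset example above can be written out as a short sketch. The range bounds and offsets are taken directly from the example; everything else is illustrative:

```python
# Sketch of offset-based mapping of a shared range into two applications'
# virtual address spaces, matching the worked example above (shared range
# 30-40, offsets +100 and +1000).

SHARED_RANGE = range(30, 41)  # fixed range in the global shared context

def to_virtual(shared_addr: int, offset: int) -> int:
    """Map a global shared-context address into an application's virtual space."""
    assert shared_addr in SHARED_RANGE
    return shared_addr + offset

print(to_virtual(35, offset=100))   # app 1 sees 135 (its range is 130-140)
print(to_virtual(35, offset=1000))  # app 2 sees 1035 (its range is 1030-1040)
```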
In some embodiments using a predetermined architecture and/or algorithm for how virtual addresses are mapped to logical/shared and/or physical addresses, each memory bank in each memory module of the system (or at least some banks in at least some modules) can perform the role of a set, where the set is identified by a first bit group of the virtual address at a predetermined location, whose number is mapped to the memory bank and module number. For example, if there are 1024 banks per module and 8 modules, there are 8192 sets. Since 8192 = 2^13, 13 bits can be used for the first bit group of the virtual address. The remainder of the virtual address bits, or a part of them (for example, a second bit group), is mapped to a specific physical address in each set. This second group can be, or can contain, a tag group. The tag is stored with the data, and tag matching can be performed to identify whether the data associated with an address is cached in the set.
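A sketch of this set mapping, using the figures above (8 modules, 1024 banks per module, hence 8192 sets and a 13-bit set field), follows. The choice of the low bits for the set index and a single stored tag per set are simplifying assumptions:

```python
# Sketch of the set mapping described above: 8 modules x 1024 banks = 8192
# sets, so a 13-bit group of the virtual address selects the set and the
# remaining bits serve as the tag. Bit positions are illustrative assumptions.

SET_BITS = 13            # 2**13 = 8192 sets
BANKS_PER_MODULE = 1024

def set_and_tag(vaddr: int) -> tuple[int, int, int]:
    set_index = vaddr & ((1 << SET_BITS) - 1)  # first bit group (assumed low bits)
    tag = vaddr >> SET_BITS                    # second group, stored with the data
    module_id, bank = divmod(set_index, BANKS_PER_MODULE)
    return module_id, bank, tag

def lookup(stored_tags: dict[tuple[int, int], int], vaddr: int) -> bool:
    """Tag match: is the data for vaddr cached in its set?"""
    module_id, bank, tag = set_and_tag(vaddr)
    return stored_tags.get((module_id, bank)) == tag

tags: dict[tuple[int, int], int] = {}
module_id, bank, tag = set_and_tag(0x12345678)
tags[(module_id, bank)] = tag    # "install" the data's tag in its set
print(lookup(tags, 0x12345678))  # True: the tag matches, so the data is cached
```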
In such embodiments, a large cache may be formed in the memory module system, and such a cache can store a relatively large amount of data; for example, the cache can store the memory capacity of the memory partitions of multiple memory modules. In such instances, serial-attached SCSI (SAS), SATA, M.2, or PCIe-attached solid-state drives (SSDs) can be coupled to such a cache. Furthermore, in such instances, all or most processes can run in memory, with the executing applications completely cached in the large cache. In addition, each cache set, or at least some of the sets, can migrate or cache data from other cache sets. Migrating or caching data from other cache sets can be done by changing the first or second bit-group association (for example, the bit positions) for a specific cached virtual context.

In such embodiments and others, the global shared context (see, for example, global shared context 101) may include the virtual address memory spaces of multiple applications. The virtual address memory spaces of the multiple applications may include shared memory addresses of forked processes of the multiple applications or of merged processes of the multiple applications. A forked process is created by forking a process from at least one parent, and a merged process is created by merging processes from at least two parent processes. In addition, the multiple memory modules of the system (see, for example, memory modules 102a, 102b, 202, and 302a to 302c) may include multiple synchronization primitives for synchronizing the memory access operations of the multiple applications across the memory modules.

In some embodiments, because the global shared context (see, for example, global shared context 101) is in effect, a large amount of sharing can take place, and the applications using the system can be composed of various components, shared libraries, and the like. This is especially true in instances where applications have the same origin or root process. Therefore, when forking a process, instead of copying the context, the context can be extended by preserving sharing in the logical address space that supports the global context. Since the context is global, the processors associated with the memory modules do not require context switching among many applications. The virtual contexts can be cached and maintained in the memory module system (in contrast to context switching by a centralized processor architecture). The virtual contexts can be executed by the multiple processors of the memory modules, with the aid of synchronization primitives and addressing, in the logical (shared) space. Even if a single application context is distributed among several memory modules, it is possible to execute the application context synchronously via such a mechanism.

In addition, in such embodiments and others, parts of a graphics processing pipeline may be distributed among the multiple memory modules. In some embodiments, graphics pipelines (for example, graphics pipelines for geometry, projection, lighting, clipping, rasterization, shading, screen streaming, and/or other functions) can be distributed among several memory modules of the system. In some embodiments, since each memory module may include a SoC with a GPU, the pipeline may be executed via SIMD operations and/or via data exchange using high-bandwidth wired and/or wireless interconnections among the modules.

In some embodiments, in order to efficiently execute task-level parallelism (for example, multiple applications), each processor on each memory module may only move between contexts cached in its memory; therefore, each processor can continuously run the byte code of applications in the logical (shared) space. In this sense, the OS of the device and the running applications can be merged together, thereby representing a global shared context (see, for example, global shared context 101). The value of the shared context lies in it being kept in memory, especially in non-volatile memory, where it continuously evolves and is maintained as the user uses the device or the system that includes the memory modules.

In some embodiments, the system may include multiple memory modules, and each memory module of the multiple memory modules may be configured to execute program code distributed among the multiple memory modules and associated with at least one program. In such instances, each memory module of the plurality of memory modules may include a plurality of physical memory partitions, and each partition of the plurality of physical memory partitions may be associated with at least one physical memory address.
And, in such instances, each memory module of the plurality of memory modules may include at least one processor, and each processor of the system may be associated with at least one of the plurality of physical memory partitions.

In such and other embodiments, each processor of the system may be configured to execute code based at least in part on the locality of virtual memory accesses to the multiple physical memory partitions, and to access the physical memory of the system based on the virtual memory addresses decoded from the code in relation to the memory accesses. And each processor of the system can be configured to map each of the virtual memory addresses to a shared memory address that is associated with the physical memory of the multiple memory modules and shared among the multiple memory modules.

In some embodiments, the program code has a copy at each processor of each memory module. If, at a certain time, the code is requesting access to the physical memory of a memory partition associated with a first processor, then that processor may run that portion of the code. If, after a certain time, the program code is requesting access to the physical memory of another memory partition associated with another processor, then the first processor can communicate the program counter and related data to the other processor so that the second processor continues execution, based on the locality of virtual memory accesses to the multiple physical memory partitions. In some embodiments, a first set of processors may be used in place of the first processor, and another set of processors may be used in place of the other processor; the first group and the other group may overlap.
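The locality-driven hand-off described above might look like the following sketch. The module-select rule (top 3 bits of a 32-bit shared address) and all class and method names are illustrative assumptions:

```python
# Sketch of the locality-driven hand-off described above: when the running
# code accesses memory owned by a different module's processor, the current
# processor forwards the program counter and related data so that the other
# processor continues execution near the data. All names are illustrative.

class Processor:
    def __init__(self, module_id: int):
        self.module_id = module_id

    def context(self) -> dict:
        return {"registers": {}, "module": self.module_id}  # placeholder state

    def receive(self, pc: int, ctx: dict) -> None:
        print(f"module {self.module_id}: resuming at pc={pc:#x}")

MODULE_SHIFT = 29  # assumption: top 3 bits of a 32-bit shared address pick the module

def maybe_migrate(current: Processor, pc: int, shared_addr: int,
                  processors: list[Processor]) -> Processor:
    """Continue locally if the access targets this module, else hand off."""
    target = processors[shared_addr >> MODULE_SHIFT]
    if target is not current:
        target.receive(pc, current.context())  # forward program counter + data
    return target

procs = [Processor(i) for i in range(8)]
maybe_migrate(procs[0], 0x400, 0xA0012345, procs)  # access owned by module 5
```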
Additionally, in such and other embodiments, each processor of the system may be configured to receive shared memory addresses, and data associated with the received shared memory addresses, from other processors and memory modules of the system, and to map the received shared memory addresses to the corresponding physical memory addresses of the physical memory partitions associated with that processor. In addition, each processor of the system may be configured to send shared memory addresses, and data associated with the sent shared memory addresses, to other processors of the system, based at least in part on the mapping of the sent shared memory addresses to the corresponding physical memory addresses of the system. In such instances, at least one memory module of the plurality of memory modules may include a part of a graphics processing pipeline distributed among the plurality of memory modules.

Additionally, at least some of the embodiments disclosed herein include systems with multiple such memory modules. More specifically, at least some of the embodiments disclosed herein include a memory module having multiple memory chips, at least one controller (for example, a CPU or dedicated controller), and at least one interface device configured to communicate input and output data of the memory module. The input and output data bypass at least one processor (for example, the CPU) of the computing device in which the memory module is installed. And at least one interface device may be configured to communicate the input and output data to at least one other memory module in the computing device. In addition, the memory module may be one of a plurality of memory modules of a memory module system.

In some embodiments, the memory module system can be designed such that when a memory module is added to the system, the system grows by the size of one memory module, adding more memory partitions, the processors associated with those partitions, and additional memory bus bandwidth.

In some embodiments, the memory module may be or include a DIMM, a SO-DIMM, a registered DIMM (RDIMM), a mini-RDIMM, a socketed memory stack, a socketed system package, or another type of memory package-on-package (PoP). Also, in some embodiments, the memory module may be configured to include dedicated chips, such as a GPU, an artificial intelligence (AI) accelerator, and/or a processing-in-memory (PIM) unit. In addition, in some embodiments, the memory module can output results to a peripheral device (for example, a display or another type of user interface) through a wired connection, a wireless connection, or a combination thereof, without passing through the memory bus between the processor and the memory module. For example, in some embodiments, the memory module can output results to the peripheral device through a wired or wireless connection without passing through the memory bus between the memory module and the main processor of the computing device hosting the memory module. Such memory modules, and the other memory modules disclosed herein, can accelerate the processing of a graphics pipeline (for example, data processing for geometry, projection, lighting, clipping, rasterization, shading, screen streaming, and so on). Moreover, a system with multiple such memory modules communicating with each other can further speed up the processing of the graphics pipeline.

Figure 2 shows a memory module 202 that is somewhat similar to memory module 102a or 102b. In addition, FIG. 2 shows that the memory module 202 has multiple memory chips (see, for example, memory chips 204a, 204b, and 204c). Each of the memory chips in the module 202 includes multiple physical memory partitions (see, for example, partitions 205a to 205i). The memory module 202 also has at least one processor that can at least partially implement the global shared context 101 (see, for example, processors 206a and 206b). In addition, at least some of the partitions in the memory module 202 (and the partitions in the memory modules 102a and 102b in FIG. 1) may partially implement the global shared context 101 (see, for example, partitions 205a to 205c and partition 205e). As shown, different embodiments of the memory module 202 may have one processor (e.g., processor 206a), two processors (e.g., processors 206a and 206b), or more than two processors. It should be understood that the dashed boxes represent optional components. In addition, it should be understood that embodiments of the memory module 202 may have two memory chips or more than two memory chips.

The memory described herein, such as the memory of a memory module, may include various types of memory. For example, such memory may include flash memory with flash memory cells. In addition, such memory may include dynamic random access memory (DRAM), including DRAM cells. Such memory may also include non-volatile random access memory (NVRAM), including NVRAM cells; the NVRAM cells may include 3D XPoint memory cells. In addition, the memory cells may be any of various types of cells, for example, cells having a ferroelectric element.
In addition, a cell may be a ferroelectric transistor random access memory (FeTRAM) cell. A memory cell may also have at least one of a transistor, a diode, a ferroelectric capacitor, or a combination thereof.

The memory module 202 is also shown as having at least one interface device (see, for example, interface devices 208a and 208b). As shown, different embodiments of the memory module 202 may have one interface device (e.g., interface device 208a), two interface devices (e.g., interface devices 208a and 208b), or more than two interface devices. And, as mentioned, it should be understood that the dashed boxes represent optional components. The at least one interface device (see, for example, interface devices 208a and 208b) may be configured to communicate input and output data, including data related to the global shared context, for the memory module 202. The input and output data can bypass the processor (for example, the main processor) of the system in which the memory module 202 is installed (for example, see how interface devices 208a and 208b are connected via connectors 218a and 218b to other devices 214 of the system in which the memory module 202 is installed, bypassing the one or more processors 212 of that system). In some embodiments, as shown in FIG. 2, the input and output data bypass the data bus (e.g., the main data bus) of the system in which the memory module 202 is installed (for example, see how interface devices 208a and 208b are connected via connectors 218a and 218b to other devices 214 of the system, bypassing the one or more buses 216 of the system). It should be understood that the dashed connectors represent optional connectors.

The memory module 202 is also shown as having a bus 210 (which may include multiple buses) that connects the multiple memory chips (see, for example, memory chips 204a, 204b, and 204c), the processors (see, for example, processors 206a and 206b), and the interface devices (see, for example, interface devices 208a and 208b). The bus 210 may be part of the bus of the system in which the memory module is installed (see, for example, the one or more buses 216) that connects the memory module 202 to the rest of that system. As shown by the dashed portion of the bus 210 that connects the memory module to the one or more buses 216 and the rest of the system, in some embodiments the bus 210 may be separate from the one or more buses 216, and in other embodiments the bus 210 may be connected to the one or more buses 216. It should be understood that the dashed connectors represent optional connectors. One or more of the processors of the memory module 202 (see, for example, processors 206a and 206b) can arbitrate the data communicated via the bus 210, including data related to the global shared context, with the arbitration bypassing the connectors of the one or more buses 216 (see, for example, connectors 218a and 218b).

The interface devices and other interface devices mentioned herein may include one or more network interface devices, links, buses, ports, peer-to-peer links, or any combination thereof.

In some embodiments, the memory module 202 may implement a global shared context (see, for example, global shared context 101).
Generally speaking, the global shared context includes multiple instances of the memory modules 102a, 102b, and/or 202 communicating with each other via their interface devices. The global shared context can be beneficial to graphics processing and graphics applications, including processing that uses SIMD or vector processing concepts, because large amounts of memory are useful and data processing close to the memory can improve graphics processing. In such embodiments and others, the interface devices (see, for example, interface devices 208a and 208b) may be configured to communicate input and output data to at least one other instance of the memory modules installed in the same system.

In some embodiments, the memory module 202 or another memory module described herein, the processor 206a or another processor or controller described herein, the interface device 208a or another interface device described herein, the memory chips 204a, 204b, and 204c or other memory chips described herein, or any combination thereof, may be part of a SoC, a system-on-package (SoP) such as a plug-in chipset system, or a heterogeneous die stack. All these embodiments represent tightly integrated IP blocks and chips that do not necessarily require PCBs for coupling to each other and to the rest of the system. Embodiments that include or are part of a SoC may include one or more GPUs, one or more other types of dedicated processors, and/or one or more PIM units. Embodiments that include or are part of a SoC may include a processor that includes or is connected to a memory controller, a display sink (for example, HDMI, DisplayPort, or a wireless display interface), radios for wireless interfaces or networks, AI engines or accelerators, neuromorphic processors, scaling processors, vector processors, CPU cores, and so on. In such cases, the global shared context provides a framework for applications to use these devices in an integrated and shared manner.

Although not shown in FIG. 2, the memory module 202 may also include a plurality of electrical contacts, and it may include a PCB configured for insertion into at least one memory slot of a motherboard. In such embodiments, the multiple memory chips (see, for example, memory chips 204a, 204b, and 204c) may be coupled to the PCB, with multiple electrical contacts on each side of the PCB. In addition, the processors (see, for example, processors 206a and 206b) and the interface devices (see, for example, interface devices 208a and 208b) can be coupled to the PCB.

In some embodiments, the processors (see, for example, processors 206a and 206b) may be, include, or be part of at least one dedicated controller. A dedicated processor or controller may be, include, or be part of: a GPU, an AI accelerator, an NPU, another type of dedicated controller, a PIM unit, or any combination thereof. Such devices can be unified via the global shared context, which can accelerate large-scale applications such as neural networks, big data applications, and machine learning.

In some embodiments, the interface devices (see, for example, interface devices 208a and 208b) may include at least one wireless interface device that communicates at least partially wirelessly, or may include on-chip optical interconnects that provide optical communication between chips; other parts of an interface device can communicate via wires.
The interface device may also be a hybrid interface device with multiple capabilities and/or channels and channel types. The interface device may be, include, or be part of a network interface device (for example, a wireless network interface device). The interface device may include at least one wireless interface device and/or wired link, and may be configured to communicate via one or more wired and/or wireless networks, peer-to-peer links, ports, buses, and the like. Therefore, messages and data being exchanged in relation to the global shared context can use this type of interface.

In some embodiments, the memory module 202 may include a first connector configured to connect the plurality of memory chips (e.g., memory chips 204a, 204b, and 204c) to at least some of the plurality of electrical contacts so that the input and output data of the multiple memory chips can be communicated to the processor of the computing device in which the memory module 202 is installed (for example, the main processor of the computing device). The memory module 202 may also include a second connector configured to connect the plurality of memory chips to the processor (see, for example, processors 206a and 206b). The memory module 202 may also include one or more third connectors configured to connect the processor to the interface device (see, for example, interface devices 208a and 208b) so that the interface device receives input data for the processor from other devices and communicates output data of the processor to other devices via a communication path, with the output data bypassing the processor of the computing device in which the memory module 202 is installed. This type of connector can be used with a global shared context.

In some embodiments, wireless communication may be performed among multiple memory modules installed in a system. For example, wireless receivers can allow data communication between modules that are aligned in space and in very close proximity (similar to DIMMs installed in PC boards). This can speed up this type of communication. Specifically, in some embodiments, terahertz (THz) wireless communication can provide a speed of 100 Gb/sec. Therefore, in such instances, in-chip or in-module THz radiation can support a large amount of data exchange between the memory modules disclosed herein, which can be used to at least partially implement page table operations and other data exchanges of the global shared context.

FIG. 3 illustrates an example memory module system 300 according to some embodiments of the present disclosure. The memory module system 300 may include the memory module system 100, be a part of the memory module system 100, or be the memory module system 100, and it may at least partially implement a global shared context. As depicted in FIG. 3, the memory module system 300 includes multiple memory modules (see, for example, memory modules 302a, 302b, and 302c). Also, each of the memory modules may include multiple memory chips (although not depicted in FIG. 3). Each of the plurality of memory modules (see, for example, memory modules 302a, 302b, and 302c) may be memory module 102a, 102b, or 202. The memory module system 300 may include at least one external controller (for example, see external controllers 306a and 306b) and at least one interface device (for example, see interface devices 308a and 308b).
The memory module system 300 is shown as having a bus 310 (which may include multiple buses) that connects the multiple memory modules (see, for example, memory modules 302a, 302b, and 302c), the external controllers (see, for example, external controllers 306a and 306b), and the interface devices (see, for example, interface devices 308a and 308b).

As shown, different embodiments of the memory module system 300 may have one interface device (e.g., interface device 308a), two interface devices (e.g., interface devices 308a and 308b), or more than two interface devices. And, as mentioned, it should be understood that the dashed boxes represent optional components. The interface devices (see, for example, interface devices 308a and 308b) may be configured to communicate input and output data for each memory module of the memory module system 300. The input and output data can bypass the processor (for example, the main processor) of the host system in which the memory module system 300 is installed (for example, interface devices 308a and 308b are connected via connectors 318a and 318b to other devices 314 of the host system, bypassing one or more processors 312 of the host system). The input and output data can be related to data used by applications via the global shared context.

In some embodiments, as shown in FIG. 3, the input and output data bypass the data bus (for example, the main data bus) of the host system in which the memory module system 300 is installed (for example, interface devices 308a and 308b are connected to other devices 314 of the system via the connectors 318a and 318b, bypassing the bus 316 of the system, which may include multiple buses). It should be understood that the dashed connections represent optional connections. The global shared context can take advantage of bus bypass to speed up some key operations.

In addition, the bus 310 may be a part of the bus of the host system in which the memory module system 300 is installed (for example, see bus 316) that connects the memory module system 300 to the rest of the host system. As shown by the dashed portion of the bus 310 connecting the memory module system to the bus 316 and the rest of the system, in some embodiments the bus 310 may be separate from the bus 316, and in other embodiments the bus 310 may be connected to the bus 316. It should be understood that the dashed connections represent optional connections. One or more of the external controllers of the memory module system 300 (see, for example, controllers 306a and 306b) can arbitrate the data communicated via the bus 310 and the connectors that bypass the bus 316 (see, for example, connectors 318a and 318b). The data may include at least part of the data used to implement the global shared context, such as data used and processed by processors that exchange messages and perform memory accesses to memory partitions.

As shown, the external controllers (see, for example, external controllers 306a and 306b) are separate from the plurality of memory modules in the memory module system 300 (see, for example, memory modules 302a, 302b, and 302c). In some embodiments of the memory module system 300, at least one external controller may be configured to coordinate calculations by the controllers or processors of the multiple memory modules (for example, see processors 106a and 106b and memory modules 102a, 102b, 202, and 302a to 302c).
These calculations may be related to calculations performed by the processors as part of the global shared context. In addition, the external controller may be configured to coordinate communication by the interface devices of the multiple memory modules (see, for example, interface devices 208a and 208b and memory modules 102a, 102b, 202, and 302a to 302c).

In addition, as shown, the interface devices (see, for example, interface devices 308a and 308b) may be separate from the plurality of memory modules in the memory module system 300 (see, for example, memory modules 302a, 302b, and 302c). The interface devices of the memory module system 300 (see, for example, interface devices 308a and 308b) may each include a wireless interface device that communicates at least partially wirelessly, or may include an on-chip optical interconnect that provides optical communication between chips. Another part of the interface devices of the memory module system 300 may communicate via wires. The interface devices of the memory module system 300 may also be hybrid interface devices having multiple capabilities and/or channels and channel types. An interface device of the memory module system 300 may be, include, or be part of a network interface device (for example, a wireless network interface device). The interface devices of the memory module system 300 may include wireless interface devices and/or wired links, and may be configured to communicate via one or more wired and/or wireless networks, peer-to-peer links, ports, buses, and the like. Therefore, such interface devices can provide enhanced connections (e.g., faster connections) for implementing a global shared context.

In addition, the multiple memory modules (see, for example, memory modules 302a, 302b, and 302c) may be multiple different types of memory structures. For example, the multiple memory modules may be, be part of, or include one or more of the following: one or more DIMMs, one or more SO-DIMMs, one or more RDIMMs, one or more mini-RDIMMs, one or more socketed memory stacks, one or more socketed systems-on-package or another type of PoP for memory, different types of memory structures or modules, or any combination thereof. Such modules can be integrated into a system using a global shared context.

In addition, each memory module described herein can be a different type of memory structure. For example, a memory module described herein can be, be part of, or include any of the following: a DIMM, an SO-DIMM, an RDIMM, a mini-RDIMM, a socketed memory stack, a socketed system-on-package, or another type of PoP for memory.

For example, in some embodiments of the memory module system 300, the system may include multiple DIMMs. And, each DIMM of the plurality of DIMMs may include a PCB configured for insertion into a memory slot of an additional PCB separate from the plurality of DIMMs. In addition, each DIMM of the plurality of DIMMs may include a plurality of memory chips coupled to the PCB, a plurality of electrical contacts on each side of the PCB, at least one controller (e.g., at least one dedicated controller) coupled to the PCB, and at least one interface device configured to communicate the input and output data of the DIMM. The input and output data bypass the processor of the computing device in which the DIMM and the system are installed. And, in such embodiments of the system 300 with DIMMs, the at least one interface device may be configured to communicate input and output data to at least one other DIMM of the plurality of DIMMs.
Such data can be part of the global shared context.

In addition, in such embodiments of the system 300 with DIMMs, at least one external controller is separate from the multiple DIMMs and can be configured to coordinate calculations by the dedicated controllers of the multiple DIMMs. The at least one external controller may also be configured to coordinate communication by the interface devices of the multiple DIMMs. Also, in such embodiments, the additional PCB is separate from the multiple DIMMs and may include multiple memory slots configured to receive the multiple DIMMs. In addition, the external controller may be coupled to the additional PCB, the additional PCB may be a motherboard, and the external controller may include a CPU or another type of processor, such as a dedicated controller. Such multiple DIMMs can run at least part of the global shared context.

In some embodiments, the at least one controller of each DIMM of the plurality of DIMMs may be a dedicated controller. For example, the controller may be, be part of, or include any of the following: a GPU, an AI accelerator, an NPU, another type of dedicated controller, a PIM unit, or any combination thereof. It should be understood that the aforementioned devices and other parts described with respect to FIGS. 1 to 3 can use a global shared context to unify such devices and parts and to accelerate large-scale applications such as neural networks, big data applications, and machine learning.

FIG. 4 illustrates an example networked system 400 including at least computing devices 402, 422a, 422b, 422c, and 422d according to some embodiments of the present disclosure. In addition, FIG. 4 illustrates an example portion of an example computing device 402, where the computing device is part of the networked system 400. And, FIG. 4 shows how such computing devices can be integrated into various machines, equipment, and systems, such as IoT devices, mobile devices, communication network devices and equipment (for example, see base station 430), devices (for example, see device 440), and vehicles (for example, see vehicle 450). It should be understood that the parts and devices described in FIG. 4 can use a global shared context to unify such devices and parts and to speed up large-scale applications, such as neural networks, big data applications, and machine learning, used among the devices and parts.

The computing device 402 and other computing devices of the networked system 400 (see, for example, computing devices 422a, 422b, 422c, and 422d) may be communicatively coupled to one or more communication networks 420. The computing device 402 includes at least a bus 406, a controller 408 (such as a CPU), a memory 410, a network interface 412, a data storage system 414, and other components 416 (which can be any type of components found in mobile or computing devices, such as GPS components, I/O components such as various types of user interface components, sensors, and cameras). The memory 410 may include the memory modules 102a, 102b, and/or 202 and/or the memory module system 100 and/or 300. The other components 416 may include one or more user interfaces (e.g., a GUI, an auditory user interface, a tactile user interface, etc.), displays, different types of sensors, tactile, audio and/or visual input/output devices, additional dedicated memory, one or more additional controllers (e.g., a GPU), or any combination thereof.
The bus 406 communicatively couples the controller 408, the memory 410, the network interface 412, the data storage system 414, and the other components 416, and may couple such components to a second memory 418 in some embodiments.

The computing device 402 includes a computer system that includes at least the controller 408, the memory 410 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), static random access memory (SRAM), cross-point memory, crossbar memory, etc.), and the data storage system 414, which communicate with each other via the bus 406 (which may include multiple buses). In some embodiments, the second memory 418 may not communicate via the bus 406.

In other words, FIG. 4 contains a block diagram of a computing device 402 having a computer system in which embodiments of the present disclosure can operate. In some embodiments, the computer system may include a set of instructions that, when executed, cause a machine to at least partially perform any one or more of the methods discussed herein. In such embodiments, the machine may be connected (e.g., networked via the network interface 412) to other machines in a LAN, an intranet, an extranet, and/or the Internet (e.g., see network 420). The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The controller 408 represents one or more general-purpose processing devices, such as a microprocessor, a CPU, and so on. More precisely, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a single instruction multiple data (SIMD) processor, a multiple instruction multiple data (MIMD) processor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The controller 408 may also be one or more dedicated processing devices, such as an ASIC, programmable logic (such as an FPGA), a digital signal processor (DSP), a network processor, and so on. The controller 408 is configured to execute instructions for performing the operations and steps discussed herein. The controller 408 may further include a network interface device, such as the network interface 412, to communicate via one or more communication networks (such as the network 420).

The data storage system 414 may include a machine-readable storage medium (also referred to as a computer-readable medium) on which are stored one or more sets of instructions or software embodying any one or more of the methods or functions described herein. The data storage system 414 may have execution capabilities; for example, it may at least partially execute instructions residing in the data storage system. The instructions may also completely or at least partially reside in at least one of the memory 410 and/or the controller 408 during their execution by the computer system, and at least one of the memory 410 and the controller 408 may also constitute a machine-readable storage medium. The memory 410 may be or include the main memory of the computing device 402.
The memory 410 may have execution capabilities; for example, it may at least partially execute instructions residing in the memory.

As mentioned, the networked system 400 includes computing devices, and each of the computing devices may include one or more buses, controllers, memories, network interfaces, storage systems, and other components. In addition, each of the computing devices shown in FIG. 4 and described herein may include or be part of a mobile device or the like, for example, a smartphone, a tablet computer, an IoT device, a smart TV, a smart watch, glasses or other smart home appliances, an in-vehicle information system, a wearable smart device, a game console, a PC, a digital camera, or any combination thereof. As shown, the computing devices can be connected to a network 420, which includes at least a device-local network such as Bluetooth, a wide area network (WAN), a local area network (LAN), an intranet, a mobile wireless network such as 4G or 5G, an extranet, the Internet, and/or any combination thereof.

In some embodiments, as shown by the dashed connection 418, the memory 410 may include at least one network interface so that the memory can communicate with other devices via the communication network 420 separately. For example, a memory module or memory module system of the memory 410 (see, for example, memory modules 102a, 102b, and 202 and memory module systems 100 and 300) may have its own network interface so that such a component can communicate with other devices via the communication network 420 separately.

Each of the computing devices described herein may be or be replaced by a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, network equipment, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be performed by that machine.

In addition, although a single machine is described for the computing device 402 shown in FIG. 4, the term "machine" should also be considered to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods or operations discussed herein. Moreover, each of the described computing devices and computing systems may each include at least a bus and/or a motherboard, one or more controllers (such as one or more CPUs), a main memory that may include temporary data storage, at least one type of network interface, a storage system that may include permanent data storage, and/or any combination thereof. In some multi-device embodiments, one device can complete some parts of the methods described herein and then send the completed result to another device via the network, so that the other device can continue with the other steps of the methods described herein.

Although the memory, controller, and data storage parts are shown in the example embodiments as each being a single part, each part should be considered to include a single part or multiple parts that can store instructions and perform their respective operations. The term "machine-readable storage medium" should also be considered to include any medium capable of storing or encoding a set of instructions for execution by a machine and causing the machine to perform any one or more of the methods of the present disclosure.
Therefore, the term "machine-readable storage medium" should be considered to include, but not be limited to, solid-state memory, optical media, and magnetic media.

FIGS. 5 to 7 illustrate flowcharts of example methods 500, 600, and 700 that may be performed by aspects of the memory modules depicted in FIGS. 1 to 4 according to some embodiments of the present disclosure. For example, each of the methods 500, 600, and 700 may be executed by a processor of a memory module disclosed herein.

In FIG. 5, the method 500 starts at step 502 by activating a global context used by at least one program in at least one memory module (for example, see the global shared context 101 shown in FIGS. 1 and 2). The at least one memory module may include multiple physical memory partitions (for example, see partitions 104a to 104c, partitions 104d to 104f, and partitions 205a to 205i shown in FIGS. 1 and 2, respectively), and each of the multiple physical memory partitions can be associated with at least one physical memory address. The at least one memory module may also include at least one processor (for example, see processors 106a to 106b, 106c to 106d, and 206a to 206b), and the at least one processor may be associated with at least one of the plurality of physical memory partitions.

In such embodiments and other embodiments, the at least one processor may be configured to execute code and access the physical memory of a memory module system having the at least one memory module based on virtual memory addresses decoded in the code associated with the memory accesses. The at least one processor may be configured to translate and map each of the virtual memory addresses to a shared memory address that is associated with the physical memory of the memory module system and shared among the memory modules of the memory module system. The at least one processor may be configured to receive shared memory addresses and data associated with the received shared memory addresses from other processors and memory modules of the memory module system. The at least one processor may be configured to translate and map the received shared memory addresses to corresponding physical memory addresses of the physical memory partitions of the memory module system associated with the at least one processor. And, the at least one processor may be configured to send shared memory addresses and data associated with the sent shared memory addresses to other processors of the system based at least in part on determining the mapping of the sent shared memory addresses to the corresponding physical memory addresses of the system.

At step 504, the method 500 continues with distributing the code of the at least one program among the at least one memory module according to the activated global context. The global context (see, for example, the global shared context 101) can be used by the operating system of the device executing the at least one program, and the global context can be modified at least according to user interaction with the device. In some embodiments, the method 500 may include distributing at least a portion of a graphics processing pipeline among the at least one memory module.

At step 506, the method 500 continues with executing each part of the code at least in part according to the locality of the virtual memory accesses of the program code to the at least one memory module.
At step 508, the method 500 continues with accessing the physical memory of the at least one memory module based on the virtual memory addresses decoded in the code associated with the memory accesses.

In FIG. 6, the method 600 starts at step 602 with executing code by a processor of a memory module of a system and accessing the physical memory of the system based on virtual memory addresses decoded in the computer program code associated with the memory accesses of the system. At step 604, the method 600 continues with the processor of the memory module mapping each of the virtual memory addresses to a shared memory address associated with the physical memory of the memory modules of the system and shared among the memory modules of the system. At step 606, the method 600 continues with the processor of the memory module receiving shared memory addresses and data associated with the received shared memory addresses from other processors and memory modules of the system.

At step 608, the method 600 continues with the processor of the memory module mapping the received shared memory addresses to corresponding physical memory addresses of the physical memory partitions of the system associated with the processor. At step 610, the method 600 continues with the processor of the memory module sending shared memory addresses and data associated with the sent shared memory addresses to other processors of the system based at least in part on the mapping of the sent shared memory addresses to the corresponding physical memory addresses of the system.

In FIG. 7, the method 700 starts at step 702 with distributing a global context used by a computer program among the memory modules of a memory module system. Step 702 may include step 704, in which the method 700 continues with a memory module of the system receiving shared memory addresses from other memory modules of the system. Step 702 may also include step 706, in which the method 700 continues with the memory module of the system sending shared memory addresses to other memory modules of the system.

At step 708, the method 700 continues with mapping virtual memory addresses (decoded in the program code associated with the memory accesses) to shared memory addresses associated with the physical memory of the system and shared among the memory modules, according to the global context.

At step 710, the method 700 continues with distributing the code of at least one program among the at least one memory module according to the distributed global context (for example, via the mapping). Step 710 may include step 712, in which the method 700 continues with the memory module of the system receiving data associated with the received shared memory addresses from other memory modules of the system. Step 710 may include step 714, in which the method 700 continues with the memory module of the system sending data associated with the sent shared memory addresses to other memory modules of the system.

At step 716, the method 700 continues with executing each part of the code at least partially according to the locality of the virtual memory accesses of the program code to the memory modules of the system.
At step 718, the method 700 continues with accessing the physical memory of the memory module based on the virtual memory addresses decoded in the code associated with the memory accesses.

In some embodiments, it should be understood that the steps of the methods 500, 600, and 700 can be implemented as a continuous process; for example, each step can run independently by monitoring input data, performing its operations, and outputting data to the subsequent step. Also, the steps of each method can be implemented as a discrete-event process; for example, each step can be triggered by the event it is meant to be triggered on and produce a certain output. It should also be understood that each of FIGS. 5 to 7 represents a minimal method within possible larger methods of a computer system that is more complex than the methods partially presented in FIGS. 1 to 4. Therefore, the steps depicted in each of FIGS. 5 to 7 can be combined with other steps that provide access to other steps associated with larger methods of more complex systems.

FIGS. 8 to 9 illustrate, according to some embodiments of the present disclosure, example physical memory partitions (see, for example, partition group 801, which includes partitions 802a, 802b, 802c, and 802d) and the bit groups of a shared memory address (for example, see shared memory address 803) that map to at least one partition and at least one data location within the partition. More specifically, the predetermined bits of the shared memory address 803 include two or more bit groups (for example, see the first bit group 804 and the second bit group 806). The first bit group 804 may provide a mapping to a physical memory partition among the multiple physical memory partitions of the multiple memory modules described herein (for example, see partition 802b, which is mapped to by the first bit group 804). The second bit group 806 may provide a mapping to a data location within a physical memory partition (see, for example, partition 802b, which contains the data location mapped to by the second bit group 806).

Specifically, in FIG. 9, the predetermined bits of the shared memory address 803 include four bit groups. The third bit group 808 may provide a mapping to a cache set that includes one or more of the multiple physical memory partitions of the multiple memory modules described herein (for example, see the third bit group 808, which provides a mapping to at least the cache set associated with the partitions 802b and 802c). The cache set may be distributed across multiple partitions at certain memory locations (e.g., at certain arrays, banks, rows, or columns). And, the value of the fourth bit group 810 can at least partially provide tag information for the corresponding cache set. The tag information in the fourth bit group 810 may provide a tag for determining whether a page or a cache line is present in the cache. Tag-matching hardware can perform a tag lookup, and if it finds the tag, then the data (e.g., the data in the page or cache line) is present or cached in the cache. If the data is not in the cache, the data may need to be accessed in the backing storage device. The tag-matching hardware can include multiple comparators or look-up tables (LUTs), or include dedicated memory elements that can provide matching functions. A sketch of this style of address decoding and tag matching is given below.
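To make the decoding concrete, the following is a minimal sketch in C of how the bit groups of FIGS. 8 and 9 could be extracted from a shared memory address and used to route an access. The field widths, bit positions, and helper names are illustrative assumptions for this sketch, not values taken from this disclosure.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed field layout for a 32-bit shared memory address. */
#define PART_SHIFT 28u   /* first bit group 804: partition select        */
#define PART_MASK  0xFu
#define SET_SHIFT  20u   /* third bit group 808: cache-set select        */
#define SET_MASK   0xFFu
#define TAG_SHIFT  12u   /* fourth bit group 810: tag for the cache set  */
#define TAG_MASK   0xFFu
#define OFF_MASK   0xFFFu /* second bit group 806: location in partition */

struct shared_fields {
    uint32_t partition, set, tag, offset;
};

static struct shared_fields decode_shared(uint32_t addr)
{
    struct shared_fields f;
    f.partition = (addr >> PART_SHIFT) & PART_MASK;
    f.set       = (addr >> SET_SHIFT) & SET_MASK;
    f.tag       = (addr >> TAG_SHIFT) & TAG_MASK;
    f.offset    = addr & OFF_MASK;
    return f;
}

/* Tag lookup over a hypothetical per-set tag store: a match means the page
 * or cache line is present in the cache set; a miss means the data must be
 * fetched from the backing storage device. */
static bool tag_hit(const uint32_t *set_tags, unsigned ways, uint32_t tag)
{
    for (unsigned w = 0; w < ways; w++)
        if (set_tags[w] == tag)
            return true;
    return false;
}

/* Routing decision in the spirit of the methods of FIGS. 5 to 7: access the
 * local physical memory if this module owns the partition, otherwise forward
 * the shared address (and any data) to the module that owns it. */
static void access_shared(uint32_t addr, uint32_t local_partition)
{
    struct shared_fields f = decode_shared(addr);
    if (f.partition == local_partition) {
        /* map to a local physical address and access the memory chips */
    } else {
        /* send the shared memory address to the owning memory module */
    }
}
```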
The second bit group 806 may provide the mapping to the data location in the partition (and, more specifically, within the cache set) after tag matching (for example, tag matching using the tag information provided by the fourth bit group 810).

Regarding the predetermined bits of the shared memory addresses described herein, the bit groups of the predetermined bits may be arranged in order or out of order, and the groups may be contiguous or not.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, and the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the actions and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs, and magneto-optical disks), read-only memory (ROM), random access memory (RAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of medium suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language.
It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that can be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, the machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium, such as read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, and so on.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made to the present disclosure without departing from the broader spirit and scope of the embodiments of the present disclosure as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
A finite state machine (115) is provided that both serializes virtual GPIO signals (135) and messaging signals (136) and that deserializes virtual GPIO signals and the messaging signals. The finite state machine frames the serialized virtual GPIO signals and messaging signals into frames each demarcated by a start bit and an end bit. |
1. An integrated circuit comprising:
a first processor;
a plurality of messaging signal registers, wherein the first processor is configured to write a transmit set of messaging signals into the messaging signal registers;
a plurality of GPIO pins;
a GPIO interface configured to receive a first set of signals from the first processor and transmit a portion of the first set of signals as GPIO signals to a remote processor on the plurality of GPIO pins;
a dedicated transmit pin; and
a finite state machine (FSM) configured to receive a remaining portion of the first set of signals from the GPIO interface and to serially transmit the remaining portion as a transmit set of virtual GPIO signals on the dedicated transmit pin to the remote processor, and wherein the FSM is further configured to retrieve the transmit set of messaging signals from the messaging signal registers and serially transmit the transmit set of messaging signals to the remote processor on the dedicated transmit pin.
2. The integrated circuit of claim 1, further comprising:
a dedicated receive pin, wherein the FSM is further configured to serially receive a receive set of virtual GPIO signals from the remote processor on the dedicated receive pin and provide the receive set of virtual GPIO signals to the GPIO interface.
3. The integrated circuit of claim 2, wherein said GPIO interface is further configured to receive a receive set of GPIO signals from said GPIO pins and to provide the receive set of GPIO signals to said first processor.
4. The integrated circuit of claim 1, wherein said first processor comprises an application processor.
5. The integrated circuit of claim 1, wherein said first processor comprises a modem processor.
6. The integrated circuit of claim 2, wherein said FSM comprises a parallel-in serial-out (PISO) shift register and a serial-in parallel-out (SIPO) shift register.
7. The integrated circuit of claim 2, wherein said FSM is further configured to serially transmit the transmit set of said virtual GPIO signals and the transmit set of said messaging signals in frames, each frame being delimited by a start bit and an end bit.
8. The integrated circuit of claim 7, wherein said FSM is further configured to detect a failure of said remote processor by detecting a failure to receive an end bit of one of said frames.
9. The integrated circuit of claim 3, wherein the FSM is further configured to serially transmit the transmit set of said virtual GPIO signals and the transmit set of said messaging signals in response to a cycle of an external clock.
10. The integrated circuit of claim 9, wherein said FSM is further configured to serially transmit a transmit set of signals in response to a first clock edge of said external clock, and to serially receive a receive set in response to a second clock edge of said external clock.
11. The integrated circuit of claim 3, wherein said FSM is further configured to serially transmit a transmit set of signals as pulse width modulated signals.
12. The integrated circuit of claim 11, wherein said FSM comprises an oscillator and at least one counter for counting oscillations from the oscillator, and wherein said FSM is further configured to determine the pulse width for each pulse width modulated signal in response to the count of said at least one counter.
13. The integrated circuit of claim 12, wherein said oscillator is a ring oscillator.
14. The integrated circuit of claim 11, wherein said FSM is further configured to generate each pulse width modulated signal to have a first pulse width or a second pulse width, wherein said
second pulse width is greater than the first pulse width.
15. A method comprising:
receiving a GPIO signal set from a first processor at a GPIO interface;
transmitting a portion of the GPIO signal set to a remote processor through dedicated GPIO pins;
transmitting the remainder of the GPIO signal set as virtual GPIO signals to the remote processor on a dedicated transmit pin; and
retrieving messaging signals from messaging signal registers written by the first processor and serially transmitting the retrieved messaging signals to the remote processor on the dedicated transmit pin.
16. The method of claim 15, further comprising:
serially receiving a receive set of virtual GPIO signals from the remote processor on a dedicated receive pin;
serially receiving a receive set of GPIO signals from the remote processor on the dedicated GPIO pins; and
providing the receive set of virtual GPIO signals and the receive set of GPIO signals to the first processor through the GPIO interface.
17. The method of claim 16, further comprising:
serially receiving a receive set of messaging signals from the remote processor on the dedicated receive pin;
writing the receive set of messaging signals to the messaging signal registers according to the addresses of the received messaging signals; and
retrieving, by the first processor, the receive set of messaging signals from the messaging signal registers.
18. The method of claim 17, wherein serially transmitting the virtual GPIO signals and the retrieved messaging signals is responsive to cycles of an external clock.
19. The method of claim 17, wherein serially transmitting the virtual GPIO signals and the retrieved messaging signals comprises pulse width modulating a signal transmitted on the dedicated transmit pin.
20. An integrated circuit comprising:
a first processor;
a plurality of messaging signal registers, wherein the first processor is configured to write a transmit set of messaging signals into the messaging signal registers;
a plurality of GPIO pins;
a GPIO interface configured to receive a first set of signals from the processor and transmit a portion of the first set of signals as GPIO signals to a remote processor on the plurality of GPIO pins;
a dedicated transmit pin; and
means for receiving a remaining portion of the first set of signals from the GPIO interface, serially transmitting the remaining portion as a transmit set of virtual GPIO signals on the dedicated transmit pin to the remote processor, retrieving the transmit set of messaging signals from the messaging signal registers, and serially transmitting the transmit set of messaging signals to the remote processor on the dedicated transmit pin.
21. The integrated circuit of claim 20, wherein said means is configured to serially transmit said transmit sets in response to a cycle of an external clock.
22. The integrated circuit of claim 20, further comprising an oscillator, and wherein said means is configured to serially transmit said transmit sets as pulse width modulated signals in response to counting oscillations from said oscillator.
Hybrid Virtual GPIO

Related Application

This application claims the benefit of U.S. Provisional Patent Application Serial No. 61/982, filed on Apr. The entire contents of the application are hereby incorporated by reference in their entirety.

Technical Field

This application relates to general purpose input/output (GPIO), and more particularly to integrated circuits configured to use pin pairs as virtual GPIO pins.

Background

General purpose input/output (GPIO) enables an integrated circuit designer to provide generic pins that can be customized for specific applications. For example, depending on user needs, a GPIO pin can be programmed as an output pin or an input pin. A GPIO module or peripheral will typically control a group of pins that can vary based on interface requirements. Because of the programmability of GPIO pins, they are typically included in microprocessor and microcontroller applications. For example, an application processor in a mobile device can use several GPIO pins for handshake signaling, such as inter-processor communication (IPC) with a modem processor.

For such handshake signaling, a sideband signal is considered "symmetric" if the sideband signal must be both transmitted and received by a processor. If there are n symmetric sideband signals that need to be exchanged, each processor requires n*2 GPIOs (one GPIO to transmit a given signal and one GPIO to receive that signal). For example, a symmetric IPC interface between the modem processor and the application processor can include five signals, which translates into the resulting IPC signaling requiring 10 GPIO pins. Requiring so many GPIO pins for IPC communication increases manufacturing costs. In addition, devoting too many GPIOs to IPC limits the availability of GPIOs for other system-level peripheral interfaces. This problem cannot be solved by moving the IPC communication onto the main data bus between the processors, as certain corner conditions would then be violated.

In addition to GPIO signals, a processor conventionally communicates with external devices, such as via an SPI bus, using dedicated transmit and receive pins for messaging with the external devices. In contrast to GPIO signals, such messaging signals are not specific to a particular pin; in other words, various messages can be transmitted on the dedicated messaging transmit pin. The receiving device does not know a priori what a given message concerns. This is in contrast to a GPIO signal, which is dedicated to a specific GPIO pin, so that the fact that the GPIO signal is received on the corresponding GPIO pin identifies the signal to the processor. This is not the case for messaging signals. Such signals have address bits that the processor uses to route the received messaging signals to the appropriate registers. Upon registration, the processor must then interpret the registered message. The resulting need for a dedicated messaging transmit pin and a dedicated messaging receive pin adds significantly to manufacturing costs.

Accordingly, there is a need in the art for a hybrid GPIO and messaging architecture that is capable of accommodating a large number of input/output signals without requiring an excessive number of pins.

Overview

A hybrid virtual GPIO architecture for communicating between two integrated circuits, each having a processor, is provided. This architecture is considered "hybrid" because it accommodates both GPIO signals and messaging signals.
As discussed earlier, GPIO signals in conventional GPIO systems are dedicated to specific pins. When a GPIO signal is received on the corresponding GPIO pin, the signal is thereby identified to the receiving processor. A messaging signal, however, is received on a dedicated receive pin, such as the dedicated receive pin in a serial peripheral interface (SPI) or inter-processor communication (IPC) interface. Various messaging signals can thus be received on the same dedicated receive pin. To distinguish between messaging signals, the messaging signal conventionally includes an address header that contains an address. The receiving processor routes the received message to the appropriate register based on the address. For example, one type of message may relate to the identity of an installed card, such as a wireless card or a GPS card. Such a message would have an address mapped to the appropriate register so that the corresponding message content can be registered accordingly. By interpreting the contents of the register, the processor can then determine the identity of the installed card. Other types of messages can be routed to the appropriate registers in a similar manner.

In the hybrid GPIO interface disclosed herein, the messaging signals are transmitted on the same dedicated transmit pin that carries the virtual GPIO signals. The number of virtual GPIO signals and the number of messaging signals can be customized for a given pair of transmitting and receiving processors. A handshake protocol is disclosed so that the processors in their respective integrated circuits can be informed of the number of virtual GPIO signals and messaging signals. Each integrated circuit also includes a hybrid GPIO interface for communicating with the remote processor using a signal set. The signal set includes a GPIO signal set, a virtual GPIO signal set, and one or more messaging signals. Each integrated circuit thus includes a set of GPIO pins corresponding to the GPIO signal set. These GPIO pins are used to transmit the GPIO signal set in a conventional manner, as is known in the GPIO arts.

In contrast to the GPIO signal set, the virtual GPIO signal set and the messaging signals are not transmitted on the GPIO pins. Instead, each integrated circuit uses a dedicated transmit pin and a dedicated receive pin to transmit and receive the virtual GPIO signal set and the messaging signals. In view of this, the virtual GPIO signal set includes a transmit set and a receive set. A finite state machine (FSM) in each integrated circuit is configured to serially transmit the transmit set to the remote processor through the dedicated transmit pin. The finite state machine is further configured to serially receive the receive set of virtual GPIO signals from the remote processor on the dedicated receive pin.

The messaging signals can include any type of signal that is typically transmitted on a dedicated bus shared by various messaging signals. For example, the messaging signals can include inter-integrated circuit (I2C) signals for the initial configuration of a processor. Just like the virtual GPIO signals, the messaging signals can be divided into a transmit set and a receive set. The FSM uses the dedicated transmit pin to serially transmit the messaging signal transmit set and the dedicated receive pin to serially receive the messaging signal receive set.

The processor provides a first set of signals to the hybrid GPIO interface.
From the hybrid GPIO interface, a portion of the first set of signals is transmitted to the remote processor as a first GPIO signal set on a first set of corresponding GPIO pins. The remainder of the first set of signals from the processor is provided in parallel to the FSM by the hybrid GPIO interface. Depending on the content of the remaining portion (GPIO or messaging signal), the FSM can then serially transmit the remaining portion, for example as a transmit set of virtual GPIO signals, on the dedicated transmit pin.

The GPIO interface also receives a second GPIO signal set from the remote processor on a second set of corresponding GPIO pins. Depending on the mode of operation, the FSM serially receives a receive set of virtual GPIO signals or a receive set of messaging signals from the remote processor and provides the receive set to the hybrid GPIO interface in parallel.

There are two main embodiments of the disclosed hybrid virtual GPIO architecture. In a first embodiment, each frame transmitted on the dedicated transmit pin includes a header that identifies whether the frame includes a transmit set of virtual GPIO signals or a transmit set of messaging signals. The header may also indicate that the corresponding frame identifies the vGPIO stream length to be set on the receiver side, or indicate an acknowledgment of the expected vGPIO stream length. The frame size is thus variable, being determined by the resulting programmed stream length. In a second embodiment, the header is extended for frames that include both virtual GPIO signals and messaging signals, such that the extended header identifies the bit positions of the virtual GPIO signals and the messaging signals. The hybrid GPIO interface can then provide a second set of signals to the receiving processor, the second set of signals including the second GPIO signal set and a set of messaging signals from the remote processor.

The FSM transmits the transmit sets of virtual GPIO signals and messaging signals in frames, each delimited by a start bit and an end bit. The FSM in the remote processor thus receives the transmitted frames as the receive sets of virtual GPIO signals and messaging signals. By monitoring whether it receives a complete frame including both the start bit and the end bit, the FSM for one processor can detect whether the remote processor has failed.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of an example hybrid virtual GPIO architecture.

Figure 2A is a high-level block diagram of a hybrid virtual GPIO architecture in which a processor communicates with a single remote processor.

Figure 2B is a high-level diagram of a hybrid virtual GPIO architecture in which a processor communicates with two remote processors.

Figure 3 is a block diagram of a hybrid virtual GPIO finite state machine responsive to an external clock.

Figure 4 illustrates the format of a virtual GPIO/messaging signal frame.

Figure 5 is a flowchart of a method practiced by the GPIO architecture of Figure 1.

Figure 6 illustrates a length programming frame used to program virtual GPIO and messaging frame lengths.

Figure 7 illustrates an acknowledgment frame that is transmitted to acknowledge the frame lengths programmed in response to the frame of Figure 6.
Figure 8 illustrates an example virtual GPIO frame and an example messaging signal frame.

Figure 9 illustrates an example combined virtual GPIO and messaging frame.

Figure 10 illustrates a hybrid virtual GPIO finite state machine that does not use an external clock.

Figure 11 is a timing diagram for the transmission of data frames by the finite state machine of Figure 10.

Embodiments of the present invention and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements in one or more of the drawings.

Detailed Description

A hybrid virtual general-purpose input/output (GPIO) architecture is provided that enables a system to use a pin pair as if the pin pair constituted a larger plurality of GPIO pins as well as a dedicated transmit pin and a dedicated receive pin for messaging signals. As used herein, "messaging signal" refers to a signal that would conventionally be transmitted on a dedicated transmit pin (such as is practiced in IPC or SPI protocols). A messaging signal thus includes an address that enables the receiving processor to route the received messaging signal to the appropriate register. This hybrid virtual GPIO architecture is deemed "virtual" in that, to the system-level applications creating the virtual GPIO signals, it is as if those virtual GPIO signals were being accommodated for input/output on conventional GPIO pins. In other words, a system on chip (SoC) or processor having a virtual GPIO architecture as disclosed herein experiences no functional difference between GPIO signals and virtual GPIO signals. However, only two pins are used to transmit and receive the virtual GPIO signals, which would otherwise each require their own dedicated pair of GPIO pins (if the GPIO signaling were symmetric). The hybrid virtual GPIO architecture is deemed "hybrid" because the dedicated transmit pin used to carry the virtual GPIO signals is also used to pass the messaging signals to the remote processor. Similarly, the dedicated receive pin used to receive the virtual GPIO signals is also used to receive the messaging signals from the remote processor.

The virtual GPIO signals disclosed herein may be discussed with respect to IPC between an application processor and a modem processor in a portable mobile phone or other communication device. However, it will be appreciated that the virtual GPIO circuits and techniques disclosed herein are widely applicable to systems on chip (SoCs) or application specific integrated circuits (ASICs) requiring GPIO capabilities.

The disclosed hybrid virtual GPIO architecture makes the health of the transmitting node transparent to the receiving node. This is an important advantage, especially during the debugging phase of a software implementation, as it indicates to the receiving processor when the transmitting processor has become inactive.

To enable such robust virtual GPIO capabilities, each integrated circuit includes a dedicated transmit pin coupled to a transmit line on the circuit board and a dedicated receive pin coupled to a receive line on the circuit board. In view of this, the virtual GPIO signals can be divided into a transmit set for transmission on the transmit line and a receive set for reception on the receive line. If the signaling is symmetric, the number of signals in the transmit set of each processor is the same.
However, the hybrid virtual GPIO architecture disclosed herein is capable of accommodating asymmetric signaling in which the transmit set of one processor's virtual GPIO signals is not the same size as the remote processor's transmit set. Similar to the virtual GPIO signals, the messaging signals are also transmitted on the dedicated transmit pin and received on the dedicated receive pin.

Turning now to the drawings, Figure 1 illustrates a hybrid virtual GPIO architecture 101 including an application processor integrated circuit 100 and a modem processor integrated circuit 105 within a mobile telephone or other communication device. Because each integrated circuit is coupled to a dedicated transmit line and a dedicated receive line, the transmit line 110a of the application processor integrated circuit 100 is the receive line of the modem processor integrated circuit 105. Similarly, the transmit line 110b of the modem processor integrated circuit 105 is the receive line of the application processor integrated circuit 100. These lines or leads are carried on the circuit board or other physical interconnect between the integrated circuits 100 and 105. Each integrated circuit includes a dedicated transmit pin 112 coupled to the corresponding transmit line (e.g., line 110b of the modem processor integrated circuit 105). Similarly, each integrated circuit includes a dedicated receive pin 111 coupled to the corresponding receive line (e.g., line 110a of the modem processor integrated circuit 105). A finite state machine (FSM) 115 in each integrated circuit uses these dedicated lines and pins to control transmission and reception with reference to an external clock signal 120 from an external clock source (e.g., a 32 KHz sleep clock).

The application processor integrated circuit 100 includes a processor 101. Similarly, the modem processor integrated circuit 105 includes a processor 102. Each processor couples to a GPIO interface 103 that interfaces with GPIO pins 125 in a conventional manner. Some of the signals processed by each hybrid GPIO interface 103 can be transmitted and received as conventional GPIO signals 130 on conventional GPIO pins 125. However, the remainder of the signals processed by the GPIO interface 103 are not transmitted or received through the conventional GPIO pins 125. Instead, some of the remaining signals include a plurality of virtual GPIO signals 135 that are transmitted and received by the corresponding FSM 115 using the dedicated transmit pin and the dedicated receive pin. Each FSM 115 also interfaces directly with the corresponding processor for receiving and transmitting messaging signals 136. Because the messaging signals 136 are not GPIO signals, they are not coupled through the GPIO interface 103. Each FSM 115 transmits and receives the messaging signals 136 through its dedicated transmit pin 112 and receive pin 111. These pins are thus "hybrid" pins in that they are used for both the virtual GPIO signals 135 and the messaging signals 136.
However, these additional pins are also eliminated in the advantageous hybrid virtual GPIO architecture of the present application. An integrated circuit may include only one FSM 115 or may include a plurality of such components for interfacing with multiple external systems. Figure 2A illustrates a hybrid virtual GPIO architecture in which integrated circuit 200 includes a single FSM 115 for communicating with a remote processor in integrated circuit 205, which includes its own FSM 115. In contrast, integrated circuit 220 shown in FIG. 2B includes an FSM 115A and an FSM 115B for communicating with remote processors in integrated circuits 225 and 230, respectively. In view of this, a system on a chip (SoC), such as the processors discussed herein, can be configured with as many FSMs as are needed to accommodate hybrid virtual GPIO signaling with other SoCs. Regardless of the number of FSMs a processor may have, as indicated in Figure 2A, each FSM communicates using its own dedicated transmit pin 240 and receive pin 245. Referring again to FIG. 1, because the virtual GPIO signals 135 are accommodated using a finite state machine such as FSM 115, processors 101 and 102 can be asleep or in another type of dormant state, yet still be able to receive the virtual GPIO signals 135 and the messaging signals 136. In this manner, the virtual GPIO architecture 101 not only advantageously saves the number of pins per GPIO interface 103, but is also low power. As used herein, a "pin" is a generic term covering a structure (such as a pad or an actual pin) that an integrated circuit uses to couple to a lead on a circuit board or other physical interconnect (e.g., a package interconnect or a via interconnect). For example, if each integrated circuit has sixteen GPIO pins or pads 125 as shown in FIG. 1, these pins can be configured to accommodate eight symmetric GPIO signals 130 (for clarity of illustration, only four conventional GPIO signals #1 through #4 are shown in Figure 1) or sixteen asymmetric GPIO signals 130. Moreover, each integrated circuit can use lines 110a and 110b to accommodate the input/output interfacing of a plurality (n) of virtual GPIO signals 135, where n is an arbitrary plural integer. Similarly, each integrated circuit can use lines 110a and 110b to accommodate the input/output interfacing of a plurality (m) of messaging signals 136, where m is a plural positive integer. There is no difference between the GPIO signals 130 and the virtual GPIO signals 135 for each processor core: they are simply signals that are to be transmitted and received through the GPIO interface 103 as needed. However, because the virtual GPIO signals 135 and messaging signals 136 have no dedicated pins (in contrast to the conventional GPIO signals 130), the virtual GPIO signals 135 and messaging signals 136 are serialized in the FSM 115 for transmission on lines 110a and 110b. Upon receipt, each FSM 115 deserializes the received serialized virtual GPIO signals and the received serialized messaging signals. Thus, each FSM 115 acts as a serializer/deserializer for the virtual GPIO signals 135 and the messaging signals 136. A processor may need to receive an interrupt signal in response to changes in selected ones of the GPIO signals or messaging signals. For the virtual GPIO signals 135 and messaging signals 136, a modem power manager (MPM) 140 monitors these selected signals in a manner programmed through interrupt configuration registers (not illustrated). Each virtual GPIO signal 135 has a corresponding interrupt configuration register. If a virtual GPIO signal 135 is required to generate an interrupt in response to that signal changing state, the corresponding configuration register is programmed accordingly. Similarly, if a virtual GPIO signal 135 or messaging signal 136 is a signal that does not generate an interrupt regardless of whether the signal has changed state, the corresponding interrupt configuration register is also programmed accordingly.
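As a rough sketch of the interrupt gating just described, the following C fragment shows how a monitor such as the MPM might combine change detection with per-signal interrupt configuration bits; the function and variable names here are illustrative assumptions rather than anything defined in the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: one enable bit per virtual GPIO signal, programmed by
 * software into the interrupt configuration registers. */
static uint16_t irq_enable_mask;

/* Assert an interrupt only when a monitored signal both changed state
 * and is configured to generate interrupts. */
bool mpm_interrupt_needed(uint16_t prev_rx_set, uint16_t curr_rx_set)
{
    uint16_t changed = prev_rx_set ^ curr_rx_set;  /* toggled bits */
    return (changed & irq_enable_mask) != 0;
}
```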
The MPM 140 can also include a finite state machine. Thus, like the FSM 115, the MPM 140 is low power and active regardless of whether its processor is in sleep mode or some other dormant state. The virtual GPIO signals 135 can be subdivided into a transmit set and a receive set. In a symmetric system, each transmit set can have the same number of signals. Similarly, each receive set can have the same number of signals. However, it will be appreciated that the virtual GPIO architecture 101 is advantageous in that it can readily accommodate asymmetric signaling embodiments in which the transmit set of virtual GPIO signals 135 and the transmit set of messaging signals 136 have different sizes, and in which the receive set of virtual GPIO signals 135 and the receive set of messaging signals 136 also have different sizes. Regardless of whether the architecture 101 is symmetric or asymmetric, each FSM 115 receives the transmit set of virtual GPIO signals 135 from the GPIO interface 103 in parallel, meaning that each signal in these transmit sets is carried on its own lead between the GPIO interface 103 and the FSM 115. The messaging signals 136 are not GPIO signals, and thus they are not coupled through the GPIO interface 103. In view of this, the hybrid interface represented by each FSM 115 can be given a certain peripheral address by the corresponding processor 101 or 102. Each FSM 115 is configured to decode an address field 137 in the messaging signals 136 such that a given messaging signal 136 can be stored in the corresponding messaging register 138. These messaging registers 138 are each mapped to a certain offset of the general address for the FSM 115 within the address space of the corresponding processor 101 or 102. In response to an interrupt from the MPM 140, the processor 101 or 102 can then access the messaging registers 138 to obtain the appropriate messaging signal 136. As with the virtual GPIO signals 135, the messaging signals 136 can be subdivided into a transmit set and a receive set. Whether the architecture is symmetric or asymmetric, the resulting transmission of these transmit sets by the FSM 115 occurs on the single transmit pin 112. The transmit set of virtual GPIO signals 135 from one processor becomes the receive set of virtual GPIO signals 135 of the remote processor. Similarly, the transmit set of messaging signals 136 becomes the receive set of messaging signals 136 for the remote processor. The FSM 115 of the remote processor then deserializes the received set of virtual GPIO signals 135 so that it can be presented to the GPIO interface 103 in parallel. Each FSM 115 includes a configuration register (not illustrated) that stores the previous state of the transmit sets of virtual GPIO signals 135 and messaging signals 136. In this manner, each FSM 115 can monitor the current state of the transmit set of virtual GPIO signals 135 received from the GPIO interface 103 and trigger the serial transmission of the corresponding transmit set only if the current state has changed relative to the previous state.
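As a minimal sketch of this change detection, assuming transmit sets small enough to pack into a single word (the names below are illustrative, not from the disclosure):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirror of XOR gate 310: a transmission is warranted only when the
 * current transmit set differs from the previously latched state. */
bool transmit_needed(uint16_t prev_state, uint16_t curr_state)
{
    return (prev_state ^ curr_state) != 0;  /* any toggled bit */
}

/* Once a transmission is launched, the latched previous state is
 * refreshed so unchanged sets are not retransmitted. */
void latch_state(uint16_t *prev_state, uint16_t curr_state)
{
    *prev_state = curr_state;
}
```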
Stated differently, by storing the previous state in the configuration register 107, the FSM 115 can monitor the signals within the transmit set and trigger a serial transmission of the transmit set only if one or more of those signals has changed state. Each processor is aware of the addresses of the messaging signal registers 138 and can thereby write the desired transmit set to them and also read any changes in the receive set. The FSM 115 monitors whether the transmit set of messaging signals 136 has changed relative to its previous transmission and will accordingly trigger the transmission of the transmit set to the remote processor. The MPM 140 monitors whether the receive set has changed, as previously discussed, and interrupts the corresponding processor so that the changed receive set can be processed. As discussed above, each FSM 115 acts as a serializer/deserializer to serialize each transmit set and deserialize each receive set. Figure 3 is a block diagram of the FSM 115 that better illustrates these operations. The FSM 115 exchanges the virtual GPIO signals 135 and the messaging signals 136 with the corresponding processor through a multiplexing module 300. The multiplexing module interfaces with the corresponding processor for the virtual GPIO signals 135 through the GPIO interface 103 and directly interfaces with the corresponding processor for the messaging signals 136. In one embodiment, each FSM 115 includes a logic circuit 301 that will authorize transmission of a transmit set of virtual GPIO signals 135 or a transmit set of messaging signals 136 on transmit line 110a only if there is a change in any of the transmit sets. Logic circuit 301 thus compares the current state of the transmit set of virtual GPIO signals 135 (or messaging signals 136) with the previous state of that transmit set stored in the corresponding configuration register 107. For example, logic circuit 301 can include an exclusive OR (XOR) gate 310 to perform the comparison. The multiplexing module 300 loads the transmit set into a parallel-in serial-out (PISO) shift register 315 in parallel. If the enable signal 320 from the XOR gate 310 goes high (indicating a change between the current state of the transmit set and the previous state), the PISO shift register 315 is enabled to serially shift its contents out onto the transmit line 110a in response to cycles of the external clock 120. The FSM 115 also deserializes the received set of virtual GPIO signals 135 or messaging signals 136 in a similar manner using a serial-in parallel-out (SIPO) shift register 325. The received set of virtual GPIO signals 135 and messaging signals 136 is generated by the remote processor and transmitted by the remote processor onto the receive line 110b. The received set of virtual GPIO signals 135 (or messaging signals 136) is successively shifted into SIPO shift register 325 in response to cycles of the external clock 120. As discussed further herein, the FSM 115 is configured to perform the transmission and reception of the transmit sets of virtual GPIO signals 135 and messaging signals 136 in frames having separate start and end bits. In one embodiment, FSM 115 may be considered to include means for receiving a transmit set of virtual GPIO signals from a GPIO interface and serially transmitting the transmit set of virtual GPIO signals to a remote processor on a dedicated transmit pin, and for retrieving a transmit set of messaging signals from the messaging signal registers and serially transmitting the transmit set of messaging signals to the remote processor on the dedicated transmit pin.
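The following C sketch mimics the serialize/deserialize behavior described above in software, with hypothetical pin primitives standing in for the PISO and SIPO shift-register hardware; it is illustrative only.

```c
#include <stdint.h>

#define SET_BITS 10u   /* example transmit/receive set width */

/* Hypothetical line primitives standing in for pins 112 and 111. */
extern void drive_tx_line(int level);   /* 0 = ground, 1 = VDD */
extern int  sample_rx_line(void);

/* PISO behavior: shift the parallel-loaded transmit set out one bit
 * per clock cycle, most significant bit first. */
void piso_shift_out(uint16_t tx_set)
{
    for (unsigned i = 0; i < SET_BITS; i++) {
        /* in hardware, each shift is paced by a cycle of clock 120 */
        drive_tx_line((tx_set >> (SET_BITS - 1u - i)) & 1u);
    }
}

/* SIPO behavior: shift received bits in one per clock cycle and
 * return the reassembled receive set for parallel presentation. */
uint16_t sipo_shift_in(void)
{
    uint16_t rx_set = 0;
    for (unsigned i = 0; i < SET_BITS; i++)
        rx_set = (uint16_t)((rx_set << 1) | (sample_rx_line() & 1));
    return rx_set;
}
```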
The frames described above have a predefined size. In one embodiment, the frame size is determined by the header to be up to a certain number of bits. An example frame 400 is shown in FIG. 4. The header 405 can include two function bits, fn_0 and fn_1. In one embodiment, if both of the function bits are zero, then the subsequent bits are virtual GPIO signals 135. If fn_0 is zero and fn_1 is equal to 1, then the subsequent bits are messaging signals 136. If fn_0 is one and fn_1 is equal to 0, the subsequent bits represent the length of the virtual GPIO frame desired by the remote processor. Similarly, if both of the function bits are one, then the subsequent bits represent the acknowledgment of the desired frame length by the remote processor. If the transmit set of virtual GPIO signals 135 (or the transmit set of messaging signals 136) is smaller than this fixed frame size, the unused bits in each frame may be don't care values. Alternatively, each FSM 115 can be configured to vary the size of the transmitted frames depending on the number of bits required for a given application. It will be appreciated that the foregoing discussion of encoding using two function bits is merely an example, and other headers and encoding protocols may be used to identify whether a frame carries virtual GPIO signals 135 or messaging signals 136, identifies a virtual GPIO frame length, acknowledges a virtual GPIO frame length, identifies a messaging signal frame length, or acknowledges a messaging signal frame length. In one embodiment, frame 400 may also include a type bit (type_bit) associated with the programming and acknowledgment frames discussed further below. For example, in one embodiment, the type bit can be high to identify a virtual GPIO frame and low to identify a messaging signal frame. The number of frames required to transmit the transmit set of virtual GPIO signals 135 or messaging signals 136 depends on the number of signals in a particular transmit set and the frame size. For example, assume that the frame size is eight bits and that there are ten virtual GPIO signals 135 in the transmit set. To transmit the transmit set using eight-bit frames, two frames will be required. To detect a complete frame of the received set of virtual GPIO signals 135 or messaging signals 136, the FSM 115 may include a logic circuit 350 (shown in FIG. 3) that counts the number of cycles of the external clock 120 required after receiving the start bit of the frame. For example, assume that the receive set includes ten virtual GPIO signals 135 received in response to ten cycles of the external clock 120. After detecting the start bit and waiting for another ten cycles of the external clock 120, the logic circuit 350 will then expect to receive the end bit. If the end bit is detected accordingly, logic circuit 350 can then strobe an output latch 351 to receive in parallel the received set of virtual GPIO signals 135 that has been shifted into SIPO shift register 325 as a complete frame. The latched received set of virtual GPIO signals can then be presented to the GPIO interface 103 by the multiplexing module 300.
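To make the two-function-bit encoding and the frame-count arithmetic above concrete, here is a small C sketch; the enum and function names are invented for illustration and simply mirror the example encoding in the text.

```c
#include <stdint.h>

/* Frame classes signaled by function bits fn_0 and fn_1, following
 * the example encoding in the text. */
enum frame_kind {
    FRAME_VGPIO      = 0,  /* fn_0 = 0, fn_1 = 0: virtual GPIO payload */
    FRAME_MESSAGING  = 1,  /* fn_0 = 0, fn_1 = 1: messaging payload    */
    FRAME_LENGTH_SET = 2,  /* fn_0 = 1, fn_1 = 0: desired frame length */
    FRAME_LENGTH_ACK = 3   /* fn_0 = 1, fn_1 = 1: frame length ack     */
};

enum frame_kind decode_header(int fn_0, int fn_1)
{
    return (enum frame_kind)(((fn_0 & 1) << 1) | (fn_1 & 1));
}

/* Frames needed for a transmit set: ten signals in eight-bit frames
 * require (10 + 8 - 1) / 8 = 2 frames, matching the example above. */
unsigned frames_needed(unsigned num_signals, unsigned frame_size)
{
    return (num_signals + frame_size - 1) / frame_size;  /* ceiling */
}
```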
Latching of the received set of messaging signals 136 occurs similarly, although the received set of messaging signals is loaded into the messaging signal registers 138 instead of being routed through the GPIO interface 103. Referring again to PISO shift register 315, it will be appreciated that this register is configured to frame the transmit sets of virtual GPIO signals and messaging signals with start and end bits. The transmit set of virtual GPIO signals is thus transmitted in a frame 400 delimited by the start and end bits. Since the transmit set of the transmitting processor becomes the receive set of the remote processor, the receive set is also framed accordingly. This framing is advantageous because each processor can thereby monitor the health of the remote processor without requiring any additional dedicated pins. For example, each FSM 115 can be configured to weakly pull its dedicated transmit pin 112 (and thus transmit line 110a) to the supply voltage during the default state, in which the current state of the transmit set of virtual GPIO signals is unchanged compared to the previous state. For such an embodiment, the start bit will be a logic zero such that, to transmit this start bit, the FSM 115 grounds the transmit line 110a. In this manner, each FSM 115 can readily detect a received start bit by detecting that the receive line 110b has been pulled to ground. In one embodiment, the start bit and the stop bit are logically complementary. If the start bit is a logic zero, the stop bit will thus be a logic high. The payload of the frame can then extend from the type bit to the stop bit 410 that demarcates the end of the frame. There is a possibility that a processor malfunctions such that it improperly pulls its transmit line 110a to ground. The remote processor will detect this as a start bit, and its logic circuit 350 will accordingly begin counting towards the end of the frame. However, because the end bit is a logic one, each FSM 115 charges the transmit line 110a to the supply voltage to signal the end of a frame transmission. If the processor fails such that the remote FSM 115 detects a signal that is mistaken for a start bit, the logic circuit 350 will not detect the end bit and will accordingly inform its processor about the failure of the remote processor. In order to allow sufficient setup time for reception, the transmission of frame 400 should occur with reference to one clock edge and reception should take place with reference to the opposite clock edge. For example, a bit in the PISO shift register 315 can be shifted out for transmission on the transmit line 110a in response to a falling edge or negative edge of the external clock 120. Conversely, a received bit on receive line 110b can be shifted into SIPO shift register 325 in response to the rising edge or positive edge of clock 120. In order for a processor to detect an inactive state in the remote processor, each FSM 115 can be configured to weakly pull up its transmit line in a default state (where no frames are to be transmitted). As previously discussed, the start and stop bits have opposite logic states. The start bit 406 of the frame 400 of Figure 4 may thus be a logic zero (ground) such that the transmit line 110a is pulled low for the transfer of that bit, and the stop bit 410 may be a binary one value such that the transmit line 110a is pulled high to the supply voltage for the transfer of that bit.
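A minimal C sketch of this receive-side framing check follows; the sampling primitive is hypothetical, and sampling is assumed to occur once per rising clock edge as described above.

```c
#include <stdbool.h>
#include <stdint.h>

#define FRAME_BITS 10  /* example frame payload length */

/* Hypothetical primitive: receive-line level sampled on the rising
 * edge of the external clock. */
extern int sample_rx_line(void);

/* After a start bit (logic 0) is seen, count FRAME_BITS clock cycles
 * while shifting bits in, then require a stop bit (logic 1). A missing
 * stop bit is reported as a remote-processor failure, as the text
 * describes for logic circuit 350. */
bool receive_frame(uint16_t *payload)
{
    uint16_t bits = 0;
    for (int i = 0; i < FRAME_BITS; i++)
        bits = (uint16_t)((bits << 1) | (sample_rx_line() & 1));
    if (sample_rx_line() != 1)
        return false;        /* no stop bit: flag remote failure */
    *payload = bits;         /* latch the complete frame in parallel */
    return true;
}
```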
Referring again to FIG. 3, logic circuit 350 is configured to monitor receive line 110b with reference to the rising edge of external clock 120. The default logic state in which no frame is being transmitted is indicated by the receive line 110b simply remaining high due to the weak pull-up discussed above. If logic circuit 350 detects that receive line 110b is pulled low (indicating the zero value of start bit 406) on one of the rising edges of external clock 120, then logic circuit 350 waits a sufficient number of clock cycles, based on the predefined size of frame 400, and then expects to detect the logic high value of stop bit 410. Receiving stop bit 410 indicates to logic circuit 350 that a full frame 400 has been completely shifted into SIPO shift register 325. At this point, logic circuit 350 strobes SIPO shift register 325 such that the received frame is provided to multiplexing module 300 in parallel by latch 351. The received set of virtual GPIO signals (or messaging signals 136) may then be provided to the processor core via GPIO interface 103 accordingly. A relatively slow external clock 120, such as a 32 KHz sleep clock, is sufficient for the signaling requirements of IPC. For example, assume that the minimum setup and hold requirements for the transmission of the virtual GPIO signals 135 and messaging signals 136 are each two nanoseconds, and that the maximum expected lead or lag of the external clock 120 received at FSM 115 is six nanoseconds. It can readily be shown that the resulting maximum frequency of the external clock 120 will be 62 MHz. A 32 KHz frequency, such as from a sleep clock, thus provides a very large margin of safety for such embodiments. An example method of operation of architecture 101 will now be outlined. The method of operation of architecture 101 is summarized in the flow chart of FIG. 5. The method begins at step 500 by receiving a GPIO signal set from a first processor at a GPIO interface. Step 505 includes transmitting a portion of the GPIO signal set from the GPIO interface to a remote processor via GPIO pins. Step 510 includes serially transmitting the remainder of the GPIO signal set as virtual GPIO signals from the GPIO interface to the remote processor on a dedicated transmit pin. Finally, the method includes an act 515 of retrieving messaging signals from a messaging signal register written by the first processor and serially transmitting the retrieved messaging signals to the remote processor on the dedicated transmit pin. Consider the advantages of the disclosed hybrid virtual GPIO architecture: only two pins are needed, yet any number of virtual GPIO signals 135 and messaging signals 136 can be serialized and deserialized by a finite state machine. The only limitation is the timing requirement for the virtual GPIO signals with reference to the external clock 120 and any expected clock lag or lead. In addition, no other pins are needed to make the health of one processor transparent to the other processor. Frame 400 is also quite advantageous because various messaging signals 136 and virtual GPIO signals 135 can be transmitted on the dedicated transmit pin 112 using only the overhead of as few as two function bits. Example programming frames to set the virtual GPIO frame length and the messaging signal frame length are shown in FIG. 6. The programming frame 600 sets the virtual GPIO frame length. Similarly, programming frame 605 sets the messaging signal frame length.
The number of bits used to define the frame length (and thus the length of each programming frame) is predefined. Thus, once the FSM 115 sees a header indicating that a frame length is being programmed (such as fn_0 equal to 1 and fn_1 equal to 0, as discussed above), it will read the frame length from the frame body. In view of this, the FSM 115 needs to know whether it is the virtual GPIO frame length or the messaging signal frame length that is being programmed. Thus, the header 405 of each of programming frames 600 and 605 is followed by a frame type bit 610. For example, a frame type bit 610 equal to one may indicate that the virtual GPIO frame length is being programmed, and a frame type bit 610 equal to zero may indicate that the messaging signal frame length is being programmed. In one embodiment, each programming frame 600 and 605 has five programming bits ranging from bit-0 to bit-4. Each bit is weighted by a power of two, as identified by its name. In other words, bit-0 is a coefficient multiplied by 2^0, bit-1 is a coefficient multiplied by 2^1, bit-2 is a coefficient multiplied by 2^2, bit-3 is a coefficient multiplied by 2^3, and bit-4 is a coefficient multiplied by 2^4. These five programming bits can thus program frame lengths from zero to 31. The addition of another programming bit would enable programming of frame lengths up to 63, and so on. When the remote FSM 115 receives a programming frame, such as frame 600 or 605, it can proceed to use an acknowledgment frame to acknowledge the defined frame length. Example acknowledgment frames are shown in FIG. 7. Frame 700 is a virtual GPIO acknowledgment frame and frame 705 is a messaging signal acknowledgment frame. Each frame 700 and 705 includes a header 405 in which the function bits identify the frame as an acknowledgment frame. In one embodiment, a header 405 in which both of the function bits are a logical one identifies an acknowledgment frame. The frame type bit 710 following the header 405 identifies the type of frame being acknowledged. In one embodiment, the virtual GPIO acknowledgment frame 700 is identified by the frame type bit 710 being equal to a logical one. Conversely, the messaging signal acknowledgment frame 705 can be identified by the frame type bit 710 being equal to a logical zero. The programming bits following the frame type bit 710 are equal to the programming bits in the corresponding frame 600 or 605. Once the frame length is thus programmed, a frame 800 of virtual GPIO signals 135 or a frame 805 of messaging signals can be transmitted as shown in FIG. 8. Referring again to Figure 1, note that there are n virtual GPIO signals 135 and m messaging signals 136. Each frame 800 can thus be dedicated to only one GPIO port (one of the n GPIO signals 135), or it can include one bit for each of the n GPIO signals 135. In other words, GPIO words can be transmitted serially for the various ports, or they can be transmitted in parallel. The same serial/parallel considerations apply to the messaging signals. Regardless of whether each frame 800 and 805 is carrying multiple ports or only one port, the header 405 identifies whether the frame is a virtual GPIO frame or a messaging signal frame.
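The following C sketch makes the five-bit length encoding and its echo-back acknowledgment described above concrete; the function names and array representation are invented for illustration.

```c
#include <stdbool.h>

/* Decode the five programming bits bit-0..bit-4 of a programming
 * frame into a frame length of 0..31, with bit-i weighted by 2^i. */
unsigned decode_frame_length(const int bits[5])
{
    unsigned length = 0;
    for (int i = 0; i < 5; i++)
        length |= (unsigned)(bits[i] & 1) << i;  /* bit-i adds 2^i */
    return length;
}

/* The acknowledgment frame echoes the programming bits, so the
 * programming side can verify the configured length. */
bool length_acknowledged(const int sent[5], const int echoed[5])
{
    for (int i = 0; i < 5; i++)
        if ((sent[i] & 1) != (echoed[i] & 1))
            return false;
    return true;
}
```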
Instead of using separate frames to transmit the virtual GPIO signals 135 and messaging signals 136, these signals may be combined in an alternate embodiment of the hybrid virtual GPIO architecture in which each frame includes both virtual GPIO signals 135 and messaging signals 136. For example, FIG. 9 illustrates an example hybrid frame 900 that includes a header 405 and an extended header 905. The extended header 905 indicates the locations of the messaging signal bits 910 and the virtual GPIO bits 915 that follow the extended header 905 and precede the stop bit 410. Depending on the latency requirements, either the messaging bits 910 or the virtual GPIO bits 915 may come first in the frame body. In some embodiments, the extended header 905 can include error detection bits, such as CRC bits. Note that the extended header 905 need only identify the location and length of the virtual GPIO bits 915, or only the location and length of the messaging bits 910, since by default the remaining bits are known to belong to the other bit class. The use of the shared external clock 120 as discussed above is simple and easy to implement, but it requires that each FSM 115 be associated with a clock pin for receiving the shared clock 120. In order to avoid this additional pin requirement, the external clock 120 can be eliminated as discussed in U.S. Provisional Application Serial No. 61/907,947, the disclosure of which is incorporated herein. Referring again to FIG. 1, architecture 101 can thus be modified by eliminating the external clock 120 and its corresponding pins. To eliminate any need to reserve a pin in each integrated circuit for receiving the shared clock 120, the transmission of signal transmit sets between the transmitting integrated circuit and the receiving integrated circuit is asynchronous. To enable this advantageous asynchronous transmission and reception, each FSM 115 may include or be associated with an oscillator, such as a ring oscillator. The transmitting FSM pulse-width modulates the signal transmitted on the dedicated transmit pin in response to each bit in the transmit set by counting oscillations from the oscillator. The bits in the transmit set can be transmitted in a data frame, each bit in the frame being a pulse-width-modulated version of the corresponding bit in the transmit set. Each bit in the transmitted data frame has a particular bit period for use in the pulse width modulation. For example, if the transmitted bit has a first binary state (such as binary zero), the FSM can count a first number of oscillations such that a majority of the bit period expires. When the first number of oscillations has been counted, the FSM pulses the dedicated transmit pin with a first binary voltage, such as the supply voltage VDD. From the beginning of the count, the dedicated transmit pin is driven with an opposite second binary voltage, such as ground. Conversely, if the transmitted bit has the opposite binary state (such as binary one), the FSM starts transmitting the bit with the second binary voltage (such as ground) and proceeds to count a second number of oscillations such that only a minority of the bit period expires. When the second number of oscillations has been counted, the FSM pulses the dedicated transmit pin with the first binary voltage. In this manner, the voltage of the transmit line coupled to the dedicated transmit pin is pulsed with the first binary voltage in accordance with a variable pulse width. If the current transmit bit has a first binary value, the transmit line is pulsed with the first binary voltage according to a first pulse width. Conversely, if the current transmit bit has the opposite second binary value, the transmit line is pulsed with the first binary voltage according to a second pulse width. Data frames received from the remote processor on the dedicated receive pin are demodulated at the FSM in a similar manner.
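A minimal C sketch of this transmit-side pulse shaping follows, assuming a 25%/75% split of the bit period and hypothetical pin/oscillator primitives; the cycle counts are examples, not values from the disclosure.

```c
/* Example bit period expressed in ring-oscillator cycles. */
#define BIT_PERIOD_CYCLES 100u
#define MINORITY_CYCLES    25u  /* ~25% of the bit period */
#define MAJORITY_CYCLES    75u  /* ~75% of the bit period */

/* Hypothetical primitives. */
extern void drive_tx_line(int level);       /* 0 = ground, 1 = VDD */
extern void wait_oscillations(unsigned n);  /* block for n cycles  */

/* Pulse-width modulate one bit: a zero is sent as a narrow VDD pulse
 * (line low for ~75% of the bit period), a one as a wide VDD pulse
 * (line low for only ~25%), mirroring the scheme described above. */
void pwm_send_bit(int bit)
{
    unsigned low_cycles = bit ? MINORITY_CYCLES : MAJORITY_CYCLES;

    drive_tx_line(0);                        /* every bit starts low */
    wait_oscillations(low_cycles);
    drive_tx_line(1);                        /* pulse high to VDD    */
    wait_oscillations(BIT_PERIOD_CYCLES - low_cycles);
}
```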
It is convenient to have the default state (or idle mode) of each transmit line (which is the receive line of the receiving processor) charged to the supply voltage VDD. This allows the health of the remote processor to be transparent to the receiving processor, as discussed further below. The second binary voltage in such an embodiment can then be ground. The receiving FSM then identifies the beginning of a received bit by detecting when the dedicated receive pin is discharged. The receiving FSM can then begin counting the oscillations from its oscillator. Two counts will then be generated: a first receive count of how many oscillations occur during the portion of the bit period in which the dedicated receive pin is charged to the first binary voltage, and a second receive count of how many oscillations occur during the portion of the bit period in which the dedicated receive pin is charged to the second binary voltage. By comparing the two receive counts, the receiving FSM can determine whether the first pulse width or the second pulse width applies to the received bit. The received data frames are demodulated accordingly, eliminating the need for a shared clock to coordinate the transmission of data frames on the transmit line. To distinguish such FSMs from the FSM 115 that uses an external clock, they will be denoted internal clock FSMs in the following discussion. Figure 10 is a block diagram of an internal clock FSM 1015 that better illustrates its transmit and receive operations. The FSM 1015 receives the transmit set of virtual GPIO signals 135 from its GPIO interface 103 (shown in Figure 1) through the multiplexing module 300. Alternatively, multiplexing module 300 can receive the transmit set of messaging signals 136 as discussed earlier for FSM 115. The FSM 1015 includes logic circuit 301 that will authorize the serial transmission of a signal transmit set on the transmit line 110a as a pulse width modulated signal only if there is a change in the transmit set compared to the previous state of the transmit set. In this way, there is no need to retransmit a transmit set that has not changed compared to the previous transmission. Logic circuit 301 thus compares the current transmit set of virtual GPIO signals with the previous transmit set stored in a latch or configuration register 107. To perform the comparison, logic circuit 301 can include an exclusive OR (XOR) gate 310 that XORs the current transmit set with the previous transmit set stored in configuration register 107 (denoted the "previous GPIO state" in FIG. 10). The multiplexing module 300 loads the current transmit set into the parallel-in serial-out (PISO) shift register 315 in parallel. If the enable signal 320 from the XOR gate 310 goes high (indicating a change between the current transmit set and the transmit set stored in register 107), the PISO shift register 315 is then enabled to serially shift its contents out onto the transmit line 110a in response to shift signal 120. Each signal transmit set forms a data frame stored in PISO shift register 315. The FSM 1015 includes a pulse width modulator 355 that pulse-width modulates the transmit set bits shifted out of the PISO shift register 315 into a pulse width modulated output signal that is driven to the remote processor on the transmit line 110a. The modulation is responsive to a count of oscillation cycles from an oscillator, such as a count of the transmit ring oscillator output signal 360 from the transmit ring oscillator (RO) 361.
Modulator 355 and transmit ring oscillator 361 can be triggered by the assertion of the enable signal from XOR gate 310. In response to the trigger, modulator 355 gates shift signal 120 such that PISO shift register 315 shifts an initial bit of the signal transmit set into modulator 355. Modulator 355 includes at least one counter that counts the cycles in ring oscillator output signal 360 (e.g., counters 1105 and 1110 shown in Figure 11, as further described below). Depending on the pulse width desired from the pulse width modulation, the counter counts to a first count or to a second count greater than the first count. After counting a sufficient number of cycles to satisfy the appropriate one of the first and second counts, the counter re-gates shift signal 120 such that a subsequent bit from the data frame stored in the PISO shift register 315 is shifted into modulator 355. In this manner, the bits of the signal transmit set of the data frame stored in the PISO shift register 315 are shifted into the modulator 355 one at a time. Depending on the binary value of each bit shifted out of PISO shift register 315, pulse width modulator 355 pulse-width modulates the corresponding pulse transmitted on transmit line 110a. In view of this, each processor can be configured to weakly pull its transmit line 110a to the supply voltage VDD during the default state (no data transmission). In such an embodiment, as shown in the timing diagram of FIG. 11, the pulsed transmission of each bit period of a data frame begins by discharging the transmit line 110a to ground (VSS). Each pulse-width-modulated bit transfer begins by discharging the transmit line 110a to ground for some initial discharge portion of the bit period (such as 25% of the bit period). Depending on the bit value, modulator 355 either maintains line 110a discharged for a majority of the bit period (e.g., 75%) or charges line 110a back to VDD immediately after the initial discharge portion of the bit period expires. In other words, a binary value can be modulated into a relatively narrow high voltage (VDD) pulse in the bit period, and the complement of that binary value can be modulated into a relatively wide high voltage (VDD) pulse in the bit period. The initial bit of the example data frame shown in Figure 11 is a binary zero. In one embodiment, the binary zero can be modulated to a first pulse width, wherein the transmit line 110a is maintained at ground for 75% of the bit period. This majority of the bit period corresponds to majority counter 1110 counting to the second count. The bit to be transmitted is a binary zero, and pulse width modulator 355 thus keeps the transmit line 110a discharged until the second count is satisfied. When the second count is reached, the pulse width modulator 355 will then pulse the transmit line 110a to the supply voltage VDD for the remainder of the bit period. The pulse duration then corresponds to fraction counter 1105 counting to the first count (only 25% of the bit period). The resulting voltage pulse transmitted on line 110a will have a pulse width of only 25% of the bit period. Conversely, a binary one can be modulated to a second pulse width, with transmit line 110a being grounded only during a minority discharge portion, such as the first 25% of the bit period. The transmit line 110a will then remain discharged until the first count is met.
Once the first count is satisfied, the pulse width modulator 355 pulls the transmit line 110a up to the supply voltage VDD for the remainder of the bit period, as may be determined by resetting majority counter 1110 to zero and counting until it satisfies the second count. The second pulse width, during which the voltage of transmit line 110a is charged to the supply voltage VDD, will thus comprise 75% of the bit period. However, it will be appreciated that different pulse widths can be used in alternative embodiments to represent the desired binary values. In one embodiment, modulator 355 can include a logic circuit 1100. Depending on the bit value, logic circuit 1100 triggers either fraction counter 1105 or majority counter 1110 to begin counting. However, it will be appreciated that a single counter can instead be used to count to the first or second count depending on the desired pulse width modulation. When triggered by logic circuit 1100, fraction counter 1105 or majority counter 1110 counts the cycles from transmit ring oscillator (RO) 361. For example, the fraction counter 1105 can be configured to count a number of cycles corresponding to 25% of the bit period, at which time it asserts its output signal to indicate that the first count is satisfied. Similarly, the majority counter 1110 can be configured to count a number of cycles corresponding to 75% of the bit period, at which time it asserts its output signal. In this embodiment, modulator 355 is configured to discharge transmit line 110a to ground at the beginning of each bit period. Depending on the bit value, modulator 355 charges transmit line 110a back to the supply voltage VDD upon the assertion of the output signal from the appropriate counter. For example, the first bit in the example data frame is a binary zero, so modulator 355 drives transmit line 110a high to VDD when majority counter 1110 asserts its output signal. Similarly, because the second bit in the example data frame is a binary one, modulator 355 drives transmit line 110a high to VDD when fraction counter 1105 asserts its output signal. It will be appreciated that the initial 25% low period is only an example and that other fractions of the bit period can also be used. In one embodiment, the combination of logic circuit 1100, counters 1105 and 1110, modulator 355, and PISO shift register 315 can be considered to comprise a means for serially processing each signal in the transmit set into a corresponding pulse width modulated signal, wherein the means is configured to determine the pulse width of each serially processed signal by counting oscillations from an oscillator to one of a first count and a second count in response to the binary value of the serially processed signal, and wherein the means is further configured to serially transmit the corresponding pulse width modulated signals to the remote processor via the dedicated transmit pin. Referring again to FIG. 10, FSM 1015 also deserializes the signal receive set (virtual GPIO and/or messaging signals) in a similar manner using a serial-in parallel-out (SIPO) shift register 325. Demodulator 370 demodulates the received pulse width modulated signal received from the remote processor on receive line 110b.
The demodulator 370 is configured to detect the beginning of a received data frame from the received pulse width modulated signal, such as by detecting the discharge of the receive line 110b, which triggers the receive ring oscillator 375 to begin oscillating the receive ring oscillator output signal 380. Note that in an alternative embodiment, oscillators 375 and 361 can comprise the same oscillator. Similar to modulator 355, demodulator 370 can include counters, such as a low counter 415 and a high counter 420. In each bit period, the low counter 415 is triggered to count when the receive line 110b is discharged. Conversely, the high counter 420 is triggered to count when the receive line 110b is charged to the supply voltage VDD. In an alternative embodiment, counters 415 and 420 may be implemented using a single shared counter that counts the number of oscillations in each binary voltage state of receive line 110b. By comparing the counts from counters 415 and 420, demodulator 370 can form a demodulated data signal 382 accordingly. In particular, if the count from the high counter 420 is greater than the count from the low counter 415 in a given bit period, the demodulator 370 can drive the demodulated data signal 382 up to the supply voltage VDD to indicate that a relatively wide pulse was received. Conversely, if the count from the low counter is greater, the demodulator 370 can discharge the demodulated data signal 382 to VSS to indicate that a relatively narrow pulse was received. Demodulator 370 can also assert a shift signal 381 to SIPO shift register 325 upon detecting a bit period boundary from the counts. The SIPO shift register 325 will then shift in the demodulated data signal 382 from demodulator 370. FSM 1015 can be configured to process signal transmit and receive sets having a predefined data frame size determined by the programming frames discussed above. Both counters 415 and 420 are initialized at the beginning of each bit period. The low counter 415 counts the cycles from the receive ring oscillator 375 when the receive line 110b voltage is low, and the high counter 420 counts the cycles from the receive ring oscillator 375 when the receive line voltage is high (VDD). Comparator 425 thus performs the demodulation bit decision by comparing the low count (CL) from low counter 415 with the high count (CH) from high counter 420 at the end of each bit period. A bit period boundary may be determined from whenever receive line 110b is discharged, which triggers the high counter 420 to stop counting and output CH. Counter 420 can accordingly be initialized at each bit period boundary. At the end of each bit period, in one embodiment, if CL is greater than CH, comparator 425 drives the demodulated data signal 382 low, which corresponds to the demodulation of a binary zero. In contrast, in such embodiments, if CH is greater than CL at the end of the bit period, the comparator drives the demodulated data signal 382 high, which corresponds to the demodulation of a binary one. The SIPO shift register 325 registers each of the demodulated bits in response to the strobe of the shift signal 381.
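A compact C sketch of the count-comparison decision just described follows; the sampling loop stands in for counters 415 and 420 and comparator 425, with a hypothetical primitive and an example bit period.

```c
extern int sample_rx_line(void);   /* 0 = ground, 1 = VDD */

#define BIT_PERIOD_CYCLES 100u

/* Accumulate how many ring-oscillator cycles the receive line spends
 * low (CL) versus high (CH) across one bit period, then compare at the
 * bit period boundary as comparator 425 does. */
int pwm_demod_bit(void)
{
    unsigned cl = 0, ch = 0;
    for (unsigned i = 0; i < BIT_PERIOD_CYCLES; i++) {
        if (sample_rx_line())
            ch++;                  /* high counter 420 */
        else
            cl++;                  /* low counter 415  */
    }
    return (ch > cl) ? 1 : 0;      /* wide pulse demodulates to one */
}
```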
Many modifications, substitutions, and changes can be made in the materials, devices, arrangements, and methods of use of the devices of the present disclosure without departing from the spirit and scope of the present disclosure, as will be appreciated by those of ordinary skill in the art in light of the particular application at hand. In view of the above, the scope of the present disclosure should not be limited to the particular embodiments illustrated and described herein (as they are merely some examples of the present disclosure), but should instead be fully commensurate with the appended claims and their functional equivalents. |
Aspects described herein include devices, wireless communication apparatuses, methods, and associated operations for heatsinks integrating millimeter wave and non-millimeter wave operation. In some aspects, an apparatus comprising a millimeter wave (mmW) module is provided. The apparatus includes at least one mmW antenna and at least one mmW signal node configured to communicate a data signal in association with the at least one mmW antenna. The apparatus further includes mixing circuitry configured to convert between the data signal and a mmW signal for communications associated with the at least one mmW antenna. The apparatus further includes a heatsink comprising a non-mmW antenna and a non-mmW feed point coupled to the non-mmW antenna. The non-mmW feed point is configured to provide a signal path to the non-mmW antenna for a non-mmW signal. The heatsink is mechanically coupled to the mmW module. |
CLAIMS

What is claimed is:

1. A wireless communication apparatus, comprising: a millimeter wave (mmW) module comprising: at least one mmW antenna; at least one mmW signal node configured to communicate a data signal in association with the at least one mmW antenna; mixing circuitry configured to convert between the data signal and a mmW signal for communications associated with the at least one mmW antenna; and a heatsink comprising a non-mmW antenna and a non-mmW feed point coupled to the non-mmW antenna, the non-mmW feed point configured to provide a signal path to the non-mmW antenna for a non-mmW signal, wherein the heatsink is mechanically coupled to the mmW module.

2. The wireless communication apparatus of claim 1, wherein the at least one mmW antenna is configured to radiate in a first effective beam width from a first side of the mmW module, and wherein the non-mmW antenna is structured with a gap positioned at the first side of the mmW module.

3. The wireless communication apparatus of claim 2, wherein the at least one mmW antenna is configured to radiate mmW signals in the first effective beam width at frequencies greater than 20 gigahertz, and wherein the non-mmW antenna is configured to radiate at frequencies less than 7 gigahertz without interfering with the mmW signals in the first effective beam width.

4. The wireless communication apparatus of claim 2, wherein the heatsink is physically coupled to two or more sides of the mmW module other than the first side using a heat dispersion adhesive.

5. The wireless communication apparatus of claim 1, wherein the heatsink is mechanically coupled to the mmW module to facilitate heat transfer from the mmW module to the non-mmW antenna.

6. The wireless communication apparatus of claim 1, wherein the heatsink is mechanically coupled to the mmW module using a heat dispersion adhesive.

7. The wireless communication apparatus of claim 1, wherein the heatsink is configured to dissipate heat received from the at least one mmW antenna via one or more conductors used to transmit the non-mmW signal.

8. The wireless communication apparatus of claim 1, wherein the heatsink comprises an integral metal structure.

9. The wireless communication apparatus of claim 1, wherein the heatsink is physically connected to a thermal dissipation medium and configured to transfer thermal energy received from the mmW module to the thermal dissipation medium via conduction.

10. The wireless communication apparatus of claim 9, wherein the thermal dissipation medium is air around the non-mmW antenna.

11. The wireless communication apparatus of claim 1, wherein the non-mmW antenna is a quarter wavelength slot antenna with a radiating structure formed by a gap between the heatsink and a frame metal with the non-mmW feed point structured across the gap between the heatsink and the frame metal.

12. The wireless communication apparatus of claim 1, wherein the non-mmW antenna is an inverted-F antenna comprising a ground plane coupled to a first side of the mmW module and conductors coupled to the ground plane and at least a second side of the mmW module different from the first side of the mmW module.

13. The wireless communication apparatus of claim 1, wherein the non-mmW antenna is a positioning system antenna configured to receive Global Navigation Satellite System signals at approximately 1.575 gigahertz.

14. The wireless communication apparatus of claim 1, wherein the at least one mmW antenna includes a plurality of antennas of an antenna array;
wherein the mmW module further comprises phase shifting circuitry for each antenna of the plurality of antennas configurable to transmit or receive a beamformed beam in an effective beam width range.

15. The wireless communication apparatus of claim 1, wherein the mmW module further comprises power management circuitry and mmW circuitry, wherein the power management circuitry is configured to supply system voltages to the mmW circuitry.

16. The wireless communication apparatus of claim 1, wherein the non-mmW antenna includes a conductor physically coupled to the mmW module, wherein the conductor has a length of approximately 24.1 millimeters.

17. The wireless communication apparatus of claim 1, wherein the non-mmW antenna is a quarter wavelength monopole antenna.

18. The wireless communication apparatus of claim 1, wherein the non-mmW antenna is a half wavelength loop antenna.

19. The wireless communication apparatus of claim 1, further comprising: a display screen; and control circuitry coupled to the display screen, the non-mmW feed point, and the at least one mmW signal node.

20. A method of operating a wireless communication apparatus, comprising: receiving, at a millimeter wave (mmW) signal node of a mmW module, a mmW signal, the mmW module comprising at least one mmW antenna; receiving, at a heatsink comprising a non-mmW antenna, a non-mmW signal, wherein the heatsink is mechanically coupled to the mmW module at a physical interface; receiving, at the heatsink via the physical interface, thermal energy from the mmW module; and dissipating, utilizing the heatsink comprising the non-mmW antenna, the thermal energy received from the mmW module via conduction to a thermal dissipation medium.

21. The method of claim 20, wherein the mmW signal is relayed from the at least one mmW antenna to communication circuitry of the mmW module via the mmW signal node.

22. The method of claim 20, wherein the mmW signal is transmitted via the at least one mmW antenna.

23. The method of claim 20, wherein the non-mmW signal is received at the non-mmW antenna from a non-mmW signal feed for wireless transmission via the non-mmW antenna.

24. The method of claim 20, wherein the non-mmW signal is a wireless global positioning system (GPS) signal received at the non-mmW antenna, and routed to GPS circuitry of the wireless communication apparatus via a non-mmW feed.

25. The method of claim 20, wherein the mmW signal is a reflection of a radar signal received at the at least one mmW antenna, and routed to radar circuitry of the wireless communication apparatus.

26. The method of claim 20, wherein the thermal dissipation medium is air around the non-mmW antenna.

27. The method of claim 20, wherein the thermal dissipation medium is a heat transfer fluid configured to transfer thermal energy from the non-mmW antenna.

28. The method of claim 20, wherein the physical interface comprises a thermally conductive adhesive physically binding portions of one or more surfaces of the heatsink to portions of one or more surfaces of the mmW module.

29. An apparatus comprising: means for receiving a mmW signal; and means for jointly receiving a non-mmW signal while dissipating thermal energy received from the means for receiving the mmW signal via thermal conduction.
30. The apparatus of claim 29, further comprising a thermally conductive adhesive used to physically attach portions of one or more surfaces of the means for receiving the mmW signal to portions of one or more surfaces of the means for jointly receiving the non-mmW signal while dissipating the thermal energy received from the means for receiving the mmW signal. |
HEATSINK FOR MILLIMETER WAVE (MMW) AND NON-MMW ANTENNA INTEGRATION

FIELD

[0001] The present disclosure relates generally to electronics and wireless communications, and more specifically to antennas for use with such wireless communications.

BACKGROUND

[0002] Wireless communication devices and technologies are becoming ever more prevalent. Wireless communication devices generally transmit and receive communication signals. A communication signal is typically processed by a variety of different components and circuits. In some modern communication systems, many different wavelengths of electromagnetic waves can be used in a single device. Supporting different wavelengths for wireless communications can involve managing complex interactions among device elements while managing interactions and interference between elements supporting communications on the different wavelengths.

SUMMARY

[0003] Various implementations of systems, methods and devices within the scope of the appended claims each have several aspects, no single one of which is solely responsible for the desirable attributes described herein. Without limiting the scope of the appended claims, some prominent features are described herein.

[0004] Aspects described herein include heatsinks for wireless devices with a millimeter wave (mmW) module including one or more antennas for communications at frequencies above 20 gigahertz (GHz) (e.g., above approximately 24 GHz), as well as an antenna for non-mmW communications where the non-mmW antenna is at least a portion of a structure used for heat dissipation in the heatsink. The power involved in mmW communications and the compact size of a mmW module can result in significant heat generation by the mmW module. In some such modules, a heatsink can be used to dissipate the heat generated by the mmW module. Such a heatsink can consume significant amounts of space in a device environment where space is an important and limited resource. Aspects described herein include devices which integrate a heatsink structure designed to dissipate heat from a mmW module with a non-mmW antenna formed by at
least a portion of the heatsink. Such devices can provide improved device performance in the form of additional functionality provided by an additional antenna, improved heat dissipation using a heatsink with a mmW module, and/or a compact device structure by using at least a portion of the heatsink as a non-mmW antenna.

[0005] In some aspects, a device is provided, comprising a millimeter wave (mmW) module comprising: at least one mmW antenna; at least one mmW signal node configured to communicate a data signal in association with the at least one mmW antenna; mixing circuitry configured to convert between the data signal and a mmW signal for communications associated with the at least one mmW antenna; and a heatsink comprising a non-mmW antenna, the heatsink further comprising a non-mmW feed point coupled to the non-mmW antenna to provide a signal path to the non-mmW antenna for a non-mmW signal, wherein the heatsink is mechanically coupled to the mmW module.

[0006] In some aspects, the at least one mmW antenna is configured to radiate in a first effective beam width from a first side of the mmW module, and wherein the non-mmW antenna is structured with a gap positioned at the first side of the mmW module.

[0007] In some aspects, the at least one mmW antenna is configured to radiate mmW signals in the first effective beam width at frequencies greater than 20 gigahertz, and wherein the non-mmW antenna is configured to radiate at frequencies less than 7 gigahertz without interfering with the mmW signals in the first effective beam width.

[0008] In some aspects, the heatsink is physically coupled to two or more sides of the mmW module other than the first side using a heat dispersion adhesive.

[0009] In some aspects, the heatsink is mechanically coupled to the mmW module to facilitate heat transfer from the mmW module to the non-mmW antenna.

[0010] In some aspects, the heatsink is mechanically coupled to the mmW module using a heat dispersion adhesive.

[0011] In some aspects, the heatsink is configured to dissipate heat received from the mmW antenna via one or more conductors used to transmit the non-mmW signal.

[0012] In some aspects, the heatsink comprises an integral metal structure.
[0013] In some aspects, the heatsink is physically connected to a thermal dissipation medium and configured to transfer thermal energy received from the mmW module to the thermal dissipation medium via conduction.

[0014] In some aspects, the thermal dissipation medium is air around the non-mmW antenna.

[0015] In some aspects, the non-mmW antenna is a quarter wavelength slot antenna with a radiating structure formed by a gap between the heatsink and a frame metal with the feed point structured across the gap between the heatsink and the frame metal.

[0016] In some aspects, the non-mmW antenna is an inverted-F antenna comprising a ground plane coupled to a first side of the mmW module and conductors coupled to the ground plane and at least a second side of the mmW module different from the first side of the mmW module.

[0017] In some aspects, the non-mmW antenna is a positioning system antenna configured to receive Global Navigation Satellite System signals at approximately 1.575 gigahertz.

[0018] In some aspects, the at least one mmW antenna includes a plurality of antennas of an antenna array; wherein the mmW module further comprises phase shifting circuitry for each antenna of the plurality of antennas configurable to transmit or receive a beamformed beam in an effective beam width range.

[0019] In some aspects, the mmW module further comprises power management circuitry and mmW circuitry, wherein the power management circuitry is configured to supply system voltages to the mmW circuitry.

[0020] In some aspects, the non-mmW antenna includes a conductor physically coupled to the mmW module, wherein the conductor has a length of approximately 24.1 millimeters.

[0021] In some aspects, the non-mmW antenna is a quarter wavelength monopole antenna.

[0022] In some aspects, the non-mmW antenna is a half wavelength loop antenna.
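As a rough sanity check on these antenna dimensions, the free-space quarter wavelength at the GNSS L1 frequency can be computed as in the sketch below; note that practical conductor lengths (such as the approximately 24.1 millimeter figure above) are typically shorter than the free-space value because of dielectric loading and the surrounding device structure, so this is only an order-of-magnitude reference.

```c
#include <stdio.h>

int main(void)
{
    const double c = 299792458.0;  /* speed of light in m/s   */
    const double f = 1.575e9;      /* GNSS L1 frequency in Hz */

    /* lambda/4 = c / (4 * f), converted to millimeters */
    double quarter_wave_mm = c / (4.0 * f) * 1000.0;
    printf("free-space quarter wavelength at 1.575 GHz: %.1f mm\n",
           quarter_wave_mm);       /* prints about 47.6 mm    */
    return 0;
}
```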
[0023] In some aspects, a method of operating a wireless communication apparatus is provided. The method comprises receiving, at a millimeter wave (mmW) signal node of a mmW module, a mmW signal, the mmW module comprising at least one mmW antenna; receiving, at a non-mmW antenna, a non-mmW signal, wherein a heatsink comprising the non-mmW antenna is mechanically coupled to the mmW module at a physical interface; receiving, by the non-mmW antenna via the physical interface, thermal energy from the mmW module; and dissipating, utilizing the heatsink comprising the non-mmW antenna, the thermal energy received from the mmW module via conduction to a thermal dissipation medium.

[0024] In some aspects, the mmW signal is relayed from the at least one mmW antenna to communication circuitry of the mmW module via the mmW signal node.

[0025] In some aspects, the mmW signal is transmitted via the at least one mmW antenna.

[0026] In some aspects, the non-mmW signal is received at the non-mmW antenna from a non-mmW signal feed for wireless transmission via the non-mmW antenna.

[0027] In some aspects, the non-mmW signal is a wireless global positioning system (GPS) signal received at the non-mmW antenna, and routed to GPS circuitry of the wireless communication apparatus via a non-mmW feed.

[0028] In some aspects, the mmW signal is a reflection of a radar signal received at the mmW antenna, and routed to radar circuitry of the wireless communication apparatus.

[0029] In some aspects, the thermal dissipation medium is air around the non-mmW antenna.

[0030] In some aspects, the thermal dissipation medium is a heat transfer fluid configured to transfer thermal energy from the non-mmW antenna.

[0031] In some aspects, the physical interface comprises a thermally conductive adhesive physically binding portions of one or more surfaces of the heatsink to portions of one or more surfaces of the mmW module.

[0032] Another aspect of the disclosure provides for an apparatus. The apparatus comprises means for receiving a mmW signal; and means for jointly receiving a non-mmW
signal while dissipating thermal energy received from the means for receiving the mmW signal via thermal conduction.
[0033] Some aspects further comprise a thermally conductive adhesive used to physically attach portions of one or more surfaces of the means for receiving the mmW signal to portions of one or more surfaces of the means for jointly receiving the non-mmW signal while dissipating the thermal energy received from the means for receiving the mmW signal.
[0034] In some aspects, the apparatuses described above can include a mobile device with a camera for capturing one or more pictures. In some aspects, the apparatuses described above can include a display screen for displaying one or more pictures. The summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification, any or all drawings, and each claim.
[0035] The foregoing, together with other features and embodiments, will become more apparent upon referring to the following specification, claims, and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] In the figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as “102a” or “102b”, the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures.
[0037] FIG. 1 is a diagram showing a wireless communication system communicating with a wireless device that can be implemented according to aspects described herein.
[0038] FIG. 2A is a block diagram showing portions of a wireless device in which aspects of the present disclosure may be implemented.
[0039] FIG. 2B is a block diagram showing portions of a wireless device in which aspects of the present disclosure may be implemented.
[0040] FIG. 2C is a block diagram illustrating aspects of a wireless device in which aspects of the present disclosure may be implemented.
[0041] FIG. 3A is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0042] FIG. 3B is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0043] FIG. 4A is a diagram illustrating aspects of a device including a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0044] FIG. 4B is a diagram illustrating an implementation of a heatsink in accordance with aspects described herein.
[0045] FIG. 4C is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0046] FIG. 4D is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0047] FIG. 4E is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0048] FIG. 5A is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0049] FIG. 5B is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0050] FIG. 5C is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0051] FIG. 6 is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0052] FIG. 7 is a diagram illustrating aspects of a heatsink and a mmW module for integration of mmW and non-mmW antennas in accordance with aspects described herein.
[0053] FIGS. 8A, 8B, 8C and 8D are block diagrams illustrating a mmW module in accordance with aspects of the disclosure.
[0054] FIG. 9 is a flow diagram describing an example method of operating a device including a mmW module and an integrated heatsink with a non-mmW antenna in accordance with some aspects.
[0055] FIG. 10 is a functional block diagram of an apparatus including a mmW module and an integrated heatsink with a non-mmW antenna in accordance with some aspects.
DETAILED DESCRIPTION
[0056] The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary implementations and is not intended to represent the only implementations in which the invention may be practiced. The term “exemplary” used throughout the description means “serving as an example, instance, or illustration,” and should not necessarily be construed as preferred or advantageous over other exemplary implementations. The detailed description includes specific details for the purpose of providing a thorough understanding of the exemplary implementations. In some instances, some devices are shown in block diagram form. Drawing elements that are common among the following figures may be identified using the same reference numerals.
[0057] Standard form factors for devices such as cell phones, tablets, laptop computers, cellular hotspot devices, and other such devices are subject to increasingly limited space. At the same time, additional wireless communication systems are being integrated into such devices. Performance and space tradeoffs are design considerations in all such devices. Millimeter wave (mmW) modules that include mmW circuitry, e.g., transmit (Tx) and receive (Rx) elements for mmW communications, are subject to significant power usage and associated heat generation. Metal heatsink structures used with mmW modules consume space resources for heat dispersion, and can interfere with non-mmW wireless performance due to interference with non-mmW electromagnetic signals.
[0058] Aspects described herein include devices with heatsinks configured for integration of millimeter wave (mmW) and non-mmW antennas. Aspects include devices with heatsinks modified by adding a data feed (e.g., a connection point for receiving non-mmW signals for wireless communication and/or services) and by structuring the heatsink such that at least a portion functions as an antenna for non-mmW signals. The heatsink can be jointly structured for both dissipation of thermal energy and antenna operation for non-mmW frequencies. In some aspects, the heatsink is physically coupled to a mmW module that includes one or more mmW antennas. In some cases, the heatsink is structured as an antenna to transmit in a given non-mmW set of frequencies (e.g., frequencies less than approximately 20 gigahertz (GHz), for example approximately 7 GHz or below, frequencies at or around approximately 1.6 GHz, frequencies at or around approximately 1.1 GHz, etc.). Similarly, the mmW module may include one or more antennas configured to transmit or receive mmW signals at frequencies greater than approximately 20 GHz.
[0059] Such a device with a heatsink integrating a non-mmW antenna with a mmW module may improve the performance of the device with efficient usage of space. In some aspects, some such devices can leverage space efficiency where a heatsink and a non-mmW antenna are combined into a single heatsink element including the non-mmW antenna, adding functionality in a given design space. Additional device improvements will be apparent from the descriptions provided herein.
[0060] FIG. 1 is a diagram showing a wireless device 110 communicating with a wireless communication system 120. In accordance with aspects described herein, the wireless device can include mmW and non-mmW communication elements with implementations of a heatsink integrating a mmW module with a non-mmW antenna. The wireless communication system 120 may be a Long Term Evolution (LTE) system, a Code Division Multiple Access (CDMA) system, a Global System for Mobile Communications (GSM) system, a wireless local area network (WLAN) system, a 5G NR (new radio) system, or some other wireless system. A CDMA system may implement Wideband CDMA (WCDMA), CDMA 1X, Evolution-Data Optimized (EVDO), Time Division Synchronous CDMA (TD-SCDMA), or some other version of CDMA. Communication elements of the wireless device 110 for implementing mmW and non-mmW communications in accordance with any such communication standards can be supported by various designs of a heatsink in accordance with aspects described herein. For
simplicity, FIG. 1 shows wireless communication system 120 including two base stations 130 and 132 and one system controller 140. In general, a wireless communication system may include any number of base stations and any set of network entities.
[0061] The wireless device 110 may also be referred to as a user equipment (UE), a mobile station, a terminal, an access terminal, a subscriber unit, a station, etc. Wireless device 110 may be a cellular phone, a smartphone, a tablet, or other such mobile device (e.g., a device integrated with a display screen). Other examples of the wireless device 110 include a wireless modem, a personal digital assistant (PDA), a handheld device, a laptop computer, a smartbook, a netbook, a tablet, a cordless phone, a medical device, a device configured to connect to one or more other devices (for example through the internet of things), a wireless local loop (WLL) station, a Bluetooth device, etc. Wireless device 110 may communicate with wireless communication system 120. Wireless device 110 may also receive signals from broadcast stations (e.g., a broadcast station 134) and/or signals from satellites (e.g., a satellite 150 in one or more global navigation satellite systems (GNSS), etc.). Wireless device 110 may support one or more radio technologies for wireless communication such as LTE, WCDMA, CDMA 1X, EVDO, TD-SCDMA, GSM, 802.11, 5G, etc.
[0062] The wireless communication system 120 may also include a wireless device 160. In an exemplary embodiment, the wireless device 160 may be a wireless access point, or another wireless communication device that comprises, or comprises part of, a wireless local area network (WLAN). In an exemplary embodiment, the wireless device 110 may be referred to as a customer premises equipment (CPE), which may be in communication with a base station 130 and a wireless device 160, or other devices in the wireless communication system 120. In some embodiments, the CPE may be configured to communicate with the wireless device 160 using WLAN signaling and to interface with the base station 130 based on such communication instead of the wireless device 160 directly communicating with the base station 130. In exemplary embodiments where the wireless device 160 is configured to communicate using WLAN signaling, a WLAN signal may include WiFi, or other communication signals.
[0063] Wireless device 110 may support carrier aggregation, for example as described in one or more LTE or 5G standards. In some embodiments, a single stream of data is transmitted over multiple carriers using carrier aggregation, for example as opposed to
separate carriers being used for respective data streams. Wireless device 110 may be able to operate in a variety of communication bands including, for example, those communication bands used by LTE, WiFi, 5G or other communication bands, over a wide range of frequencies. Wireless device 110 may also be capable of communicating directly with other wireless devices without communicating through a network.
[0064] In general, carrier aggregation (CA) may be categorized into two types - intra-band CA and inter-band CA. Intra-band CA refers to operation on multiple carriers within the same band. Inter-band CA refers to operation on multiple carriers in different bands.
[0065] FIG. 2A is a block diagram showing a wireless device 200 in which aspects of the present disclosure may be implemented. The wireless device 200 may, for example, be an embodiment of the wireless device 110 illustrated in FIG. 1. In some examples, the wireless device 200 (or any of the devices or elements illustrated in any of FIGs. 2A-2C) may be an example of any of the devices illustrated in FIG. 1.
[0066] FIG. 2A shows an example of a transceiver 220 having a transmitter 230 and a receiver 250. In general, the conditioning of the signals in the transmitter 230 and the receiver 250 may be performed by one or more stages of amplifier, filter, upconverter, downconverter, etc. These circuit blocks may be arranged differently from the configuration shown in FIG. 2A. Furthermore, other circuit blocks not shown in FIG. 2A may also be used to condition the signals in the transmitter 230 and receiver 250. Unless otherwise noted, any signal in FIG. 2A, or any other figure in the drawings, may be either single-ended or differential. Some circuit blocks in FIG. 2A may also be omitted.
[0067] In the example shown in FIG. 2A, wireless device 200 generally comprises the transceiver 220 and a data processor 210. The data processor 210 may include a processor 296 operatively coupled to a memory 298. The memory 298 may be configured to store data and program codes shown generally using reference numeral 299, and may generally comprise analog and/or digital processing components. The transceiver 220 includes a transmitter 230 and a receiver 250 that support bi-directional communication. In general, wireless device 200 may include any number of transmitters and/or receivers for any number of communication systems and frequency bands. All or a portion of the transceiver 220 may be implemented on one or more analog integrated circuits (ICs), RF ICs (RFICs), mixed-signal ICs, etc.
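The intra-band/inter-band distinction of paragraph [0064] reduces to whether all aggregated carriers fall within a single band. The sketch below is illustrative only: the band names and edges are hypothetical placeholders rather than 3GPP band definitions, and the function names are invented for this example.

```python
# Illustrative sketch (not from the disclosure): classifying a carrier
# combination as intra-band or inter-band CA, per paragraph [0064].
# Band edges below are hypothetical placeholders, not 3GPP definitions.

BANDS = {
    "band_A": (1_920e6, 1_980e6),  # hypothetical band edges, Hz
    "band_B": (2_500e6, 2_570e6),
}

def band_of(freq_hz: float) -> str | None:
    """Return the name of the band containing freq_hz, if any."""
    for name, (low, high) in BANDS.items():
        if low <= freq_hz <= high:
            return name
    return None

def ca_type(carrier_freqs_hz: list[float]) -> str:
    """Intra-band CA if all carriers fall in one band, else inter-band CA."""
    bands = {band_of(f) for f in carrier_freqs_hz}
    if None in bands:
        raise ValueError("carrier outside configured bands")
    return "intra-band CA" if len(bands) == 1 else "inter-band CA"

print(ca_type([1_930e6, 1_950e6]))  # intra-band CA: both carriers in band_A
print(ca_type([1_930e6, 2_520e6]))  # inter-band CA: carriers in two bands
```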
[0068] A transmitter or a receiver may be implemented with a super-heterodyne architecture or a direct-conversion architecture. In the super-heterodyne architecture, a signal is frequency-converted between radio frequency (RF) and baseband in multiple stages, e.g., from RF to an intermediate frequency (IF) in one stage, and then from IF to baseband in another stage for a receiver. In the direct-conversion architecture, a signal is frequency converted between RF and baseband in one stage. The super-heterodyne and direct-conversion architectures may use different circuit blocks and/or have different requirements. In the example shown in FIG. 2A, transmitter 230 and receiver 250 are implemented with the direct-conversion architecture.
[0069] In the transmit path, the data processor 210 processes data to be transmitted and provides in-phase (I) and quadrature (Q) analog output signals to the transmitter 230. In an exemplary embodiment, the data processor 210 includes digital-to-analog converters (DACs) 214a and 214b for converting digital signals generated by the data processor 210 into the I and Q analog output signals, e.g., I and Q output currents, for further processing. In other embodiments, the DACs 214a and 214b are included in the transceiver 220 and the data processor 210 provides data (e.g., for I and Q) to the transceiver 220 digitally.
[0070] Within the transmitter 230, baseband (e.g., lowpass) filters 232a and 232b filter the I and Q analog transmit signals, respectively, to remove undesired images caused by the prior digital-to-analog conversion. Amplifiers (Amp) 234a and 234b amplify the signals from baseband filters 232a and 232b, respectively, and provide I and Q baseband signals. An upconverter 240 having upconversion mixers 241a and 241b upconverts the I and Q baseband signals with I and Q transmit (TX) local oscillator (LO) signals from a TX LO signal generator 290 and provides an upconverted signal. A filter 242 filters the upconverted signal to remove undesired images caused by the frequency upconversion as well as noise in a receive frequency band. A power amplifier 244 amplifies the signal from filter 242 to obtain the desired output power level and provides a transmit RF signal. The transmit RF signal is routed through a duplexer or switch 246 and transmitted via an antenna array 248. While examples discussed herein utilize I and Q signals, those of skill in the art will understand that components of the transceiver may be configured to utilize polar modulation.
[0071] In the receive path, the antenna array 248 receives communication signals and provides a received RF signal, which is routed through duplexer or switch 246 and
provided to a low noise amplifier (LNA) 252. The switch 246 is designed to operate with a specific RX-to-TX duplexer frequency separation, such that RX signals are isolated from TX signals. The received RF signal is amplified by LNA 252 and filtered by a filter 254 to obtain a desired RF input signal. Downconversion mixers 261a and 261b in a downconverter 260 mix the output of filter 254 with I and Q receive (RX) LO signals (i.e., LO_I and LO_Q) from an RX LO signal generator 280 to generate I and Q baseband signals. The I and Q baseband signals are amplified by amplifiers 262a and 262b and further filtered by baseband (e.g., lowpass) filters 264a and 264b to obtain I and Q analog input signals, which are provided to data processor 210. In the exemplary embodiment shown, the data processor 210 includes analog-to-digital converters (ADCs) 216a and 216b for converting the analog input signals into digital signals to be further processed by the data processor 210. In some embodiments, the ADCs 216a and 216b are included in the transceiver 220 and provide data to the data processor 210 digitally.
[0072] In FIG. 2A, TX LO signal generator 290 generates the I and Q TX LO signals used for frequency upconversion, while RX LO signal generator 280 generates the I and Q RX LO signals used for frequency downconversion. Each LO signal is a periodic signal with a particular fundamental frequency. A phase locked loop (PLL) 292 receives timing information from data processor 210 and generates a control signal used to adjust the frequency and/or phase of the TX LO signals from LO signal generator 290. Similarly, a PLL 282 receives timing information from data processor 210 and generates a control signal used to adjust the frequency and/or phase of the RX LO signals from LO signal generator 280.
[0073] In an exemplary embodiment, the RX PLL 282, the TX PLL 292, the RX LO signal generator 280, and the TX LO signal generator 290 may alternatively be combined into a single LO generator circuit 295, which may include common or shared LO signal generator circuitry to provide the TX LO signals and the RX LO signals. Alternatively, separate LO generator circuits may be used to generate the TX LO signals and the RX LO signals.
[0074] Wireless device 200 may support CA and may (i) receive multiple downlink signals transmitted by one or more cells on multiple downlink carriers at different frequencies and/or (ii) transmit multiple uplink signals to one or more cells on multiple uplink carriers. Those of skill in the art will understand, however, that aspects described
herein may be implemented in systems, devices, and/or architectures that do not support carrier aggregation.
[0075] Certain components of the transceiver 220 are functionally illustrated in FIG. 2A, and the configuration illustrated therein may or may not be representative of a physical device configuration in certain implementations. For example, as described above, transceiver 220 may be implemented in various integrated circuits (ICs), RF ICs (RFICs), mixed-signal ICs, etc. In some embodiments, the transceiver 220 is implemented on a substrate or board such as a printed circuit board (PCB) having various modules, chips, and/or components. For example, the power amplifier 244, the filter 242, and the switch 246 may be implemented in separate modules or as discrete components, while the remaining components illustrated in the transceiver 220 may be implemented in a single transceiver chip.
[0076] The power amplifier 244 may comprise one or more stages comprising, for example, driver stages, power amplifier stages, or other components, that can be configured to amplify a communication signal on one or more frequencies, in one or more frequency bands, and at one or more power levels. Depending on various factors, the power amplifier 244 can be configured to operate using one or more driver stages, one or more power amplifier stages, one or more impedance matching networks, and can be configured to provide good linearity, efficiency, or a combination of good linearity and efficiency.
[0077] In an exemplary embodiment in a super-heterodyne architecture, the power amplifier 244 and LNA 252 (and filter 242 and/or 254 in some examples) may be implemented separately from other components in the transmitter 230 and receiver 250, and may be implemented on a millimeter wave integrated circuit. An example super-heterodyne architecture is illustrated in FIG. 2B.
[0078] FIG. 2B is a block diagram showing a wireless device in which aspects of the present disclosure may be implemented. Certain components of the wireless device 200a in FIG. 2B, for example those indicated by identical reference numerals, may be configured similarly to those in the wireless device 200 shown in FIG. 2A, and the description of identically numbered items in FIG. 2B will not be repeated.
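As a compact, idealized summary of the direct-conversion transmit path of paragraphs [0069] and [0070] (a standard textbook model rather than language from this disclosure; it ignores filtering, amplification, and impairment effects, and sign conventions vary): with baseband components $I(t)$ and $Q(t)$ and a TX LO at frequency $f_{LO}$, the pair of upconversion mixers 241a and 241b and their summation produce

$$s_{RF}(t) = I(t)\cos(2\pi f_{LO} t) - Q(t)\sin(2\pi f_{LO} t).$$

The receive path reverses this operation: mixing the received signal with the same quadrature LO pair and lowpass filtering recovers $I(t)$ and $Q(t)$, corresponding to the roles of the downconversion mixers 261a and 261b and the baseband filters 264a and 264b.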
[0079] The wireless device 200a is an example of a heterodyne (or super-heterodyne) architecture in which the upconverter 240 and the downconverter 260 are configured to process a communication signal between baseband and an intermediate frequency (IF). For example, the upconverter 240 may be configured to provide an IF signal to an upconverter 275. In an exemplary embodiment, the upconverter 275 may comprise a summing function 278 and an upconversion mixer 276. The summing function 278 combines the I and the Q outputs of the upconverter 240 and provides a non-quadrature signal to the mixer 276. The non-quadrature signal may be single ended or differential. The mixer 276 is configured to receive the IF signal from the upconverter 240 and TX RF LO signals from a TX RF LO signal generator 277, and provide an upconverted mmW signal to phase shift circuitry 281. While PLL 292 is illustrated in FIG. 2B as being shared by the signal generators 290, 277, a respective PLL for each signal generator may be implemented.
[0080] In an exemplary embodiment, components in the phase shift circuitry 281 may comprise one or more adjustable or variable phased array elements, and may receive one or more control signals from the data processor 210 over connection 289 and operate the adjustable or variable phased array elements based on the received control signals.
[0081] In an exemplary embodiment, the phase shift circuitry 281 comprises phase shifters 283 and phased array elements 287. Although three phase shifters 283 and three phased array elements 287 are shown for ease of illustration, the phase shift circuitry 281 may comprise more or fewer phase shifters 283 and phased array elements 287.
[0082] Each phase shifter 283 may be configured to receive the mmW transmit signal from the upconverter 275, alter the phase by an amount, and provide the mmW signal to a respective phased array element 287. Each phased array element 287 may comprise transmit and receive circuitry including one or more filters, amplifiers, driver amplifiers, and/or power amplifiers. In some embodiments, the phase shifters 283 may be incorporated within respective phased array elements 287.
[0083] The output of the phase shift circuitry 281 is provided to an antenna array 248. In an exemplary embodiment, the antenna array 248 comprises a number of antennas that typically correspond to the number of phase shifters 283 and phased array elements 287, for example such that each antenna element is coupled to a respective phased array
element 287. In an exemplary embodiment, the phase shift circuitry 281 and the antenna array 248 may be referred to as a phased array.
[0084] In a receive direction, an output of the phase shift circuitry 281 is provided to a downconverter 285. In an exemplary embodiment, the downconverter 285 may comprise an I/Q generation function 291 and a downconversion mixer 286. In an exemplary embodiment, the mixer 286 downconverts the receive mmW signal provided by the phase shift circuitry 281 to an IF signal according to RX mmW LO signals provided by an RX mmW LO signal generator 279. The I/Q generation function 291 receives the IF signal from the mixer 286 and generates I and Q signals for the downconverter 260, which downconverts the IF signals to baseband, as described above. While PLL 282 is illustrated in FIG. 2B as being shared by the signal generators 280, 279, a respective PLL for each signal generator may be implemented.
[0085] In some embodiments, the upconverter 275, downconverter 285, and the phase shift circuitry 281 are implemented on a common IC. In some embodiments, the summing function 278 and the I/Q generation function 291 are implemented separate from the mixers 276 and 286 such that the mixers 276, 286 and the phase shift circuitry 281 are implemented on the common IC, but the summing function 278 and I/Q generation function 291 are not (e.g., the summing function 278 and I/Q generation function 291 are implemented in another IC coupled to the IC having the mixers 276, 286). In some embodiments, the LO signal generators 277, 279 are included in the common IC. In some embodiments in which the phase shift circuitry 281 is implemented on a common IC with the mixers 276, 286, the LO signal generators 277, 279, the summing function 278, and/or the I/Q generation function 291, the common IC and the antenna array 248 are included in a module, which may be coupled to other components of the transceiver 220 via a connector. In some embodiments, the phase shift circuitry 281, for example, a chip on which the phase shift circuitry 281 is implemented, is coupled to the antenna array 248 by an interconnect. For example, components of the antenna array 248 may be implemented on a substrate and coupled to an integrated circuit implementing the phase shift circuitry 281 via a flexible printed circuit.
[0086] In some embodiments, both the architecture illustrated in FIG. 2A and the architecture illustrated in FIG. 2B are implemented in the same device. For example, a wireless device 110 or 200 may be configured to communicate with signals having a frequency below about 20 GHz using the architecture illustrated in FIG. 2A and to
communicate with signals having a frequency above about 20 GHz using the architecture illustrated in FIG. 2B. In devices in which both architectures are implemented, one or more components of FIGs. 2A and 2B that are identically numbered may be shared between the two architectures. For example, both signals that have been downconverted directly to baseband from mmW and signals that have been downconverted from mmW to baseband via an IF stage may be filtered by the same baseband filter 264. In other embodiments, a first version of the filter 264 is included in the portion of the device which implements the architecture of FIG. 2A and a second version of the filter 264 is included in the portion of the device which implements the architecture of FIG. 2B.
[0087] FIG. 2C is a block diagram 297 showing in greater detail an embodiment of some of the components of FIG. 2B. In an exemplary embodiment, the upconverter 275 provides an mmW transmit signal to the phase shift circuitry 281 and the downconverter 285 receives an mmW receive signal from the phase shift circuitry 281. In an exemplary embodiment, the phase shift circuitry 281 comprises an mmW variable gain amplifier (VGA) 284, a splitter/combiner 288, the phase shifters 283 and the phased array elements 287. In an exemplary embodiment, the phase shift circuitry 281 may be implemented on a millimeter-wave integrated circuit (mmWIC). In some such embodiments, the upconverter 275 and/or the downconverter 285 (or just the mixers 276, 286) are also implemented on the mmWIC. In an exemplary embodiment, the mmW VGA 284 may comprise a TX VGA 293 and an RX VGA 294. In some embodiments, the TX VGA 293 and the RX VGA 294 may be implemented independently. In other embodiments, the VGA 284 is bidirectional. In an exemplary embodiment, the splitter/combiner 288 may be an example of a power distribution network and a power combining network. In some embodiments, the splitter/combiner 288 may be implemented as a single component or as a separate signal splitter and signal combiner. The phase shifters 283 may be coupled to respective phased array elements 287. Each respective phased array element 287 is coupled to a respective antenna element in the antenna array 248. In an exemplary embodiment, phase shifters 283 and the phased array elements 287 receive control signals from the data processor 210 over connection 289. The exemplary embodiment shown in FIG. 2C comprises a 1x4 array having four phase shifters 283-1, 283-2, 283-3 and 283-n, four phased array elements 287-1, 287-2, 287-3 and 287-n, and four antennas 248-1, 248-2, 248-3 and 248-n. However, a 1x4 phased array is shown for example only, and other configurations, such as 1x2, 1x6, 1x8, 2x3, 2x4, or other configurations are possible.
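The 1x4 phased array of FIG. 2C steers its beam by applying a progressive phase across elements, whether in the phase shifters 283 or at the mixers as described in the next paragraph. The sketch below illustrates the standard uniform-linear-array phase relation; the 28 GHz frequency, half-wavelength element spacing, and function name are assumptions chosen for illustration, not parameters from this disclosure.

```python
import math

# Illustrative sketch: progressive phase shifts for steering a uniform
# linear array, as applied by phase shifters such as 283-1..283-n.
# Half-wavelength element spacing is an assumption for illustration.

C = 3.0e8  # speed of light, m/s

def steering_phases_deg(freq_hz: float, n_elements: int,
                        spacing_m: float, theta_deg: float) -> list[float]:
    """Per-element phase (degrees) steering the main lobe to theta_deg
    from broadside: phi_n = -2*pi*n*d*sin(theta)/lambda."""
    lam = C / freq_hz
    theta = math.radians(theta_deg)
    return [math.degrees(-2 * math.pi * n * spacing_m * math.sin(theta) / lam) % 360
            for n in range(n_elements)]

freq = 28e9             # a common mmW frequency, above 20 GHz
d = (C / freq) / 2      # half-wavelength spacing, about 5.36 mm
print(steering_phases_deg(freq, 4, d, 30.0))
# -> [0.0, 270.0, 180.0, 90.0]: each element lags its neighbor by 90 degrees
```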
[0088] Examples illustrated with respect to FIGs. 2B and 2C implement phase shifting (e.g., using phase shifters 283) in a signal path of the wireless device 200a. In other examples, the phase shifters 283 are omitted, and a phase of a signal may be adjusted by varying a phase at the mixers 276, 286. In some examples, the LO signal generators 277, 279 are configured to provide oscillating signals having varied phase in order to produce TX and/or RX signals having different phases. In some such examples, more than one mixer is implemented for the TX path and/or the RX path in the circuitry 281.
[0089] The circuitry of FIGs. 2B and 2C can, in some implementations, generate sufficient heat to cause operation problems for a device if the heat is not appropriately dissipated. One device configuration is to attach a metallic heatsink to a mmW module supporting mmW communications, with a separate non-mmW antenna implemented in the device apart from the heatsink, to prevent the heatsink from interfering with operation of the non-mmW antenna while providing sufficient heat transfer and dissipation to manage heat generated by the mmW module.
[0090] FIG. 3A is a diagram illustrating aspects of a heatsink 330 and a mmW module 310 for integration of mmW and non-mmW antennas in an apparatus 300 in accordance with aspects described herein. FIG. 3B is an additional diagram illustrating aspects of the heatsink 330 and the mmW module 310 for integration of mmW and non-mmW antennas in accordance with aspects described herein. FIG. 3A particularly shows an exploded view of separate parts of apparatus 300 that are in physical contact when implemented in a device, to clarify the structure of the example components of apparatus 300. FIG. 3B shows a connected view where apparatus 300 is assembled, with the heat dispersion adhesive 320 not visible at selected physical contact interfaces between the mmW module 310 and the heatsink 330.
[0091] As shown, the apparatus 300 includes the mmW module 310, the heat dispersion adhesive 320, and the heatsink 330, which is structured as a non-mmW antenna. The mmW module 310 includes one or more mmW antennas for enabling mmW communications, as well as additional supporting circuitry, which can include various aspects of the circuitry described above in FIGs. 2B and 2C. Additional details of internal structures of mmW modules such as the mmW module 310 are discussed below with respect to FIGs. 8A, 8B, 8C, and 8D.
[0092] The apparatus 300 additionally may include various forms of the heat dispersion adhesive 320. In some aspects, the apparatus 300 includes a thermally conductive epoxy adhesive as the heat dispersion adhesive 320. Such epoxy adhesives can include silicone epoxies, polyurethane epoxies, and other such epoxy materials, which can be selected based on the expected thermal environment and desired thermal transfer characteristics. Some thermally conductive epoxies in accordance with aspects described herein have a thermal conductivity of approximately 0.5 watts per meter-kelvin (W/mK) (e.g., between approximately 0.4 and 0.6 W/mK). High performance thermal epoxies may have thermal conductivity over 1.5 W/mK (e.g., between 1.5 and 3 W/mK) in some implementations. In some implementations, a heat dispersion adhesive 320 can be combined with a non-adhesive thermal material to further improve heat transfer performance, with a pattern of adhesive combined with non-adhesive thermal transfer material. Such non-adhesive thermal transfer materials (e.g., thermal paste, thermal grease, etc.) can have thermal conductivity characteristics up to approximately 70 W/mK using filler materials such as zinc oxide, ceramics, aluminum, copper, silver, graphite, and/or carbon nanoparticles along with other materials. In different implementations, electrically conductive or electrically non-conductive adhesives can be used, or combinations of such adhesives can be used based on a particular design and antenna operation to prevent mmW and non-mmW antennas from interfering with each other. Some such epoxies can include silver filled epoxy, graphite filled epoxy, or other such conductive epoxies. In some aspects, the heat dispersion adhesive 320 can be a thermally conductive tape material. In other aspects, other such adhesives can be used, or combinations of various adhesives can be used.
[0093] In some aspects of such an apparatus, the heat dispersion adhesive 320 is optional, or alternative heat dispersion materials can be used. In some aspects, a non-adhesive conductive material can be used at portions of the physical connection between the mmW module 310 and the heatsink 330. In such aspects, the apparatus can use alternative methods of maintaining a connection between the mmW module 310 and heatsink 330, such as mechanical fasteners at fixed points, adhesives at certain points other than where a heat transfer material is located, or other such mechanisms for maintaining a mechanical (e.g., physical) connection between the mmW module 310 and the heatsink 330 to facilitate heat transfer from the mmW module 310 to the heatsink 330, and associated heat dispersion via the heatsink 330.
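To put the conductivity figures of paragraph [0092] in context, steady-state conduction through a flat adhesive layer follows Q = k·A·ΔT/t. The sketch below is illustrative only; the layer area, bond-line thickness, and temperature difference are hypothetical values chosen for the example, not dimensions from this disclosure.

```python
# Illustrative sketch: steady-state conduction through a thermal
# adhesive layer, Q = k * A * dT / t. All dimensions are hypothetical.

def conducted_power_w(k_w_per_mk: float, area_m2: float,
                      thickness_m: float, delta_t_k: float) -> float:
    """Heat flow (W) through a flat layer of conductivity k (W/mK)."""
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

area = 17.5e-3 * 3.5e-3  # hypothetical footprint, similar in scale to module
                         # side dimensions discussed elsewhere herein (m^2)
thickness = 100e-6       # hypothetical 100 micrometer bond line
delta_t = 20.0           # hypothetical 20 K module-to-heatsink difference

for k in (0.5, 1.5, 3.0):  # conductivities quoted in paragraph [0092], W/mK
    q = conducted_power_w(k, area, thickness, delta_t)
    print(f"k = {k} W/mK -> {q:.2f} W conducted")
```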
[0094] As described herein, the apparatus 300 includes one or more mmW antennas in the mmW module 310, and also includes a non-mmW antenna as part of the heatsink 330. The non-mmW antenna is part of the design structure of the heatsink 330, and may be configured to dissipate heat or may otherwise factor into the thermal design of the heatsink 330. Such a design can function with a metallic or conductive portion of the structure for the heatsink 330 integrated directly as a non-mmW antenna without sacrificing mmW or non-mmW antenna performance, and while preserving heat dispersion characteristics. By fine tuning the structure of the heatsink 330 as part of design of the apparatus 300, the non-mmW antenna aspect of the heatsink 330 allows flexibility to provide antenna performance or additional radio access technology (RAT) functionality for a given mmW module based on the particular design of the heatsink 330 and design preferences of a device including the apparatus 300. For example, parameters (width, length, thickness, shape, material, grounding points, distance from the mmW module 310, etc.) of the non-mmW antenna may be adjusted based on the frequency at which communications may be transmitted and/or received, based on desired antenna efficiency or radiated power, based on electrical or conductive components which will be positioned near the apparatus 300 when included in a device, etc. As illustrated in FIG. 3A and FIG. 3B, the heatsink 330 includes metal structures that can be configured for particular RAT and frequency operation, as well as providing physical structures for connections between the mmW module 310 and the heatsink 330 (e.g., using adhesive 320), as well as the illustrated structures for physically fastening apparatus 300 to other elements of a device (e.g., via screw holes for fastening to frame structures of a mobile device, a laptop, a tablet, CPE, or any other such devices including mmW and non-mmW wireless communication support). In some embodiments, screws or other connectors which fasten the heatsink 330 to a frame or chassis of a device (e.g., via the holes illustrated in FIGs. 3A and 3B) couple the heatsink 330 to system ground (e.g., near each end).
[0095] In various aspects, the apparatus 300 can be configured with additional control or communication circuitry configured to provide data signals compatible with a particular RAT. As described herein, “data signals” include signals transmitted and received as part of a communication system, ranging codes in global positioning systems, radar signals (e.g., transmissions or reflections including data about local objects), or other such codes or signals including information that can be received by an antenna and processed by
control circuitry coupled to the antenna. The non-mmW antenna can receive an amplified signal via a signal feed that is particularly configured and amplified to a given gain level for the non-mmW antenna and an associated RAT. Such a RAT may, for example, have particular power transmission limits, with the data signal amplified to within a threshold level of the power transmission limits in order to provide for acceptable transmission distances while avoiding excessive electromagnetic exposure to sensitive objects or individuals near the apparatus 300. The heatsink in such aspects is not simply reflecting ambient signals, but is configured as a non-mmW device to receive signals in a particular RAT configuration and/or transmit signals in the RAT configuration, within power limits defined by the RAT standard operation. For example, the non-mmW antenna of the non-mmW device may be configured to resonate or radiate at a certain frequency so as to provide a desired gain to communication signals, operate with a desired EIRP, or perform according to another metric that is determined to be effective for wireless communication. In some embodiments, the non-mmW antenna is an example of the antenna array 248 in FIG. 2A. Signals are directed to or from the non-mmW antenna using a signal feed element coupled to the heatsink. Similarly, mmW module 310 can be coupled to circuitry that provides a data signal at mmW signal nodes (e.g., ports). Signals passed between the mmW antenna(s) and a mmW signal node (e.g., a signal port or a node or position in the signal path) are processed in the mmW module (e.g., subject to beamforming, phase shifting, power amplification, etc.) to provide defined communication performance, for example as described above in FIGs. 2B and 2C.
[0096] FIG. 4A is a diagram illustrating aspects of a device 401 including an apparatus 400. Various aspects and portions of the apparatus 400 are illustrated in additional detail in FIGs. 4B-4E, and may not be individually visible or identified in the illustration of FIG. 4A.
[0097] The apparatus 400 includes a heatsink 430 and a mmW module 410 for integration of mmW and non-mmW antennas in accordance with aspects described herein. While device 401 particularly illustrates an example of a mobile device, in other aspects, the apparatus 400 can be integrated and/or customized for any type of device including wireless communication support for mmW and non-mmW frequencies as described above. The apparatus 400 particularly includes heatsink 430 configured as a non-mmW antenna having a feed point 432, as well as mmW module 410 and a heat dispersion
adhesive 420. In one implementation, the mmW module 410 is approximately 2 millimeters (mm) wide, 3.5 mm tall, and 24 mm long. In some such implementations, the heatsink 430 can include mechanical attachments to the mmW module 410 that extend along any surface of the dimensions of the mmW module 410. In some implementations, the heatsink 430 can extend any distance past the dimensions of the mmW module to provide structure for the non-mmW antenna that makes up part of the heatsink and radiates thermal energy to a heat dissipation environment (e.g., air, a thermal dissipation liquid, etc.). In some examples, the adhesive 420 covers a side of the mmW module 410 that is approximately 17.5 mm of the 24 mm length of the mmW module on the side that is approximately 3.5 mm. In some examples, the adhesive 420 can further extend past the side of the mmW module 410 by approximately 2 mm, and extend across a portion of the heatsink 430 attached to frame metal 450 without touching the mmW module 410. As illustrated in FIG. 4D, this results in a gap between the mmW module 410 and a portion of the heatsink 430 that mechanically (e.g., physically) directly attaches to the frame metal 450, with the adhesive 420 attached to the heatsink 430 on one end and across the gap, where the adhesive 420 is not attached to the mmW module 410 across the gap (e.g., as shown in the lower right side of the apparatus 400 in FIG. 4D just above the frame metal 450 and to the right of the non-mmW feed point 432). In other examples, other such dimensions can be used, with heatsink dimensions configured to both support a given non-mmW antenna frequency and to support a given level of heat transfer from the mmW module 410 to a dispersion environment via the heatsink 430.
[0098] FIG. 4B is a diagram illustrating an implementation of the heatsink 430 as a non-mmW antenna in accordance with aspects described herein. As illustrated, the heatsink 430 includes a non-mmW feed point 432 for receiving non-mmW signals to be transmitted using the integrated heatsink 430 or a portion thereof as a non-mmW antenna and/or for receiving signals wirelessly received by the non-mmW antenna of the heatsink 430. For example, the feed point 432 may be coupled to the PA 244 or switch 246 illustrated in FIG. 2A, for example using a cable or conductive trace or line, etc. Additional details of the non-mmW antenna operation and configuration to avoid interference with mmW communications using the mmW module are discussed below with respect to FIG. 4E.
[0099] The heatsink 430 is configured with conductive elements configured to radiate signals at non-mmW frequencies. In some aspects, the heatsink is designed to radiate at
frequencies at or around 1.6 GHz to receive global positioning system (GPS) signals (e.g., 1.575 GHz). In other aspects, the antenna can be designed to receive other non-mmW GPS signals (e.g., 1.2276 GHz, L2; 1.176 GHz, L5; etc.). In further aspects, the antenna can be designed to receive or transmit signals below 7 GHz, in communication bands between 1.5 GHz and 4.75 GHz, 800 megahertz (MHz) to 1.2 GHz, 600 MHz to 700 MHz (e.g., LTE low bands), 6 GHz to 7 GHz (e.g., WiFi 6E bands), or at other such non-mmW frequencies or frequency ranges, for example to communicate according to a 5G, 4G, 3G, 2G, WiFi, Bluetooth, etc. standard or according to another communication protocol or strategy.
[00100] In some examples, the heatsink 430 is constructed using a single, integral piece of a material. For example, the heatsink 430 illustrated in FIGs. 4A-4E may be constructed of a single piece of metal folded into the illustrated shape. In other examples, the heatsink 430 is composed of several pieces which are physically (e.g., permanently or semipermanently) connected together. For example, the heatsink 430 may be constructed by fastening several different conductive pieces and/or materials together.
[00101] FIG. 4C is a diagram illustrating aspects of the apparatus 400 including the heatsink 430 and the mmW module 410 for integration of mmW and non-mmW antennas in accordance with aspects described herein. FIG. 4D is a diagram illustrating aspects of the apparatus 400 including the heatsink 430 and the mmW module 410 for integration of mmW and non-mmW antennas in accordance with aspects described herein. FIG. 4E is a diagram illustrating aspects of the apparatus 400 including the heatsink 430 and the mmW module 410 for integration of mmW and non-mmW antennas along with an effective beam width 440 for mmW communications in accordance with aspects described herein. FIG. 4C shows an exploded view of the apparatus 400 including non-mmW antenna and heatsink 430, heat dispersion adhesive 420, and mmW module 410. FIG. 4D shows a front view of apparatus 400, and FIG. 4E shows an end view of apparatus 400. While apparatus 400 of FIGs. 4A-E is one specific example of an apparatus with a heatsink for integrating mmW and non-mmW antennas, other implementations will be apparent based on operational characteristics and heat dispersion design preferences.
[00102] As shown, the heat dispersion adhesive 420 need not be limited to the direct contact areas where heatsink 430 would directly contact the mmW module 410 frame (e.g., packaging or other such mmW frame or structure). By physically coupling the heat
dispersion adhesive 420 to additional portions of the heatsink 430, the thermal dissipation of energy transferred from mmW module 410 to heatsink 430 via heat dispersion adhesive 420 can be increased. While additional surface area contact between mmW module 410 and heat dispersion adhesive 420 allows a greater transfer of thermal energy from the mmW module, in certain implementations, a limiting factor in thermal performance is the ability for heatsink 430 to dissipate the thermal energy, and so increased contact between the heat dispersion adhesive 420 and the heatsink 430 may increase the heat dissipation performance of the apparatus 400 without the heat dispersion adhesive 420 being connected to all available surfaces of the mmW module 410. In other implementations, transfer of thermal energy from the mmW module 410 to the heatsink 430 may be a limiting factor in thermal performance, and so the surface of the mmW module 410 physically coupled to the heat dispersion adhesive 420 is maximized to reduce the thermal performance bottleneck. As described above, in other implementations, other thermal transfer configurations, including configurations with no heat dispersion adhesive, may be used based on the specific thermal performance characteristics of a corresponding mmW module. In addition to the thermal performance of the heat dispersion adhesive 420 along with the heat transfer characteristics of the heatsink 430 and the mmW module 410 (e.g., the frame or packaging of the mmW module 410), a thermal dissipation medium around the heatsink 430 can also impact design of the apparatus 400. If, for example, air is the thermal dissipation medium, the presence of venting or a fan will impact the expected thermal dissipation performance of the heatsink 430. Similarly, if the apparatus 400 is structured within an environment with a specially designed heat transfer fluid or liquid other than air (e.g., having greater thermal dissipation characteristics than air), the heatsink 430 can be structured differently.
[00103] Additionally, a device including the apparatus 400 can have frame metal 450 which can be used to provide a reference voltage (e.g., ground) connection to portions of the heatsink 430. In some aspects, the frame, including the mid-frame metal, can further include metal or other thermally conductive surfaces that can further serve as a heat transfer medium to assist heatsink 430 in conducting thermal energy away from the mmW module 410. In some implementations, the mid-frame metal can be part of the electrical design of the antenna aspects of the heatsink 430. In other aspects, the frame metal 450 can be electrically isolated from the heatsink 430, or can be structured of other
material to provide a physical structure and placement for the apparatus 400 without impacting the electrical or wireless performance of the apparatus 400.
[00104] In some implementations, the heatsink 430 is configured for physical (e.g., mechanical) connections with only one side of a mmW module. In other implementations, the surface area connection is maximized with physical connections between two or more sides of the mmW module, where conductors and/or reference voltage (e.g., ground) elements of a non-mmW antenna wrap around edges of a mmW module frame to provide physical coupling and associated heat transfer from multiple sides of mmW module 410.
[00105] In some implementations, the heatsink connection can be structured as part of a particular antenna design. For example, a monopole or dipole antenna can be configured with conductors on a single side of a mmW module, or with conductors on a single side of a mmW module and a reference (e.g., ground) plane on a different side of the mmW module. In some implementations, an L-antenna or an inverted-F antenna can be configured with portions of the L or F shape of antenna conductors wrapped around the mmW module to have conductors on multiple sides of the mmW module. In such implementations, the conductor placement, the feed point positioning, and placement of any ground plane around a frame or package shape of the mmW module can be particularly designed based on the desired antenna characteristics of the non-mmW antenna. For example, in some implementations the heatsink 430 illustrated in FIGs. 4A-4E is configured as an inverted-F antenna, with the portion of the heatsink 430 that is coupled to the frame metal 450 being a ground coupling. The position of the feed point 432, the total electrical length of the heatsink starting from the feed point 432, a distance between the feed point 432 and the grounded portion, etc. may all be adjusted so as to implement an antenna having a desired radiation frequency, a desired impedance, etc. Similarly, in loop antenna implementations for the non-mmW antenna/heatsink, the antenna can wrap completely around the mmW module frame, or wrap around the outside of the frame with a small break depending on the particular loop antenna design. In other aspects, antennas with any such shapes that also allow physical connections and associated thermal transfer between a mmW module and a heatsink/non-mmW antenna can be used. Further, while examples having one feed point 432 are discussed above, additional feed points may be used. For example, two feedpoints may be used to configure the non-mmW antenna as a dipole. As another example, the heatsink may be split into two portions which are electrically disconnected
and a feedpoint may be connected to each portion such that two non-mmW antennas may be formed from the heatsink. In some examples, more than two antennas are formed by the heatsink. Additional examples are described below with respect to FIGs. 5, 6, and 7.
[00106] In some aspects, the thermally conductive adhesive 420 may have electrically conductive or electrically resistive properties, and the heat dispersion adhesive, or different heat dispersion adhesives (e.g., with different electrical characteristics), can be positioned to impact antenna operation.
[00107] In addition to the thermal performance of the heatsink 430, the non-mmW communication performance of heatsink 430 and the mmW communication performance of mmW module 410 are important characteristics of the apparatus 400. As illustrated, mmW module 410 has an effective beam width 440 (e.g., an area for the mmW signal focus, where the area outside the effective beam width 440 has a signal power below a threshold value or below a threshold ratio from a peak value at a center of the effective beam width). In some aspects, the effective beam width can be a range within which the beam can be steered (e.g., beam steering can achieve acceptable power transmission or other acceptable performance characteristics within the defined effective beam width 440). Such a beam width can be based on the antenna array (e.g., one or more antenna elements of the mmW module 410) and/or phase shift circuitry of the mmW module 410, as well as interference from the non-mmW antenna characteristics of heatsink 430. In the implementation illustrated in FIG. 4E, the effective beam width 440 allows the mmW module 410 to radiate and/or receive signals in the gap area where no conductor of the non-mmW antenna (e.g., the heatsink 430) is present. The illustrated structure allows the mmW signals and the non-mmW signals to communicate independently without regard for interference between the mmW and non-mmW antenna signals.
[00108] In such embodiments, the particular non-mmW antenna can then be configured to avoid interference with the effective beam width 440 for the mmW module 410. In the example of the apparatus 400 of FIGs. 4A-E, the particular non-mmW antenna shape of the heatsink 430 allows transmission of non-mmW signals without interference with mmW signals in the effective beam width (e.g., the three-dimensional beam pattern) from mmW module 410. In other configurations of non-mmW antennas, including monopole, dipole, L-antenna, inverted-F, loop antenna, and other such configurations, physical positioning of the non-mmW antenna (e.g., conductor elements and reference or ground
elements of the heatsink 430) can be implemented to avoid interference between mmW signals and non-mmW signals while providing for operation of the non-mmW antenna as a heatsink to dissipate thermal energy from the mmW module 410.
[00109] In some examples, heat dissipated from a portion of the heatsink 430 which primarily resonates or radiates when communicating at a configured frequency is relatively small, for example minimal or approximately zero. Thus, while a portion of the heatsink 430 may be configured as a non-mmW antenna, it is not required that the non-mmW portion significantly contribute to the dissipation of heat. Similarly, portions of the heatsink 430 which primarily or significantly dissipate heat may not effectively radiate signals in a frequency at which a device incorporating the heatsink 430 is configured to operate. In some examples, a heat dissipation portion of the heatsink 430 does not effectively radiate such signals, but forms a portion of the antenna. For example, in the configuration illustrated in FIGs. 4A-4E, the portion of the heatsink 430 that is coupled to the frame metal 450 may dissipate a significant amount or proportion of heat and may be configured to provide a ground coupling for an inverted-F non-mmW antenna configuration, but may not effectively radiate signals in a desired communication frequency. In other embodiments, heat dissipation and communication signal radiation portions may partially, mostly, or entirely overlap or be the same.
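Before turning to the specific slot, monopole, and loop examples of FIGs. 5-7, the quarter- and half-wavelength dimensions involved can be sanity-checked from the free-space relation λ = c/f. The sketch below computes free-space values only; practical elements are typically shortened by dielectric loading and nearby metal, which is one reason a conductor such as the approximately 24.1 mm monopole described below can be much shorter than the free-space quarter wavelength for a sub-2 GHz operating frequency.

```python
# Illustrative sketch: free-space wavelength fractions for candidate
# non-mmW frequencies. Real antenna lengths are typically shorter due
# to dielectric loading, so these are upper-bound sanity checks only.

C = 3.0e8  # speed of light, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Free-space wavelength in millimeters."""
    return C / freq_hz * 1e3

for label, f in (("GPS L1", 1.575e9), ("GPS L2", 1.2276e9), ("~1.1 GHz", 1.1e9)):
    lam = wavelength_mm(f)
    print(f"{label}: lambda = {lam:.1f} mm, "
          f"lambda/4 = {lam / 4:.1f} mm, lambda/2 = {lam / 2:.1f} mm")
# GPS L1: lambda = 190.5 mm, lambda/4 = 47.6 mm, lambda/2 = 95.2 mm
```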
metal 550 (e.g., the heatsink 530 is between the feed 532 and the point of view of FIG. 5B). FIG. 5C shows the same point of view as FIG. 5A, but without the mmW module 510. None of FIGs. 5A-C show an adhesive. In some examples, the area of heatsink 530 covered by mmW module 510 in FIG. 5 A would have an adhesive used to facilitate conduction of thermal energy from the mmW module 510 to the heatsink 530 for dissipation of the heat. In other aspects, such a thermally conductive adhesive can cover additional area of the heatsink 530, so that in certain portions of the heatsink, the adhesive is coupled on one side to the heatsink 530, and the other side of the adhesive layer is not coupled to the mmW module, in order to assist with the transfer of thermal energy from the mmW module 510 to the heatsink 530. In some such examples, the adhesive is roughly “L” shaped so as to create a gap between the mmW module 510 and a portion of the heatsink 530 where the adhesive is exposed, similar in some respects to the configuration described with respect to FIG. 4D. In other examples, other configurations can be used.[00112] The heatsink 530 is illustrated as having a wide portion that contacts (directly or indirectly) all of a backside of the mmW module 510. In contrast, the heatsink 430 has a broad base for attachment to the frame 450 and a narrow arm extending therefrom which wraps around at least one side of the mmW module 410 and follows an edge of the mmW module 410 without contacting the mmW module 410. In some examples, spacing the narrow arm or any wrap-around portions of the heatsink a threshold distance from the mmW module 410 reduces the likelihood of degrading performance of any of the mmW antennas and the non-mmW antenna. As illustrated in FIGs. 4, the mmW module 410 may be coupled to the heatsink at only a small segment other than the broad base. The width and/or length of the heatsink may be designed to meet one or more intended uses. For example, wrapping portions of the heatsink around the mmW module may allow for an increased length of the non-mmW antenna in some configurations. In some such configurations, the non-mmW antenna may form a meandering antenna (and may form part of a MIFA, for example). Further, while the heatsinks 430, 530, are illustrated in FIGs. 4 and 5 as being in close proximity to an attached mmW module or roughly following a shape of the mmW module, a portion of the heatsink may extend significantly beyond or away from the mmW module. This may allow for greater heat dissipation and/or for increased non-mmW antenna size in some examples.
[00113] As illustrated by the apparatus 500, the heatsink 530 can include portions of the heatsink that form a non-mmW antenna (e.g., a slot antenna using the slot structure 534). In other implementations, rather than forming the slot structure 534 between the heatsink 530 and the frame metal 550, cutouts in the heatsink 530 can be used to form a slot structure, so that the slot structure can be formed entirely of the heatsink metal. In other examples, any other such structure can be used in accordance with aspects described herein to provide a heatsink for a mmW module, where the heatsink comprises a non-mmW antenna.[00114] FIG. 6 is a diagram illustrating aspects of an apparatus 600 comprising a heatsink 630 (e.g., where the heatsink 630 comprises a non-mmW antenna) and a mmW module 610 for integration of mmW and non-mmW antennas in accordance with aspects described herein. While heatsink 530 of FIGs. 5A-C comprises a quarter wavelength slot antenna, the heatsink 630 of FIG. 6 comprises a quarter wavelength monopole antenna. The monopole has a conductive element that extends from the feed 632 near the frame metal 650 up the height of the mmW module 610, and across the length of the mmW module 610. In various implementations, a conductive element of the monopole antenna can have different physical and/or electrical lengths to support particular applications. In one implementation, the monopole antenna of apparatus 600 is approximately 24.1 mm in length. In some implementations, the heatsink and the associated conductive element can extend entirely across a length of the associated mmW module, can be shorter than a length of the mmW module, or can extend past an edge of the mmW module (e.g., so that the heatsink and the conductive element of the antenna that is integrated with the heatsink are not adjacent to or touching the mmW module directly or via a thermally conductive adhesive). In other examples, other similar antennas with other conductor layouts (e.g., other than a monopole layout) can be used.[00115] FIG. 7 is a diagram illustrating aspects of an apparatus 700 including a heatsink 730 (e.g., where the heatsink 730 comprises a non-mmW antenna) and a mmW module 710 for integration of mmW and non-mmW antennas in accordance with aspects described herein. In FIG. 7, the feed 732 is used as part of a data signal route between the non-mmW antenna portions of the heatsink 730 to circuitry of a device including the apparatus 700. As described above, this can be data as part of a communication system, part of a GPS system, or part of any other such wireless communication or wireless
sensing system. The apparatus 700 of FIG. 7 includes the heatsink 730 comprising a half wavelength loop antenna. As illustrated, the heatsink structure has a non-mmW feed 732 near the heatsink 730 and the frame metal 750. The heatsink extends from the feed 732 up a side of the mmW module 710, and across a top length of the mmW module 710, then down the opposite side of the mmW module 710 away from the feed 732. As above, the frame metal 750 can be used as part of the structure of the apparatus 700 both to support the antenna structure of the heatsink 730, and to physically fix the position of the apparatus 700 in a device and the elements of the apparatus 700 in relative positions. The space below the mmW module 710 which is between the mmW module 710 and the frame metal 750 can be an air gap and can, in some implementations, include a space for a thermally conductive adhesive.[00116] FIGS. 8A, 8B and 8C are block diagrams collectively illustrating some aspects of a millimeter wave (mmW) module in accordance with some aspects of the disclosure. FIG. 8A shows a side view of a millimeter wave (mmW) module 800. The mmW module 800 may be an example of the mmW modules 310 and 410 shown in FIGs. 3A-B and 4A-E. In some aspects, the mmW module 800 may comprise a 1x8 phased array fabricated on a substrate 803. In some aspects, the mmW module 800 may comprise a mmWIC 810, a power management IC (PMIC) 815, a connector 817 and a plurality of antennas 821, 822, 823, 824, 825, 826, 827 and 828 fabricated on the substrate 803. Fewer or additional antennas than illustrated may be implemented. Further, while linear arrays are illustrated in FIGs. 8A-8D, a two-dimensional array may be implemented.[00117] FIG. 8B is a top perspective view of the mmW module 800 showing the mmWIC 810, a PMIC 815, a connector 817 and a plurality of antennas 821, 822, 823, 824, 825, 826, 827 and 828 on the substrate 803. While the antennas 821-828 are shown for ease of explanation, in some configurations the antennas 821-828 may not be visible in such view, for example because they are integral and/or flush with the substrate 803. In some examples, the connector 817 is used to couple the upconverter 240, and/or the downconverter 260, and/or the functions 278, 291 (which all may be implemented external to the module 800) to the upconverter 275 and/or downconverter 285, or to the mixer 276 and/or the mixer 286 (which all may be implemented in the mmWIC 810). The PMIC 815 may be configured to supply system voltages to such components in the mmWIC 810 or other circuitry in the mmWIC 810. FIG. 8C is a bottom perspective view
of the mmW module 800 showing the antennas 821, 822, 823, 824, 825, 826, 827 and 828 on the substrate 803.[00118] FIG. 8D shows an alternative embodiment of a millimeter wave (mmW) module 850. The mmW module 850 may be similar to the mmW module 800 shown in FIG. 8A, but is arranged as a 1x6 array. In some aspects, the mmW module 850 may comprise a 1x6 phased array fabricated on a substrate 853. In some aspects, the mmW module 850 may comprise a plurality of antennas 871, 872, 873, 874, 875 and 876 fabricated on the substrate 853.[00119] In some aspects, every phased array element associated with each antenna 871, 872, 873, 874, 875 and 876 on the mmW module 850 is structured within a thermally conductive frame or with additional thermally conductive elements to conduct thermal energy to an exterior of the mmW module 850, and then to a heatsink. Such a frame may be metallic or of any other such material suitable for providing thermal transfer of heat energy from the mmW module 850 while avoiding interference with mmW signals from each of antennas 871, 872, 873, 874, 875, and 876 (e.g., in an associated effective beam width for the antenna array). Such a frame or package structure can further be particularly configured based on an expected non-mmW antenna configuration and associated physical interfaces for thermal conduction of heat energy to allow the non-mmW antenna to act as a heatsink to dissipate thermal energy from the mmW module while allowing the mmW and non-mmW antennas to operate without mutual interference. Such interference refers to signals and antenna elements disrupting signals to or from another antenna. To avoid mutual interference, the non-mmW antenna of the heatsink is configured to avoid or limit disruption of signals sent to or from the mmW module, and the mmW module is similarly configured to avoid or limit disruption (e.g., interference) with signals sent to or from the non-mmW antenna of the heatsink. As described, such an apparatus combining the elements of a mmW module and a heatsink operating as a non-mmW antenna can be fabricated with reduced size given the lack of separate non-mmW antennas and mmW module heatsink(s). A wide variety of thermal transfer characteristics and non-mmW communication performance can be implemented with modifications to the mmW packaging and heatsink designs.[00120] FIG. 9 is a flow diagram describing an example of the operation of a method for operating a wireless communication apparatus with integrated mmW and non-mmW antennas in accordance with some aspects. The blocks in the method
900 can be performed in or out of the order shown, and in some embodiments, can be performed at least in part in parallel.[00121] The method 900 includes block 902 which involves receiving, at a millimeter wave (mmW) signal node of a mmW module, a mmW signal, the mmW module comprising at least one mmW antenna. The mmW signal can be a signal generated for transmission via the at least one mmW antenna using circuitry of a device, or can be a signal received via the at least one mmW antenna and following a signal path including the mmW signal node to circuitry of the mmW module for processing.[00122] The method 900 includes block 904, which involves receiving, at a heatsink comprising a non-mmW antenna, a non-mmW signal, wherein the heatsink is mechanically coupled to the mmW module at a physical interface. The heatsink can be any structure including a non-mmW antenna described herein, including a heatsink comprising a quarter wave slot antenna, a loop antenna, a monopole antenna, an inverted-F antenna, or any other such antenna.[00123] The method 900 includes block 906, which involves receiving, by the heatsink comprising the non-mmW antenna via the physical interface, thermal energy from the mmW module. The physical interface can include a thermally conductive adhesive, a direct physical contact between the heatsink and the mmW module that allows conduction of thermal energy, or any combination of direct contact or any other thermally conductive materials as described herein.[00124] The method 900 includes block 908, which involves dissipating, utilizing the heatsink comprising the non-mmW antenna, the thermal energy received from the mmW module via conduction to a thermal dissipation medium.[00125] FIG. 10 is a functional block diagram of an apparatus with integrated mmW and non-mmW antennas in accordance with some aspects. The apparatus 1000 comprises means 1002 for transmitting or receiving a mmW signal, and means 1004 for jointly receiving a non-mmW signal and dissipating thermal energy received from the means 1002 for receiving the mmW signal. In some aspects, the means 1002 for receiving the mmW signal is a means for transmitting and/or receiving mmW signals, such as an antenna for communicating 5G signals, or a radar element used to transmit a radar pulse and/or receive a reflection of the radar pulse that includes information (e.g., data) regarding nearby objects. Radar signals
may be processed by radar circuitry in the device 200a. In some aspects, the means 1004 for receiving the non-mmW signal is a means for transmitting and/or receiving non-mmW signals, such as a communication antenna. In other implementations, the means 1004 is a GPS antenna configured to receive GPS code patterns. In some aspects, a thermally conductive adhesive is used to physically attach portions of one or more surfaces of the means 1002 for receiving the mmW signal to portions of one or more surfaces of the means 1004 for jointly receiving the non-mmW signal while dissipating the thermal energy received from the means for receiving the mmW signal. Means 1004 for jointly receiving the non-mmW signal and dissipating thermal energy can be any heatsink described herein that comprises a non-mmW antenna, including the heatsinks of FIGs. 3A, 3B, 4A-C, 5A-C, 6, and 7, as well as additional heatsinks described but not specifically illustrated (e.g., heatsinks comprising central slot antennas, etc.).[00126] Devices, networks, systems, and certain means for transmitting or receiving signals described herein may be configured to communicate via one or more portions of the electromagnetic spectrum. The electromagnetic spectrum is often subdivided, based on frequency or wavelength, into various classes, bands, channels, etc. In 5G NR, two initial operating bands have been identified as frequency range designations FR1 (410 MHz - 7.125 GHz) and FR2 (24.25 GHz - 52.6 GHz). The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “sub-6 GHz” band in various documents and articles, and will be referred to herein as “sub-7 GHz”. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” (mmW) band in documents and articles, despite including frequencies outside of the extremely high frequency (EHF) band (30 GHz - 300 GHz) which is identified by the International Telecommunication Union (ITU) as a “mmWave” or mmW band.[00127] With the above aspects in mind, unless specifically stated otherwise, it should be understood that the term “sub-7 GHz” or the like if used herein may broadly represent frequencies that may be less than 7 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “mmWave”, mmW, or the like if used herein may broadly represent frequencies that may include mid-band frequencies, may be within FR2, or may be within the EHF band.
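For illustration only (the enum names and the strict reading of “sub-7 GHz” below are editorial assumptions, not part of the source text), the band boundaries quoted above can be captured in a small classification helper. The sketch also demonstrates the nomenclature point made in the preceding paragraph: 28 GHz falls within FR2 but outside the ITU-identified EHF band.

/* Minimal sketch: classifying a carrier frequency against the 5G NR
 * band boundaries quoted above. The treatment of the FR1/FR2 gap as
 * "mid-band" follows the text; everything else is illustrative. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { BAND_FR1, BAND_MID, BAND_FR2, BAND_OTHER } nr_band_t;

static nr_band_t classify_nr_band(double f_hz)
{
    if (f_hz >= 410e6 && f_hz <= 7.125e9)
        return BAND_FR1;                  /* 410 MHz - 7.125 GHz */
    if (f_hz > 7.125e9 && f_hz < 24.25e9)
        return BAND_MID;                  /* between FR1 and FR2 */
    if (f_hz >= 24.25e9 && f_hz <= 52.6e9)
        return BAND_FR2;                  /* 24.25 GHz - 52.6 GHz */
    return BAND_OTHER;
}

/* One narrow reading of "sub-7 GHz"; the broad usage above also
 * admits FR1 and mid-band frequencies. */
static bool is_sub7_ghz(double f_hz) { return f_hz < 7e9; }

/* EHF band (30 GHz - 300 GHz) as identified by the ITU. */
static bool is_ehf(double f_hz) { return f_hz >= 30e9 && f_hz <= 300e9; }

int main(void)
{
    printf("28 GHz: FR2? %d, EHF? %d\n",
           classify_nr_band(28e9) == BAND_FR2, is_ehf(28e9));
    printf("1.575 GHz: FR1? %d, sub-7? %d\n",
           classify_nr_band(1.575e9) == BAND_FR1, is_sub7_ghz(1.575e9));
    return 0;
}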
[00128] The circuit architecture described herein may be implemented on one or more ICs, analog ICs, mmWICs, mixed-signal ICs, ASICs, printed circuit boards (PCBs), electronic devices, etc. The circuit architecture described herein may also be fabricated with various IC process technologies such as complementary metal oxide semiconductor (CMOS), N-channel MOS (NMOS), P-channel MOS (PMOS), bipolar junction transistor (BJT), bipolar-CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), heterojunction bipolar transistors (HBTs), high electron mobility transistors (HEMTs), silicon-on-insulator (SOI), etc.[00129] An apparatus implementing the circuit described herein may be a stand-alone device or may be part of a larger device. A device may be (i) a stand-alone IC, (ii) a set of one or more ICs that may include memory ICs for storing data and/or instructions, (iii) an RFIC such as an RF receiver (RFR) or an RF transmitter/receiver (RTR) or corresponding mmW elements, (iv) an ASIC such as a mobile station modem (MSM), (v) a module that may be embedded within other devices, (vi) a receiver, cellular phone, wireless device, handset, or mobile unit, (vii) etc.[00130] Although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims.[00131] Illustrative aspects of the present disclosure include, but are not limited to:[00132] Aspect 1: A wireless communication apparatus, comprising: a millimeter wave (mmW) module comprising: at least one mmW antenna; at least one mmW signal node configured to communicate a data signal in association with the at least one mmW antenna; mixing circuitry configured to convert between the data signal and a mmW signal for communications associated with the at least one mmW antenna; and a heatsink comprising a non-mmW antenna, the heatsink further comprising a non-mmW feed point coupled to the non-mmW antenna to provide a signal path to the non-mmW antenna for a non-mmW signal, wherein the heatsink is mechanically coupled to the mmW module.
[00133] Aspect 2: The wireless communication apparatus of aspect 1, wherein the at least one mmW antenna is configured to radiate in a first effective beam width from a first side of the mmW module, and wherein the non-mmW antenna is structured with a gap positioned at the first side of the mmW module.[00134] Aspect 3: The wireless communication apparatus of aspect 2, wherein the at least one mmW antenna is configured to radiate mmW signals in the first effective beam width at frequencies greater than 20 gigahertz, and wherein the non-mmW antenna is configured to radiate at frequencies less than 7 gigahertz without interfering with the mmW signals in the first effective beam width.[00135] Aspect 4: The wireless communication apparatus of any of aspects 1 through 3, wherein the heatsink is physically coupled to two or more sides of the mmW module other than the first side using a heat dispersion adhesive.[00136] Aspect 5: The wireless communication apparatus of any of aspects 1 through 4, wherein the heatsink is mechanically coupled to the mmW module to facilitate heat transfer from the mmW module to the non-mmW antenna.[00137] Aspect 6: The wireless communication apparatus of any of aspects 1 through 5, wherein the heatsink is mechanically coupled to the mmW module using a heat dispersion adhesive.[00138] Aspect 7: The wireless communication apparatus of any of aspects 1 through 6, wherein the heatsink is configured to dissipate heat received from the mmW antenna via one or more conductors used to transmit the non-mmW signal.[00139] Aspect 8: The wireless communication apparatus of any of aspects 1 through 7, wherein the heatsink comprises an integral metal structure.[00140] Aspect 9: The wireless communication apparatus of any of aspects 1 through 8, wherein the heatsink is physically connected to a thermal dissipation medium and configured to transfer thermal energy received from the mmW module to the thermal dissipation medium via conduction.[00141] Aspect 10: The wireless communication apparatus of aspect 9, wherein the thermal dissipation medium is air around the non-mmW antenna.[00142] Aspect 11: The wireless communication apparatus of any of aspects 1 through 10, wherein the non-mmW antenna is a quarter wavelength slot antenna with a radiating structure formed by a gap between the heatsink and a frame metal with the feed point structured across the gap between the heatsink and the frame metal.
[00143] Aspect 12: The wireless communication apparatus of any of aspects 1 through 10, wherein the non-mmW antenna is an inverted-F antenna comprising a ground plane coupled to a first side of the mmW module and conductors coupled to the ground plane and at least a second side of the mmW module different from the first side of the mmW module.[00144] Aspect 13: The wireless communication apparatus of any of aspects 1 through 10, wherein the non-mmW antenna is a positioning system antenna configured to receive Global Navigation Satellite System signals at approximately 1.575 gigahertz.[00145] Aspect 14: The wireless communication apparatus of any of aspects 1 through 13, wherein the at least one mmW antenna includes a plurality of antennas of an antenna array; wherein the mmW module further comprises phase shifting circuitry for each antenna of the plurality of antennas configurable to transmit or receive a beamformed beam in an effective beam width range.[00146] Aspect 15: The wireless communication apparatus of any of aspects 1 through 14, wherein the mmW module further comprises power management circuitry and mmW circuitry, wherein the power management circuitry is configured to supply system voltages to the mmW circuitry.[00147] Aspect 16: The wireless communication apparatus of any of aspects 1 through 10 or 14 through 15, wherein the non-mmW antenna includes a conductor physically coupled to the mmW module, wherein the conductor has a length of approximately 24.1 millimeters.[00148] Aspect 17: The wireless communication apparatus of any of aspects 1 through 10 or 14 through 15, wherein the non-mmW antenna is a quarter wavelength monopole antenna.[00149] Aspect 18: The wireless communication apparatus of any of aspects 1 through 10 or 14 through 15, wherein the non-mmW antenna is a half wavelength loop antenna.[00150] Aspect 19: The wireless communication apparatus of any of aspects 1 through 18, further comprising: a display screen; and control circuitry coupled to the display screen, the non-mmW feed point, and the mmW signal node.[00151] Aspect 20: A method of operating a wireless communication apparatus, comprising: receiving, at a millimeter wave (mmW) signal node of a mmW module, a mmW signal, the mmW module comprising at least one mmW antenna; receiving, at a heatsink comprising a non-mmW antenna, a non-mmW signal, wherein the heatsink is mechanically coupled to the mmW module at a physical interface; receiving, at the
heatsink via the physical interface, thermal energy from the mmW module; and dissipating, utilizing the heatsink comprising the non-mmW antenna, the thermal energy received from the mmW module via conduction to a thermal dissipation medium.[00152] Aspect 21: The method of aspect 20, wherein the mmW signal is relayed from the at least one mmW antenna to communication circuitry of the mmW module via the mmW signal node.[00153] Aspect 22: The method of aspect 20, wherein the mmW signal is transmitted via the at least one mmW antenna.[00154] Aspect 23: The method of any of aspects 20 through 22, wherein the non-mmW signal is received at the non-mmW antenna from a non-mmW signal feed for wireless transmission via the non-mmW antenna.[00155] Aspect 24: The method of any of aspects 20 through 22, wherein the non-mmW signal is a wireless global positioning system (GPS) signal received at the non-mmW antenna, and routed to GPS circuitry of the wireless communication apparatus via a non-mmW feed.[00156] Aspect 25: The method of any of aspects 20 through 22, wherein the mmW signal is a reflection of a radar signal received at the mmW antenna, and routed to radar circuitry of the wireless communication apparatus.[00157] Aspect 26: The method of any of aspects 20 through 25, wherein the thermal dissipation medium is air around the non-mmW antenna.[00158] Aspect 27: The method of any of aspects 20 through 25, wherein the thermal dissipation medium is a heat transfer fluid configured to transfer thermal energy from the non-mmW antenna.[00159] Aspect 28: The method of any of aspects 20 through 27, wherein the physical interface comprises a thermally conductive adhesive physically binding portions of one or more surfaces of the heatsink to portions of one or more surfaces of the mmW module. [00160] Aspect 29: An apparatus comprising: means for receiving a mmW signal; and means for jointly receiving a non-mmW signal while dissipating thermal energy received from the means for receiving the mmW signal via thermal conduction.[00161] Aspect 30: The apparatus of aspect 29, further comprising a thermally conductive adhesive used to physically attach portions of one or more surfaces of the means for receiving the mmW signal to portions of one or more surfaces of the means for jointly receiving the non-mmW signal while dissipating the thermal energy received from the means for receiving the mmW signal.
[00162] Aspect 31: An apparatus comprising means for performing operations according to any of aspects 1 through 19 above.[00163] Aspect 32: A non-transitory computer-readable storage medium comprising instructions stored thereon which, when executed by one or more processors, cause the one or more processors to implement operations according to any of aspects 1 through 29 above. |
Various embodiments include methods and devices for managing optional commands. Some embodiments may include receiving an optional command from an optional command request device, determining whether the optional command can be implemented, and transmitting, to the optional command request device, an optional command no data response in response to determining that the optional command cannot be implemented. |
CLAIMS

What is claimed is:

1. A method performed in a processor of a computing device, comprising: receiving an optional command from an optional command request device; determining whether the optional command can be implemented; and transmitting, to the optional command request device, an optional command no data response in response to determining that the optional command cannot be implemented.2. The method of claim 1, further comprising generating the optional command no data response in response to determining that the optional command cannot be implemented.3. The method of claim 1, further comprising: determining whether a component of the computing device receiving the optional command is an optional command terminal device; and generating the optional command no data response in response to determining that the component of the computing device receiving the optional command is an optional command terminal device.4. The method of claim 3, further comprising interpreting an optional command terminal device ID field of the optional command, wherein determining whether a component of the computing device receiving the optional command is an optional command terminal device is based on interpreting the optional command terminal device ID field.5. The method of claim 1, further comprising: determining whether an optional command no data response condition is met; and
generating the optional command no data response in response to determining that the optional command no data response condition is met.6. The method of claim 5, further comprising interpreting an optional command no data response condition field of the optional command, wherein determining whether an optional command no data response condition is met is based on interpreting the optional command no data response condition field.7. The method of claim 1, wherein transmitting the optional command no data response comprises transmitting the optional command no data response by an optional command terminal device, the method further comprising: receiving the optional command no data response from the optional command terminal device; and reissuing the optional command in response to receiving the optional command no data response.8. The method of claim 1, wherein transmitting the optional command no data response comprises transmitting the optional command no data response by an optional command terminal device, the method further comprising: receiving the optional command no data response from the optional command terminal device; and abandoning the optional command in response to receiving the optional command no data response.9. The method of claim 1, wherein the optional command comprises an optional command terminal device ID field configured to indicate at least one optional command terminal device allowed to respond to the optional command with an
optional command no data response.10. The method of claim 1, wherein the optional command comprises an optional command no data response condition field configured to indicate at least one condition for responding to the optional command with an optional command no data response.11. The method of claim 1, wherein the optional command no data response comprises an optional command no data response terminal device ID field configured to indicate an optional command terminal device transmitting the optional command no data response.12. The method of claim 1, wherein the optional command no data response comprises an optional command no data response condition field configured to indicate an optional command no data response condition met for transmitting the optional command no data response.13. The method of claim 1, further comprising terminating the optional command in response to determining that the optional command cannot be implemented, wherein terminating the optional command comprises preventing the optional command from being forwarded to a device along an optional command transaction path.14. The method of claim 13, wherein terminating the optional command comprises: converting the optional command to a conventional command; and forwarding the conventional command to the device along the optional command transaction path.15. The method of claim 1, further comprising: generating the optional command by the optional command request device; and
transmitting the optional command along an optional command transaction path.16. The method of claim 1, wherein determining whether the optional command can be implemented comprises determining whether the optional command can be implemented based on at least one of missing requested data, requested data being out of bounds for a buffer, a cost of implementing the optional command exceeding a cost threshold, implementation of the optional command resulting in an exception, error, or fault, or being denied access to a requested location or target data.17. A computing device, comprising: an optional command terminal device configured with optional command terminal device-executable instructions to perform operations comprising: receiving an optional command from an optional command request device; determining whether the optional command can be implemented; and transmitting, to the optional command request device, an optional command no data response in response to determining that the optional command cannot be implemented.18. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations further comprising generating the optional command no data response in response to determining that the optional command cannot be implemented.19. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations further comprising:
determining whether the optional command terminal device receiving the optional command is an optional command terminal device; and generating the optional command no data response in response to determining that the optional command terminal device receiving the optional command is an optional command terminal device.20. The computing device of claim 19, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations further comprising interpreting an optional command terminal device ID field of the optional command, wherein determining whether the optional command device receiving the optional command is an optional command terminal device is based on interpreting the optional command terminal device ID field.21. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations further comprising: determining whether an optional command no data response condition is met; and generating the optional command no data response in response to determining that the optional command no data response condition is met.22. The computing device of claim 21, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations further comprising interpreting an optional command no data response condition field of the optional command, wherein determining whether an optional command no data response condition is met is based on interpreting the optional command no data response condition field.23. The computing device of claim 17, further comprising the optional command request device, wherein the optional command request device is configured with optional command request device-executable instructions to perform operations comprising: receiving the optional command no data response from the optional command terminal device; and reissuing the optional command in response to receiving the optional command no data response.24. The computing device of claim 17, further comprising the optional command request device, wherein the optional command request device is configured with optional command request device-executable instructions to perform operations comprising: receiving the optional command no data response from the optional command terminal device; and abandoning the optional command in response to receiving the optional command no data response.25. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations such that the optional command comprises an optional command terminal device ID field configured to indicate at least one optional command terminal device allowed to respond to the optional command with an optional command no data response.26. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations such that the optional command comprises an optional command no data response condition field configured to indicate at least one
condition for responding to the optional command with an optional command no data response.27. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations such that the optional command no data response comprises an optional command no data response terminal device ID field configured to indicate the optional command terminal device transmitting the optional command no data response.28. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations such that the optional command no data response comprises an optional command no data response condition field configured to indicate an optional command no data response condition met for transmitting the optional command no data response.29. The computing device of claim 17, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations further comprising terminating the optional command in response to determining that the optional command cannot be implemented, wherein terminating the optional command comprises preventing the optional command from being forwarded to a device along an optional command transaction path.30. The computing device of claim 29, wherein the optional command terminal device is configured with optional command terminal device-executable instructions to perform operations such that terminating the optional command comprises: converting the optional command to a conventional command; and forwarding the conventional command to the device along the optional
command transaction path.31. The computing device of claim 17, further comprising the optional command request device, wherein the optional command request device is configured with optional command request device-executable instructions to perform operations comprising: generating the optional command; and transmitting the optional command along an optional command transaction path.32. The computing device of claim 17, wherein the optional command terminal device is configured with optional command device-executable instructions to perform operations such that determining whether the optional command can be implemented comprises determining whether the optional command can be implemented based on at least one of missing requested data, requested data being out of bounds for a buffer, a cost of implementing the optional command exceeding a cost threshold, implementation of the optional command resulting in an exception, error, or fault, or being denied access to a requested location or target data.33. A computing device, comprising: means for receiving an optional command from an optional command request device; means for determining whether the optional command can be implemented; and means for transmitting, to the optional command request device, an optional command no data response in response to determining that the optional command cannot be implemented.34. The computing device of claim 33, further comprising means for generating the optional command no data response in response to determining that the optional command cannot be implemented.35. The computing device of claim 33, further comprising: means for determining whether a component of the computing device receiving the optional command is an optional command terminal device; and means for generating the optional command no data response in response to determining that the component of the computing device receiving the optional command is an optional command terminal device.36. The computing device of claim 35, further comprising means for interpreting an optional command terminal device ID field of the optional command, wherein means for determining whether a component of the computing device receiving the optional command is an optional command terminal device is based on interpreting the optional command terminal device ID field.37. The computing device of claim 33, further comprising: means for determining whether an optional command no data response condition is met; and means for generating the optional command no data response in response to determining that the optional command no data response condition is met.38. The computing device of claim 37, further comprising means for interpreting an optional command no data response condition field of the optional command, wherein means for determining whether an optional command no data response condition is met comprises means for determining whether an optional command no data response condition is met based on interpreting the optional command no data response condition field.39. The computing device of claim 33, further comprising: means for receiving the optional command no data response from an optional command terminal device; and means for reissuing the optional command in response to receiving the optional command no data response.40. 
The computing device of claim 33, further comprising: means for receiving the optional command no data response from an optional command terminal device; and means for abandoning the optional command in response to receiving the optional command no data response.41. The computing device of claim 33, wherein the optional command comprises an optional command terminal device ID field configured to indicate at least one optional command terminal device allowed to respond to the optional command with an optional command no data response.42. The computing device of claim 33, wherein the optional command comprises an optional command no data response condition field configured to indicate at least one condition for responding to the optional command with an optional command no data response.43. The computing device of claim 33, wherein the optional command no data response comprises an optional command no data response terminal device ID field configured to indicate an optional command terminal device transmitting the optional command no data response.44. The computing device of claim 33, wherein the optional command no data response comprises an optional command no data response condition field configured
to indicate an optional command no data response condition met for transmitting the optional command no data response.45. The computing device of claim 33, further comprising means for terminating the optional command in response to determining that the optional command cannot be implemented, wherein means for terminating the optional command comprises means for preventing the optional command from being forwarded to a device along an optional command transaction path.46. The computing device of claim 45, wherein means for terminating the optional command comprises: means for converting the optional command to a conventional command; and means for forwarding the conventional command to the device along the optional command transaction path.47. The computing device of claim 33, further comprising: means for generating the optional command; and means for transmitting the optional command along an optional command transaction path.48. The computing device of claim 33, wherein the optional command terminal device is configured with optional command device-executable instructions to perform operations such that determining whether the optional command can be implemented comprises determining whether the optional command can be implemented based on at least one of missing requested data, requested data being out of bounds for a buffer, a cost of implementing the optional command exceeding a cost threshold, implementation of the optional command resulting in an exception, error, or fault, or being denied access to a requested location or target data.49. A non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processing device of a computing device to perform operations comprising: receiving an optional command from an optional command request device; determining whether the optional command can be implemented; and transmitting, to the optional command request device, an optional command no data response in response to determining that the optional command cannot be implemented.50. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising generating the optional command no data response in response to determining that the optional command cannot be implemented.51. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising: determining whether a component of the computing device receiving the optional command is an optional command terminal device; and generating the optional command no data response in response to determining that the component of the computing device receiving the optional command is an optional command terminal device.52. The non-transitory processor-readable storage medium of claim 51, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising interpreting an optional command terminal device ID field of the optional command, wherein determining whether a component of the computing device receiving the optional
command is an optional command terminal device is based on interpreting the optional command terminal device ID field.53. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising: determining whether an optional command no data response condition is met; and generating the optional command no data response in response to determining that the optional command no data response condition is met.54. The non-transitory processor-readable storage medium of claim 53, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising interpreting an optional command no data response condition field of the optional command, wherein determining whether an optional command no data response condition is met is based on interpreting the optional command no data response condition field.55. The non-transitory processor-readable storage medium of claim 49, wherein: the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that transmitting the optional command no data response comprises transmitting the optional command no data response by an optional command terminal device; and the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising: receiving the optional command no data response from the optional command terminal device; and reissuing the optional command in response to receiving the optional command no data response.56. The non-transitory processor-readable storage medium of claim 49, wherein: the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that transmitting the optional command no data response comprises transmitting the optional command no data response by an optional command terminal device; and the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising: receiving the optional command no data response from the optional command terminal device; and abandoning the optional command in response to receiving the optional command no data response.57. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that the optional command comprises an optional command terminal device ID field configured to indicate at least one optional command terminal device allowed to respond to the optional command with an optional command no data response.58. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that the optional command comprises an optional command no data response condition field configured to indicate at least one condition for responding to the optional command with an optional command no data response.59. 
The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that the optional command no
data response comprises an optional command no data response terminal device ID field configured to indicate an optional command terminal device transmitting the optional command no data response.60. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that the optional command no data response comprises an optional command no data response condition field configured to indicate an optional command no data response condition met for transmitting the optional command no data response.61. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations further comprising terminating the optional command in response to determining that the optional command cannot be implemented, wherein terminating the optional command comprises preventing the optional command from being forwarded to a device along an optional command transaction path.62. The non-transitory processor-readable storage medium of claim 61, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that terminating the optional command comprises: converting the optional command to a conventional command; and forwarding the conventional command to the device along the optional command transaction path.63. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device
of the computing device to perform operations further comprising: generating the optional command; and transmitting the optional command along an optional command transaction path.64. The non-transitory processor-readable storage medium of claim 49, wherein the stored processor-executable instructions are configured to cause the processing device of the computing device to perform operations such that determining whether the optional command can be implemented comprises determining whether the optional command can be implemented based on at least one of missing requested data, requested data being out of bounds for a buffer, a cost of implementing the optional command exceeding a cost threshold, implementation of the optional command resulting in an exception, error, or fault, or being denied access to a requested location or target data. |
Read Optional And Write Optional Commands

RELATED APPLICATIONS

[0001] This application claims the benefit of priority from U.S. Patent Application No. 17/068,293, filed October 12, 2020, entitled “Read Optional And Write Optional Commands,” the entire contents of which are herein incorporated by reference.

BACKGROUND

[0002] All memory access transactions (e.g., read or write transactions) in a system on chip (SoC) have a main memory (e.g., random access memory (RAM)) as a terminating agent for the memory access transaction. Some memory access transactions can return results before reaching the main memory if data resides in a memory (e.g., cache) found on a path between an issuing device (e.g., processor) of the memory access transaction and the terminating agent. The memory access transactions can reach the terminating agent when the memory access transaction fails at the devices on the path between the issuing device and the terminating agent. Memory access transactions that fail to complete prior to reaching the terminating agent may be transmitted off of the SoC to reach the terminating agent. Memory access transactions that are transmitted off of the SoC incur greater resource costs (e.g., time and bandwidth) compared to memory access transactions that remain on the SoC.

SUMMARY

[0003] Various disclosed aspects may include apparatuses and methods for managing optional commands. Various aspects may include receiving an optional command from an optional command request device, determining whether the optional command can be implemented, and transmitting, to the optional command
request device, an optional command no data response in response to determining that the optional command cannot be implemented.[0004] Some aspects may further include generating the optional command no data response in response to determining that the optional command cannot be implemented.[0005] Some aspects may further include determining whether a component of a computing device receiving the optional command is an optional command terminal device, and generating the optional command no data response in response to determining that the device receiving the optional command is an optional command terminal device.[0006] Some aspects may further include interpreting an optional command terminal device ID field of the optional command, in which determining whether a device receiving the optional command is an optional command terminal device is based on interpreting the optional command terminal device ID field.[0007] Some aspects may further include determining whether an optional command no data response condition is met, and generating the optional command no data response in response to determining that the optional command no data response condition is met.[0008] Some aspects may further include interpreting an optional command no data response condition field of the optional command, in which determining whether an optional command no data response condition is met is based on interpreting the optional command no data response condition field.[0009] In some aspects, transmitting the optional command no data response may include transmitting the optional command no data response by an optional command terminal device. Some aspects may further include receiving the optional command no data response from the optional command terminal device, and reissuing the optional command in response to receiving the optional command no data response.
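For illustration only (the structs, field widths, and function names below are editorial assumptions; the aspects above do not fix a concrete bus encoding), the following C sketch shows the terminal-device side of this flow: a device that cannot implement an optional command, and that the command designates as a terminal device for the condition met, generates a no data response rather than forwarding the command.

/* Minimal sketch of terminal-device handling of an optional command.
 * All types and encodings here are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef enum {
    COND_NONE = 0,
    COND_DATA_MISSING,   /* requested data not present here */
    COND_OUT_OF_BOUNDS,  /* requested data out of bounds for a buffer */
    COND_COST_EXCEEDED,  /* implementation cost exceeds a threshold */
    COND_WOULD_FAULT,    /* would raise an exception, error, or fault */
    COND_ACCESS_DENIED,  /* access to the location or data denied */
} cond_t;

typedef struct {
    uint64_t addr;          /* target address of the read/write */
    uint16_t terminal_ids;  /* bitmask: devices allowed to respond with
                               a no data response */
    uint16_t resp_conds;    /* bitmask: conditions permitting a no data
                               response */
} optional_cmd_t;

typedef struct {
    uint16_t terminal_id;   /* device issuing the no data response */
    cond_t cond;            /* condition met for issuing it */
} no_data_resp_t;

typedef enum { IMPLEMENTED, NO_DATA_RESPONSE, FORWARD } outcome_t;

/* Stand-in for a device-specific implementability check. */
static cond_t try_implement(const optional_cmd_t *cmd)
{
    (void)cmd;
    return COND_DATA_MISSING; /* pretend the data is not cached here */
}

static outcome_t handle_optional_cmd(uint16_t my_id,
                                     const optional_cmd_t *cmd,
                                     no_data_resp_t *resp)
{
    cond_t cond = try_implement(cmd);
    if (cond == COND_NONE)
        return IMPLEMENTED;
    if ((cmd->terminal_ids & (1u << my_id)) &&
        (cmd->resp_conds & (1u << cond))) {
        resp->terminal_id = my_id;  /* terminate the transaction here */
        resp->cond = cond;
        return NO_DATA_RESPONSE;
    }
    return FORWARD; /* pass the command along the transaction path */
}

int main(void)
{
    optional_cmd_t cmd = {
        .addr = 0x8000u,
        .terminal_ids = 1u << 2,               /* device 2 may respond */
        .resp_conds = 1u << COND_DATA_MISSING,
    };
    no_data_resp_t resp = { 0 };
    if (handle_optional_cmd(2, &cmd, &resp) == NO_DATA_RESPONSE)
        printf("no data response from device %u, condition %d\n",
               resp.terminal_id, (int)resp.cond);
    return 0;
}

The bitmask fields stand in for the optional command terminal device ID field and the optional command no data response condition field described above; a real bus protocol would define their widths and positions.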
[0010] In some aspects, transmitting the optional command no data response may include transmitting the optional command no data response by an optional command terminal device. Some aspects may further include receiving the optional command no data response from the optional command terminal device, and abandoning the optional command in response to receiving the optional command no data response.[0011] In some aspects, the optional command may include an optional command terminal device ID field configured to indicate at least one optional command terminal device allowed to respond to the optional command with an optional command no data response.[0012] In some aspects, the optional command may include an optional command no data response condition field configured to indicate at least one condition for responding to the optional command with an optional command no data response.[0013] In some aspects, the optional command no data response may include an optional command no data response terminal device ID field configured to indicate an optional command terminal device transmitting the optional command no data response.[0014] In some aspects, the optional command no data response may include an optional command no data response condition field configured to indicate an optional command no data response condition met for transmitting the optional command no data response.[0015] Some aspects may further include terminating the optional command in response to determining that the optional command cannot be implemented, in which terminating the optional command may include preventing the optional command from being forwarded to a device along an optional command transaction path.[0016] In some aspects, terminating the optional command may include converting the optional command to a conventional command, and forwarding the conventional command to a device along the optional command transaction path.
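For illustration only (the retry policy shown is an editorial assumption rather than a behavior mandated above), the following C sketch shows the request-device side: on receiving a no data response, the requester either reissues the command, for example as a conventional command that will not be terminated early, or abandons it, based on the reported condition.

/* Minimal sketch of request-device handling of a no data response.
 * Condition codes mirror the previous sketch; all names hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum {
    COND_DATA_MISSING = 1,
    COND_OUT_OF_BOUNDS,
    COND_COST_EXCEEDED,
    COND_WOULD_FAULT,
    COND_ACCESS_DENIED,
} cond_t;

typedef struct {
    uint16_t terminal_id; /* device that terminated the command */
    cond_t cond;          /* condition reported in the response */
} no_data_resp_t;

/* One possible policy: conditions that a later or conventional reissue
 * might clear are retried; faults and denials are abandoned. */
static bool should_reissue(const no_data_resp_t *r)
{
    switch (r->cond) {
    case COND_DATA_MISSING:
    case COND_COST_EXCEEDED:
        return true;
    default:
        return false;
    }
}

int main(void)
{
    no_data_resp_t resp = { .terminal_id = 2, .cond = COND_COST_EXCEEDED };
    if (should_reissue(&resp))
        printf("reissue as a conventional command past device %u\n",
               resp.terminal_id);
    else
        printf("abandon the optional command\n");
    return 0;
}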
[0017] In some aspects, determining whether the optional command can be implemented may include determining whether the optional command can be implemented based on at least one of missing requested data, requested data being out of bounds for a buffer, a cost of implementing the optional command exceeding a cost threshold, implementation of the optional command resulting in an exception, error, or fault, or being denied access to a requested location or target data.[0018] Some aspects may include generating the optional command by the optional command request device, and transmitting the optional command along an optional command transaction path.[0019] Further aspects include a computing device having an optional command terminal device and an optional command request device configured to perform operations of any of the methods summarized above. Further aspects include a computing device having means for performing functions of any of the methods summarized above. Further aspects include a non-transitory processor-readable medium having stored thereon processor-executable instructions configured to cause a processor and other components of a computing device to perform operations of any of the methods summarized above.

BRIEF DESCRIPTION OF THE DRAWINGS

[0020] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate example embodiments of various embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the claims.[0021] FIG. 1 is a component block diagram illustrating an example computing device suitable for implementing various embodiments.[0022] FIG. 2 is a component block diagram illustrating an example system on chip (SoC) suitable for implementing various embodiments.
[0023] FIG. 3 is a component block diagram illustrating an example processing device suitable for implementing various embodiments.[0024] FIG. 4 is a component block and signaling diagram illustrating an example of a read optional command and transaction suitable for implementing various embodiments.[0025] FIG. 5 is a component block and signaling diagram illustrating an example of a write optional command and transaction suitable for implementing various embodiments.[0026] FIG. 6 is a component block and signaling diagram illustrating an example of a read optional command used in avoiding livelock suitable for implementing various embodiments.[0027] FIG. 7 is a component block and signaling diagram illustrating an example of a read optional command suitable for implementing various embodiments.[0028] FIG. 8 is a component block and signaling diagram illustrating an example of a read optional command for data streamlining suitable for implementing various embodiments.[0029] FIG. 9 is a process flow diagram illustrating a method for read optional and/or write optional commands according to an embodiment.[0030] FIG. 10 is a process flow diagram illustrating a method for managing unsuccessful read optional and/or write optional commands according to an embodiment.[0031] FIGS. 11A and 11B are process flow diagrams illustrating methods for managing read optional and/or write optional no data responses according to an embodiment.
[0032] FIG. 12 is a component block diagram illustrating an example mobile computing device suitable for implementing a read optional and/or write optional command enabled system in accordance with the various embodiments.[0033] FIG. 13 is a component block diagram illustrating an example mobile computing device suitable for implementing a read optional and/or write optional command enabled system in accordance with the various embodiments.[0034] FIG. 14 is a component block diagram illustrating an example server suitable for implementing a read optional and/or write optional command enabled system in accordance with the various embodiments.
DETAILED DESCRIPTION
[0035] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the claims.[0036] Various embodiments include methods and computing devices implementing such methods that provide for read optional and/or write optional commands. Some embodiments may include a read optional and/or write optional command implemented by modifying an existing read and/or write command in a bus protocol, as a new command in the bus protocol, or as a new command in an instruction set architecture (ISA).[0037] In some embodiments, the read optional and/or write optional command may be configured to indicate to a device along a read optional and/or write optional transaction path, between a read optional and/or write optional command request device and a main memory (e.g., random access memory (RAM)), to respond to an unsuccessful read optional and/or write optional command with a no data response signal. In some embodiments, the read optional and/or write optional command may
be configured to indicate to the device along the read optional and/or write optional transaction path which devices may respond to the unsuccessful read optional and/or write optional command with a no data response signal. In some embodiments, the read optional and/or write optional command may be configured to indicate to the device along the read optional and/or write optional transaction path a condition for which devices may respond to the unsuccessful read optional and/or write optional command with a no data response signal.[0038] In some embodiments, a no data response signal may indicate to the read optional and/or write optional command request device that the read optional and/or write optional command was unsuccessful. In some embodiments, a no data response signal may indicate to the read optional and/or write optional command request device at which device along the read optional and/or write optional transaction path the read optional and/or write optional command was unsuccessful. In some embodiments, a no data response signal may indicate to the read optional and/or write optional command request device a condition for why the read optional and/or write optional command was unsuccessful.[0039] In some embodiments, the read optional and/or write optional command request device may be configured to retry the read optional and/or write optional command in response to receiving a no data response signal. In some embodiments, the read optional and/or write optional command request device may be configured to abandon the read optional and/or write optional command in response to receiving a no data response signal.[0040] The term “optional command” is used herein to refer to a read optional command and/or write optional command as described herein. The term “optional command request device” is used herein to refer to a component of a computing device configured to issue an optional command. The term “optional command terminal device” is used herein to refer to a component of the computing device configured to issue an optional command no data response in response to not
implementing a received optional command, such as a read optional terminal device and/or a write optional terminal device. The term “optional command no data response” is used herein to refer to a read optional no data response and/or a write optional no data response issued by an optional command terminal device in response to not implementing an optional command. The term “optional command transaction path” is used herein to refer to a read optional command communication path and/or write optional command communication path of the optional command between an optional command request device and a main memory.[0041] In some embodiments, the optional command may be configured to indicate to a component of the computing device that it is an optional command. The component of the computing device may be configured as an optional command terminal device and may issue an optional command no data response in response to not implementing the received optional command.[0042] In some embodiments, the optional command may be configured to indicate to a component of the computing device which component of the computing device may be an optional command terminal device. In some embodiments, the optional command may include an optional command terminal device identifier (ID) field configured to indicate at least one optional command terminal device allowed to respond to the optional command with an optional command no data response. The term “optional command terminal device ID field” is used herein to refer to a read optional command terminal device ID field and/or a write optional command terminal device ID field. In some embodiments the optional command terminal device ID field may include an optional command terminal device ID configured to indicate a component of the computing device that may be an optional command terminal device. The term “optional command terminal device ID” is used herein to refer to a read optional command terminal device ID and/or a write optional terminal device ID.[0043] In some embodiments, the optional command may be configured to indicate to a component of the computing device a reason for why an optional command
terminal device may not implement the optional command and return an optional command no data response. In some embodiments, the optional command may include an optional command no data response condition field configured to indicate at least one condition for responding to the optional command with an optional command no data response. The term “optional command no data response condition field” is used herein to refer to a read optional command no data response condition field and/or a write optional command no data response condition field. In some embodiments the optional command no data response condition field may include an optional command no data response condition configured to indicate a reason or condition for responding to the optional command with an optional command no data response. The term “optional command no data response condition” is used herein to refer to a read optional command no data response condition and/or a write optional command no data response condition.[0044] The optional command no data response may be an architected response specifically for use in response to optional commands. The term “optional command no data response” is used herein to refer to a read optional no data response and/or a write optional no data response. The optional command no data response may be issued by any or specific optional command terminal devices along an optional command transaction path. The optional command no data response may be different from and in addition to error or failure signaling, in which the optional command terminal device may forward the optional command and/or hold the optional command until the optional command can be implemented. The optional command no data response may be configured to indicate to the optional command request device that the optional command is not implemented. The optional command no data response may be an acceptable state, such as an “OK” state, for the optional command request device, rather than an error or fault state triggered by an error or fault signal for failure of a memory access request. Not implementing the optional command may be an acceptable state in that the implementation of the optional command may not be required, for example, even when implementing the optional command may be
critical. As such, not implementing the optional command may not be an error or fault state triggered by an error or fault signal that may be configured to indicate no, incomplete, improper, etc. implementation of a required memory access. The optional command request device may be configured to retry and/or abandon the optional command in response. In some embodiments, the optional command request device may be preconfigured with how to respond to the optional command no data response. In some embodiments, the optional command request device may determine how to respond to the optional command no data response. The optional command no data response may include an optional command no data response field configured to indicate the type of the response to the optional command. The term “optional command no data response field” is used herein to refer to a read optional no data response field and/or a write optional no data response field.[0045] In some embodiments, the optional command no data response may include an optional command no data response terminal device ID field configured to indicate which optional command terminal device responds to the optional command with the optional command no data response. The term “optional command no data response terminal device ID field” is used herein to refer to a read optional no data response terminal device ID field and/or a write optional no data response terminal device ID field. In some embodiments, the optional command no data response may include an optional command no data response condition field configured to indicate a condition for which the optional command terminal device responds to the optional command with the optional command no data response. The term “optional command no data response condition field” is used herein to refer to a read optional no data response condition field and/or a write optional no data response condition field.[0046] The terms “computing device” and “mobile computing device” are used interchangeably herein to refer to any one or all of cellular telephones, smartphones, personal or mobile multi-media players, personal data assistants (PDAs), laptop computers, tablet computers, convertible laptops/tablets (2-in-1 computers),
smartbooks, ultrabooks, netbooks, palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, mobile gaming consoles, wireless gaming controllers, and similar personal electronic devices that include a memory and a programmable processor. The term “computing device” may further refer to stationary computing devices including personal computers, desktop computers, all-in-one computers, workstations, super computers, mainframe computers, embedded computers (such as in vehicles and other larger systems), servers, multimedia computers, and game consoles.[0047] FIG. 1 illustrates a system including a computing device 100 suitable for use with various embodiments. The computing device 100 may include an SoC 102 with a central processing unit 104, a memory 106, a communication interface 108, a memory interface 110, a peripheral device interface 120, and a processing device 124. The computing device 100 may further include a communication component 112, such as a wired or wireless modem, a memory 114, an antenna 116 for establishing a wireless communication link, and/or a peripheral device 122. The processor 124 may include any of a variety of processing devices, for example a number of processor cores.[0048] The term “system-on-chip” or “SoC” is used herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including a processing device, a memory, and a communication interface. A processing device may include a variety of different types of processors 124 and/or processor cores, such as a general purpose processor, a central processing unit (CPU) 104, a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), a secure processing unit (SPU), an intellectual property unit (IPU), a subsystem processor of specific components of the computing device, such as an image processor for a camera subsystem or a display processor for a display, an auxiliary processor, a peripheral device processor, a single-core processor, a multicore processor, a controller, and/or a microcontroller. A processing device may further embody other
hardware and hardware combinations, such as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), other programmable logic device, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and/or time references. Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon.[0049] An SoC 102 may include one or more CPUs 104 and processors 124. The computing device 100 may include more than one SoC 102, thereby increasing the number of CPUs 104, processors 124, and processor cores. The computing device 100 may also include CPUs 104 and processors 124 that are not associated with an SoC 102. Individual CPUs 104 and processors 124 may be multicore processors. The CPUs 104 and processors 124 may each be configured for specific purposes that may be the same as or different from other CPUs 104 and processors 124 of the computing device 100. One or more of the CPUs 104, processors 124, and processor cores of the same or different configurations may be grouped together. A group of CPUs 104, processors 124, or processor cores may be referred to as a multi-processor cluster.[0050] The memory 106 of the SoC 102 may be a volatile or non-volatile memory configured for storing data and processor-executable code for access by the CPU 104, the processor 124, or other components of SoC 102. The computing device 100 and/or SoC 102 may include one or more memories 106 configured for various purposes. One or more memories 106 may include volatile memories such as random access memory (RAM) or main memory, or cache memory. These memories 106 may be configured to temporarily hold a limited amount of data received from a data sensor or subsystem, data and/or processor-executable code instructions that are requested from non-volatile memory, loaded to the memories 106 from non-volatile memory in anticipation of future access based on a variety of factors, and/or intermediary processing data and/or processor-executable code instructions produced by the CPU 104 and/or processor 124 and temporarily stored for future quick access
without being stored in non-volatile memory. In some embodiments, any number and combination of memories 106 may include one-time programmable or read-only memory.[0051] The memory 106 may be configured to store data and processor-executable code, at least temporarily, that is loaded to the memory 106 from another memory device, such as another memory 106 or memory 114, for access by one or more of the CPU 104, the processor 124, or other components of SoC 102. The data or processor-executable code loaded to the memory 106 may be loaded in response to execution of a function by the CPU 104, the processor 124, or other components of SoC 102. Loading the data or processor-executable code to the memory 106 in response to execution of a function may result from a memory access request to the memory 106 that is unsuccessful, or a “miss,” because the requested data or processor-executable code is not located in the memory 106. In response to a miss, a memory access request to another memory 106 or memory 114 may be made to load the requested data or processor-executable code from the other memory 106 or memory 114 to the memory 106. Loading the data or processor-executable code to the memory 106 in response to execution of a function may result from a memory access request to another memory 106 or memory 114, and the data or processor-executable code may be loaded to the memory 106 for later access.[0052] The memory interface 110 and the memory 114 may work in unison to allow the computing device 100 to store data and processor-executable code on a volatile and/or non-volatile storage medium, and retrieve data and processor-executable code from the volatile and/or non-volatile storage medium. The memory 114 may be configured much like an embodiment of the memory 106 in which the memory 114 may store the data or processor-executable code for access by one or more of the CPU 104, the processor 124, or other components of SoC 102.[0053] In some embodiments, the memory 114 may be non-volatile and thus may retain the information after the power of the computing device 100 is shut off. When
the power is turned back on and the computing device 100 reboots, the information stored on the memory 114 may be available to the computing device 100.[0054] In some embodiments, the memory 114 may be volatile, and thus will not retain the information after the power of the computing device 100 is shut off. The memory interface 110 may control access to the memory 114 and allow the CPU 104, the processor 124, or other components of the SoC 102 to read data from and write data to the memory 114.[0055] Some or all of the components of the computing device 100 and/or the SoC 102 may be arranged differently and/or combined while still serving the functions of the various embodiments. The computing device 100 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of the computing device 100.[0056] FIG. 2 illustrates an SoC 230 (e.g., SoC 102 in FIG. 1), which may be a component of a computing device (e.g., computing device 100 in FIG. 1) with multiple peripheral device components suitable for implementing an embodiment. With reference to FIGS. 1 and 2, the SoC 230 may include a variety of components as described above. Some such components and additional components may be subsystems of the computing device 100.[0057] The SoC 230 may include various communication components (e.g., communication interface 108, memory interface 110, peripheral device interface 120 in FIG. 1) configured to communicatively connect the components of the SoC 230 that may transmit, receive, and share data. The communication components may include a system hub 200, a protocol converter 208, and a system network on chip (NoC) 224.[0058] The communication components may facilitate communication between subsystem components. In some embodiments, the subsystem components may include processors (e.g., CPU 104, processor(s) 124 in FIG. 1) in CPU clusters 206. In some embodiments, the subsystem components may include various peripheral
device subsystems (e.g., communication component 112, peripheral device 122 in FIG. 1) having one or more processors (e.g., CPU 104, processor(s) 124 in FIG. 1), such as camera, video, display, audio, and wireless communication subsystems 218, 220, 222, 232, 234. In some embodiments, the subsystem components may include other specialized processors (e.g., processor(s) 124 in FIG. 1), such as a graphics processor unit (GPU) 210, a modem digital signal processor (DSP) 212, an application processor unit (APU) 214, and other hardware accelerators.[0059] The communication components may facilitate communication of the peripheral device subsystems 218, 220, 222, 232, 234 and the processors 206, 210, 212, 214 with other components such as memory devices (e.g., memory 106, 114 in FIG. 1), including a system cache 202, a random access memory (RAM) 228, and various memories included in the processors 206, 210, 212, 214 and peripheral device subsystems 218, 220, 222, 232, 234, such as cache memories.[0060] Various memory control devices (e.g., memory interface 110 in FIG. 1), such as a system cache controller 204, a memory interface 216, and a memory controller 226, may be configured to control access to the various memories by the peripheral device subsystems 218, 220, 222, 232, 234 and the processors 206, 210, 212, 214 and implement operations for the various memories, which may be requested by the peripheral device subsystems 218, 220, 222, 232, 234 and the processors 206, 210, 212, 214.[0061] The peripheral device subsystems 218, 220, 222, 232, 234 may also include various processors (e.g., CPU 104, processor(s) 124 in FIG. 1), controllers (e.g., processor(s) 124 in FIG. 1), sensors, receivers, transmitters, and dedicated memories, such as caches and memory registers, configured for controlling and implementing functionalities of the peripheral devices of the subsystems 218, 220, 222, 232, 234.[0062] The descriptions herein of the SoC 230 and its various components illustrated in FIG. 2 are only meant to be examples and in no way limiting. Several of the components of the illustrated example SoC 230 may be variably configured,
combined, and separated. Several of the components may be included in greater or fewer numbers and may be located and connected differently within the SoC 230 or separate from the SoC 230. Similarly, numerous other components, such as other memories, processors, peripheral device subsystems, interfaces, and controllers, may be included in the SoC 230.[0063] FIG. 3 illustrates components of a computing device (e.g., computing device 100 in FIG. 1) suitable for implementing an embodiment. With reference to FIGS. 1-3, a processor 300 (e.g., central processing unit 104, processor 124 in FIG. 1, CPU clusters 206, GPU 210, DSP 212, APU 214 in FIG. 2) may include multiple processor types, including, for example, a CPU and various hardware accelerators, such as a GPU, a DSP, an SPU, an APU, an IPU, a subsystem processor of specific components of the computing device, an auxiliary processor, a peripheral device processor, controllers/microcontrollers, etc.[0064] The processor 300 may also include a custom hardware accelerator, which may include custom processing hardware and/or general purpose hardware configured to implement a specialized set of functions. The processor 300 may include any number of processor cores 302, 304, 306, 308. A processor 300 having multiple processor cores 302, 304, 306, 308 may be referred to as a multicore processor.[0065] The processor 300 may have a plurality of homogeneous or heterogeneous processor cores 302, 304, 306, 308. A homogeneous processor may include a plurality of homogeneous processor cores. The processor cores 302, 304, 306, 308 may be homogeneous in that the processor cores 302, 304, 306, 308 of the processor 300 may be configured for the same purpose and have the same or similar performance characteristics. For example, the processor 300 may be a general purpose processor, and the processor cores 302, 304, 306, 308 may be homogeneous general purpose processor cores. The processor 300 may be a GPU or a DSP, and the processor cores 302, 304, 306, 308 may be homogeneous graphics processor cores or
digital signal processor cores, respectively. The processor 300 may be a custom hardware accelerator with homogeneous processor cores 302, 304, 306, 308.[0066] A heterogeneous processor may include a plurality of heterogeneous processor cores. The processor cores 302, 304, 306, 308 may be heterogeneous in that the processor cores 302, 304, 306, 308 of the processor 300 may be configured for different purposes and/or have different performance characteristics. The heterogeneity of such heterogeneous processor cores may include different instruction set architectures, pipelines, operating frequencies, etc. An example of such heterogeneous processor cores may include what are known as “big.LITTLE” architectures in which slower, low-power processor cores may be coupled with more powerful and power-hungry processor cores. In similar embodiments, an SoC (for example, SoC 102 of FIG. 1, SoC 230 in FIG. 2) may include any number of homogeneous or heterogeneous processors 300. In various embodiments, not all of the processor cores 302, 304, 306, 308 need to be heterogeneous processor cores, as a heterogeneous processor may include any combination of processor cores 302, 304, 306, 308 including at least one heterogeneous processor core.[0067] Each of the processor cores 302, 304, 306, 308 of a processor 300 may be designated a private processor core cache (PPCC) memory 310, 312, 314, 316 that may be dedicated for read and/or write access by a designated processor core 302, 304, 306, 308. The private processor core cache 310, 312, 314, 316 may store data and/or instructions, and make the stored data and/or instructions available to the processor cores 302, 304, 306, 308, to which the private processor core cache 310, 312, 314, 316 is dedicated, for use in execution by the processor cores 302, 304, 306, 308. The private processor core cache 310, 312, 314, 316 may include volatile memory. The private processor core cache 310, 312, 314, 316 may be a physical cache and/or a virtual cache.[0068] Groups of the processor cores 302, 304, 306, 308 of a processor 300 may be designated a shared processor core cache (SPCC) memory 320, 322 that may be
dedicated for read and/or write access by a designated group of processor cores 302, 304, 306, 308. The shared processor core cache 320, 322 may store data and/or instructions, and make the stored data and/or instructions available to the group of processor cores 302, 304, 306, 308 to which the shared processor core cache 320, 322 is dedicated, for use in execution by the processor cores 302, 304, 306, 308 in the designated group. The shared processor core cache 320, 322 may include volatile memory.[0069] The processor 300 may include a shared processor cache memory 330 that may be dedicated for read and/or write access by the processor cores 302, 304, 306, 308 of the processor 300. The shared processor cache 330 may store data and/or instructions, and make the stored data and/or instructions available to the processor cores 302, 304, 306, 308, for use in execution by the processor cores 302, 304, 306, 308. The shared processor cache 330 may also function as a buffer for data and/or instructions input to and/or output from the processor 300. The shared cache 330 may include volatile memory.[0070] Multiple processors 300 may access a shared system cache memory 340 (e.g., memory 106 in FIG. 1, system cache 202 in FIG. 2) that may be dedicated for read and/or write access by the processor cores 302, 304, 306, 308 of the multiple processors 300. The shared system cache 340 may store data and/or instructions and make the stored data and/or instructions available to the processor cores 302, 304, 306, 308, for use in execution by the processor cores 302, 304, 306, 308. The shared system cache 340 may also function as a buffer for data and/or instructions input to and/or output from the multiple processors 300. The shared system cache 340 may include volatile memory.[0071] The example illustrated in FIG. 3 showing the four processor cores 302, 304, 306, 308, the four private processor core caches 310, 312, 314, 316, two groups of processor cores 302, 304, 306, 308 and the shared processor core caches 320, 322, the one processor shared cache 330, and the one shared system cache 340 is not meant to
limit the various embodiments to these specific numbers of components. The computing device 100, the SoC 102, or the processor 300 may individually or in combination include fewer or more than the four processor cores 302, 304, 306, 308, four private processor core caches 310, 312, 314, 316, and two shared processor core caches 320, 322, one processor shared cache 330, and/or one shared system cache 340 as illustrated and described herein.[0072] For ease of reference, the terms “processor,” “multicore processor,” “processor core,” “controller,” and “microcontroller” may be used interchangeably herein. The descriptions herein of the illustrated computing device and its various components are only meant to be exemplary and not to be limiting. Several of the components of the illustrated example computing device may be variably configured, combined, and separated. Several of the components may be included in greater or fewer numbers and may be located and connected differently within the SoC or separate from the SoC.[0073] FIGS. 4 and 5 illustrate examples of a read optional command 408 and a read optional no data response 410 in FIG. 4, and a write optional command 510 and a write optional no data response 516 in FIG. 5. With reference to FIGS. 1-5, each of the read optional command 408, the read optional no data response 410, the write optional command 510, and the write optional no data response 516 may be represented as ordered bits. Each bit or group of bits in the order of the read optional command 408 and the write optional command 510 may be configured to indicate certain information to a terminal device (e.g., CPU 104, memory 106, communication interface 108, memory interface 110, peripheral device interface 120, processor 124 in FIG. 1, system hub 200, system cache 202, cache controller 204, protocol converter 208, processors 206, 210, 212, 214, memory interface 216, subsystems 218, 220, 222, 232, 234, NoC 224, memory controller 226 in FIG. 2, processor 300, processor cores 302, 304, 306, 308, processor core caches 310, 312, 314, 316, shared processor core caches 320, 322, processor shared cache 330, shared system cache 340 in FIG. 3), read
optional terminal device 402 and/or write optional terminal device 502. Each bit or group of bits in the order of the read optional no data response 410 and the write optional no data response 516 may be configured to indicate certain information to a request device (e.g., CPU 104, processor 124 in FIG. 1, CPU clusters 206, GPU 210, DSP 212, APU 214, subsystems 218, 220, 222, 232, 234 in FIG. 2, processor 300, processor cores 302, 304, 306, 308 in FIG. 3), read optional request device 400 and/or write optional request device 500. In the examples illustrated in FIGS. 4 and 5, portions of the read optional command 408, the read optional no data response 410, the write optional command 510, and the write optional no data response 516 shown using broken lines are information that may optionally be included in any combination in the commands 408, 510 and/or responses 410, 516.[0074] In some embodiments, to implement the read optional command 408 and/or the write optional command 510, a bit order for an existing read and/or write command of a bus protocol may be modified to indicate to read optional terminal device 402 and/or write optional terminal device 502 that the bit order is for the read optional command 408 and/or the write optional command 510.[0075] In some embodiments, to implement the read optional command 408 and/or the write optional command 510, a bus protocol may be updated with a new bit order that may be used to indicate to read optional terminal device 402 and/or write optional terminal device 502 that the bit order is for the read optional command 408 and/or the write optional command 510.[0076] In some embodiments, to implement the read optional command 408 and/or the write optional command 510, an instruction set architecture (ISA) may be updated to generate a new bit order that may be used to indicate to read optional terminal device 402 and/or write optional terminal device 502 that the bit order is for the read optional command 408 and/or the write optional command 510.[0077] A bit and/or groups of bits configured to indicate certain information to a read optional request device 400, read optional terminal device 402, write optional request
device 500, write optional terminal device 502 may be referred to herein as a “field.” The read optional command 408 may include an address field configured to indicate a target data and/or location for the read optional command 408. The read optional command 408 may include a read optional command field configured to indicate the type of the command. In some embodiments, the read optional command 408 may include a read optional terminal device ID field configured to indicate which one or more of read optional terminal devices 402 may respond to the read optional command 408 with the read optional no data response 410. In some embodiments, the read optional command 408 may include a read optional no data response condition field configured to indicate one or more conditions for which the read optional terminal devices 402 may respond to the read optional command 408 with the read optional no data response 410.[0078] In some embodiments the read optional terminal device ID and/or the read optional no data response condition may be inherent to the read optional command 408 and may not need to be explicitly added as values in the read optional terminal device ID field and/or the read optional no data response condition field. For example, a read optional terminal device 402 may be configured to respond to the read optional command 408 under certain conditions without the read optional command 408 having the read optional terminal device ID field and/or the read optional no data response condition field.[0079] The read optional no data response 410 may include a read optional no data response field configured to indicate the type of the response. In some embodiments, the read optional no data response 410 may include a read optional no data response terminal device ID field configured to indicate which read optional terminal device 402 responds to the read optional command 408 with the read optional no data response 410. In some embodiments, the read optional no data response 410 may include a read optional no data response condition field configured to indicate a
condition for which the read optional terminal device 402 responds to the read optional command 408 with the read optional no data response 410.[0080] The write optional command 510 may include an address field configured to indicate a target data and/or location for the write optional command 510. The write optional command 510 may include a write optional command field configured to indicate the type of the command. In some embodiments, the write optional command 510 may include a write optional terminal device ID field configured to indicate which one or combination of write optional terminal devices 502 may respond to the write optional command 510 with the write optional no data response 516. In some embodiments, the write optional command 510 may include a write optional no data response condition field configured to indicate one or more conditions for which the write optional terminal devices 502 may respond to the write optional command 510 with the write optional no data response 516.[0081] In some embodiments the write optional terminal device ID and/or the write optional no data response condition may be inherent to the write optional command 510 and may not need to be explicitly added as values in the write optional terminal device ID field and/or the write optional no data response condition field. For example, a write optional terminal device 502 may be configured to respond to the write optional command 510 under certain conditions without the write optional command 510 having the write optional terminal device ID field and/or the write optional no data response condition field.[0082] The write optional no data response 516 may include a write optional no data response field configured to indicate the type of the response. In some embodiments, the write optional no data response 516 may include a write optional no data response terminal device ID field configured to indicate which write optional terminal device 502 responds to the write optional command 510 with the write optional no data response 516. In some embodiments, the write optional no data response 516 may include a write optional no data response condition field configured to indicate a
condition for which the write optional terminal device 502 responds to the write optional command 510 with the write optional no data response 516.[0083] The read optional command 408 and/or the write optional command 510 may be commands specifically configured to indicate to the read optional terminal devices 402 and/or the write optional terminal devices 502 that the terminal devices 402, 502 may respond to the commands 408, 510 with the read optional no data response 410 and/or the write optional no data response 516. The read optional no data response 410 and/or the write optional no data response 516 may be architected responses specifically for use in response to the read optional command 408 and/or the write optional command 510. The read optional no data response 410 and/or the write optional no data response 516 may be different from and in addition to error or failure signaling, in which the read optional terminal device 402 and/or write optional terminal device 502 may forward the read optional command 408 and/or the write optional command 510 and/or hold the command 408, 510 until the command 408, 510 can be implemented. The read optional no data response 410 and/or the write optional no data response 516 may indicate to the read optional request device 400 and/or write optional request device 500 that the read optional command 408 and/or the write optional command 510 is not implemented.[0084] The read optional no data response 410 and/or the write optional no data response 516 may be an acceptable state, such as an “OK” state, for the read optional request device 400 and/or write optional request device 500, rather than an error or fault state triggered by an error or fault signal for failure of a memory access request. Not implementing the read optional command 408 and/or the write optional command 510 may be an acceptable state in that the implementation of the command 408, 510 may not be required, for example, even when implementing the command 408, 510 may be critical. As such, not implementing the read optional command 408 and/or the write optional command 510 may not be an error or fault state triggered by
an error or fault signal that may be configured to indicate no, incomplete, improper, etc. implementation of a required memory access.[0085] The read optional request device 400 and/or write optional request device 500 may be configured to retry and/or abandon the command 408, 510 in response. In some embodiments, the read optional request device 400 and/or write optional request device 500 may be preconfigured with how to respond to the read optional no data response 410 and/or the write optional no data response 516. In some embodiments, the read optional request device 400 and/or write optional request device 500 may determine how to respond to the read optional no data response 410 and/or the write optional no data response 516.[0086] The read optional no data response 410 and/or the write optional no data response 516 may be issued by any or specific read optional terminal devices 402 and/or write optional terminal devices 502 along a transaction path between the read optional request device 400 and/or write optional request device 500 and a main memory (e.g., memory 106 in FIG. 1, random access memory 228 in FIG. 2). The read optional no data response 410 and/or the write optional no data response 516 may be issued based on one or more conditions, as described further herein. In some embodiments, the read optional no data response 410 and/or the write optional no data response 516 may indicate to the read optional request device 400 and/or write optional request device 500 the read optional terminal devices 402 and/or the write optional terminal devices 502 issuing the no data response 410, 516.[0087] In some embodiments, the read optional no data response 410 and/or the write optional no data response 516 may indicate to the read optional request device 400 and/or write optional request device 500 the condition for the no data response 410, 516. In some embodiments, a response of the read optional request device 400 and/or write optional request device 500 to the read optional no data response 410 and/or the write optional no data response 516 may depend on the read optional terminal devices 402 and/or the write optional terminal devices 502 issuing the read optional no data
response 410 and/or the write optional no data response 516 and/or the condition for issuing the no data response 410, 516.[0088] In some embodiments, the read optional command 408 and the read optional no data response 410 may be used to avoid costly work. For example, the read optional command 408 and the read optional no data response 410 may be used to avoid infinite loops or livelocks for data that may be repeatedly fetched from main memory but repeatedly evicted before use. For a more specific example, a data prefetcher may receive the read optional command 408 and issue a read optional no data response 410 in response to requested data not being in a cache, rather than respond by fetching the missing data from main memory. In this manner, the prefetcher may not repeatedly fetch data from main memory that is evicted before it is used, and the read optional request device 400 may be configured to abandon the request or reissue the request in a timely manner to use the data. As another example, a data prefetcher may generate the read optional command 408 for fetching data. In some embodiments, the data prefetcher may generate the read optional command 408 by default and/or by an algorithm. For example, an algorithm for generating the read optional command 408 may be based on an expected probability of use of fetched data. The algorithm may prompt generating the read optional command 408 in response to a probability exceeding and/or falling short of a read optional command generation threshold. In some embodiments, the data prefetcher configured to generate a read optional command 408 may not be integrated with a cache targeted by the read optional command 408, and may send the read optional command 408 to the targeted cache.[0089] In some embodiments, the read optional command 408 may include a value in the read optional terminal device ID field configured to indicate to the prefetcher that the prefetcher may respond with the read optional no data response 410. In some embodiments, the read optional command 408 may include a value in the read optional no data response condition field to indicate to a read optional terminal device
402 to issue the read optional no data response 410 in response to requested data missing from the cache.[0090] For another example, the read optional command 408 and the read optional no data response 410 may be used to avoid excessive, incorrect, and/or disallowed data reads by being used as an out of bounds check, such as for an image buffer without knowing valid ranges. In such an example, a read optional command 408 may be issued to retrieve data from an image buffer. In response to the requested data being out of bounds for the image buffer, a read optional terminal device 402 having information of the buffer ranges may respond to the read optional command 408 with the read optional no data response 410, rather than retrieving the requested data from a device further down the transaction path. As such, costly repetitive and/or improper data reads may be avoided. The read optional terminal device 402 having information of the buffer ranges may include a memory controller, a system memory management unit (SMMU), a cache, and/or other specialized functional blocks. In some embodiments, the read optional command 408 may include a value in the read optional terminal device ID field configured to indicate to the read optional terminal device 402 that the read optional terminal device 402 may respond with the read optional no data response 410. In some embodiments, the read optional command 408 may include a value in the read optional no data response condition field to indicate to a read optional terminal device 402 to issue the read optional no data response 410 in response to requested data being out of bounds for the image buffer.[0091] Similarly, the write optional command 510 and the write optional no data response 516 may be used to avoid excessive, incorrect, and/or disallowed data writes by being used as an out of bounds check, such as for an image buffer without knowing valid ranges. In such an example, a write optional command 510 may be issued to write data to an image buffer. In response to the requested write location being out of bounds for the image buffer, a write optional terminal device 502 having information of the buffer ranges may respond to the write optional command 510 with the write
optional no data response 516, rather than writing the requested data to a location of a memory out of bounds for the image buffer. As such, costly and/or improper writes may be avoided. The write optional terminal device 502 having information of the buffer ranges may include a memory controller, a system memory management unit (SMMU), a cache, and/or other specialized functional blocks. In some embodiments, the write optional command 510 may include a value in the write optional terminal device ID field configured to indicate to the write optional terminal device 502 that the write optional terminal device 502 may respond with the write optional no data response 516. In some embodiments, the write optional command 510 may include a value in the write optional no data response condition field to indicate to a write optional terminal device 502 to issue the write optional no data response 516 in response to a requested data write location being out of bounds for the image buffer.[0092] As another example, the read optional command 408 and the read optional no data response 410 may be used to avoid costly work based on the cost of the work. For example, certain operations may have associated costs and an individual cost of an operation and/or a cumulative cost of operations for implementing the read optional command 408 may exceed a cost threshold. Such operations may include read operations, fetch operations, memory management operations, memory coherency operations, etc. Cost may be measured, for example, on a basis of any number and combination of power, time, cycles, bandwidth, resource requirement, effect on latency for other operations, etc. The read optional terminal device 402 may be configured to determine a cost for implementing the read optional command 408 and compare the cost to the cost threshold. The read optional terminal device 402 may implement the read optional command 408 in response to the cost of implementing the read optional command 408 not exceeding the cost threshold, and not implement the read optional command 408 in response to the cost of implementing the read optional command 408 exceeding the cost threshold. In some embodiments, the cost threshold for the read optional command 408 may be inherent for the read optional command 408. In some embodiments, the read optional command 408 may include a value in
the read optional no data response condition field to indicate to a read optional terminal device 402 to issue the read optional no data response 410 in response to the cost of implementing the read optional command 408 exceeding the cost threshold. In some embodiments, the read optional command 408 may include a value in the read optional no data response condition field to indicate to a read optional terminal device 402 a value of the cost threshold.[0093] Similarly, the write optional command 510 and the write optional no data response 516 may be used to avoid costly work based on the cost of the work. For example, certain operations may have associated costs and an individual cost of an operation and/or a cumulative cost of operations for implementing the write optional command 510 may exceed a cost threshold. Such operations may include write operations, memory management operations, memory coherency operations, etc. Cost may be measured, for example, on a basis of any number and combination of power, time, cycles, bandwidth, resource requirement, effect on latency for other operations, etc. The write optional terminal device 502 may be configured to determine a cost for implementing the write optional command 510 and compare the cost to the cost threshold. The write optional terminal device 502 may implement the write optional command 510 in response to the cost of implementing the write optional command 510 not exceeding the cost threshold, and not implement the write optional command 510 in response to the cost of implementing the write optional command 510 exceeding the cost threshold. In some embodiments, the cost threshold for the write optional command 510 may be inherent for the write optional command 510. In some embodiments, the write optional command 510 may include a value in the write optional no data response condition field to indicate to a write optional terminal device 502 to issue the write optional no data response 516 in response to the cost of implementing the write optional command 510 exceeding the cost threshold. In some embodiments, the write optional command 510 may include a value in the write optional no data response condition field to indicate to a write optional terminal device 502 a value of the cost threshold.
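The cost-threshold behavior described in paragraphs [0092] and [0093] can be illustrated with a brief C sketch. This is a sketch under stated assumptions, not the specification's implementation: the cycle-based cost unit, the per-operation costs, and all names (estimate_cost, handle_costed_command, respond_no_data) are hypothetical, since the specification leaves the cost basis (power, time, cycles, bandwidth, etc.) open.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-operation costs, e.g., in cycles. */
enum { COST_CACHE_LOOKUP = 4, COST_COHERENCY_OP = 40, COST_MEMORY_FETCH = 200 };

/* Cumulative cost of the operations needed to implement the command. */
static uint32_t estimate_cost(int needs_coherency_op, int needs_memory_fetch) {
    uint32_t cost = COST_CACHE_LOOKUP;
    if (needs_coherency_op) cost += COST_COHERENCY_OP;
    if (needs_memory_fetch) cost += COST_MEMORY_FETCH;
    return cost;
}

static void implement_command(void) { printf("command implemented\n"); }

static void respond_no_data(uint32_t cost, uint32_t threshold) {
    printf("no data response: cost %u exceeds threshold %u\n", cost, threshold);
}

/* Implement the optional command only when its estimated cost does not
 * exceed the cost threshold; otherwise issue a no data response. */
static void handle_costed_command(uint32_t cost_threshold,
                                  int needs_coherency_op, int needs_memory_fetch) {
    uint32_t cost = estimate_cost(needs_coherency_op, needs_memory_fetch);
    if (cost <= cost_threshold)
        implement_command();
    else
        respond_no_data(cost, cost_threshold);
}

int main(void) {
    handle_costed_command(100u, 1, 0);  /* cost 44: implemented         */
    handle_costed_command(100u, 1, 1);  /* cost 244: no data response   */
    return 0;
}

A real terminal device might measure cost in power, time, bandwidth, or a combination of factors, and might take the threshold from the no data response condition field or treat it as inherent to the command, as described in the paragraphs above.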
[0094] In some embodiments, the read optional command 408 and the read optional no data response 410 may be used to avoid exceptions, errors, and/or faults. The read optional terminal device 402 may determine that implementing the read optional command 408 may result in an exception, error, and/or fault. In response to determining that implementing the read optional command 408 may result in an exception, error, and/or fault, the read optional terminal device 402 may not implement the read optional command 408 and return a read optional no data response 410. Not implementing the read optional command 408 may avoid an exception, error, and/or fault resulting from implementing the read optional command 408. For example, the read optional terminal device 402 may determine that implementing the read optional command 408 may result in a page fault. In response to determining that implementing the read optional command 408 may result in a page fault, the read optional terminal device 402 may not implement the read optional command 408 and return a read optional no data response 410. In some embodiments, the read optional command 408 may include a value in the read optional no data response condition field to indicate to a read optional terminal device 402 to issue the read optional no data response 410 in response to determining that implementing the read optional command 408 may result in an exception, error, and/or fault.[0095] Similarly, in some embodiments, the write optional command 510 and the write optional no data response 516 may be used to avoid exceptions, errors, and/or faults. The write optional terminal device 502 may determine that implementing the write optional command 510 may result in an exception, error, and/or fault. In response to determining that implementing the write optional command 510 may result in an exception, error, and/or fault, the write optional terminal device 502 may not implement the write optional command 510 and return a write optional no data response 516. Not implementing the write optional command 510 may avoid an exception, error, and/or fault resulting from implementing the write optional command 510. For example, the write optional terminal device 502 may determine that implementing the write optional command 510 may result in a page fault. In response
to determining that implementing the write optional command 510 may result in a page fault, the write optional terminal device 502 may not implement the write optional command 510 and return a write optional no data response 516. In some embodiments, the write optional command 510 may include a value in the write optional no data response condition field to indicate to a write optional terminal device 502 to issue the write optional no data response 516 in response to determining that implementing the write optional command 510 may result in an exception, error, and/or fault.[0096] In some embodiments, the read optional command 408 and the read optional no data response 410 may be used to free resources that may otherwise be held until a process completes. For example, a GPU may issue a read optional command 408 to request image data from a memory that may be configured to load partially resident textures. The memory may include a cache, a random access memory (RAM), such as a double data rate (DDR) RAM, etc. In response to the requested data missing from the memory, a memory controller may respond to the read optional command 408 with the read optional no data response 410, rather than holding the request until the data is retrieved and written to the memory. In such an example, the read optional command 408 may complete and resources may be released, such as threads, cores, memories, NoC bandwidth, transaction storage/buffering and tracking capacity, etc. that were to be used to process the requested data, and allow other processes to be implemented. In addition, the memory controller may fetch the requested data and the GPU may reissue the read optional command 408 to request the data at a later time. As such, resources may be freed for use while missing data is retrieved, rather than stalling execution of other processes while waiting for the data to be retrieved.[0097] Similarly, in some embodiments, the write optional command 510 and the write optional no data response 516 may be used to free resources that would otherwise be held until a process completes. For example, a GPU may issue a write optional command 510 to request writing image data to a memory that may be
configured to store partially resident textures. The memory may include a cache, a RAM, such as DDR RAM, etc. In response to the memory being full, a memory controller may respond to the write optional command 510 with the write optional no data response 516, rather than holding the request until space is available in the memory to write the data and the data is written to the memory. In such an example, the write optional command 510 may complete and resources may be released, such as threads, cores, memories, NoC bandwidth, etc. that were to be used to process the data write, and allow other processes to be implemented. In addition, the memory controller may clear space in the memory and the GPU may reissue the write optional command 510 to request to write the data at a later time. As such, resources may be freed for use while space in the memory is unavailable, rather than stalling execution of other processes while waiting for space in the memory to become available.[0098] In some embodiments, the read optional command 408 may include a value in the read optional terminal device ID field configured to indicate to the memory controller that the memory controller may respond with the read optional no data response 410. In some embodiments, the read optional command 408 may include a value in the read optional no data response condition field to indicate to a read optional terminal device 402 to issue the read optional no data response 410 in response to requested data missing from the cache.[0099] In some embodiments, the read optional command 408 and the read optional no data response 410 and/or the write optional command 510 and the write optional no data response 516 may be used to improve data streaming between consumer and producer processing devices by avoiding writes to main memory and synchronization operations between processing devices. For example, processing devices may share streamed data by writing and reading the streamed data in a shared buffer, such as in a cache. A consumer processing device issuing a read optional command 408 for streamed data in the shared buffer may receive a read optional no data response 410 from a memory controller in response to the data not yet being written to the cache by
the producer processing device. The consumer processing device may retry the read optional command 408 for streamed data until the data appears in the shared buffer. As such, the memory controller avoids having to issue read commands to the main memory for a miss of data in the shared buffer in response to the consumer processor’s read optional command 408. A producer processing device issuing a write optional command 510 for streamed data in the shared buffer may receive a write optional no data response 516 from a memory controller in response to the shared buffer and/or cache being full. The producer processing device may retry the write optional command 510 for streamed data until the data is added to the shared buffer. As such, the memory controller avoids having to evict data from the shared buffer and issue write commands to the main memory for the data in the shared buffer in response to the producer processor’s write optional command 510.[00100] In some embodiments, the shared buffer memory controller may provide additional data to the consumer processor to improve timing of the reissue of the read optional command 408, such as by including a last written address, a delta to a current address, or timing information. In some embodiments, the consumer and producer may have a smaller synchronization granularity (slice-based rather than frame-based communication), and the consumer processor may not have to synchronize at frame boundaries. In some embodiments, data streaming could be part of page table attributes and a memory controller may convert normal read commands from a consumer processor to read optional commands 408, and perform retries of the read optional commands 408 until the data is available in the shared buffer.[00101] In some embodiments, the read optional command 408 and the read optional no data response 410 and/or the write optional command 510 and the write optional no data response 516 may be used to implement secure data transactions. For example, where hardware and/or software does not know whether it is operating securely, the read optional command 408 and/or the write optional command 510 could be used to access possibly secure buffers and return the read optional no data response 410
and/or the write optional no data response 516 without faulting. For another example, not implementing the read optional command 408 and/or the write optional command 510 may be based on permissions not allowing the read optional request device 400, write optional request device 500, a user, a processes, an application, etc. issuing the read optional command 408 and/or the write optional command 510 access to read and/or write. In some embodiments, the read optional command 408 may include a value in the read optional no data response condition field to indicate to a read optional terminal device 402 to issue the read optional no data response 410 in response to being denied access to a requested location and/or target data. In some embodiments, the write optional command 510 may include a value in the write optional no data response condition field to indicate to a write optional terminal device 502 to issue the write optional no data response 516 in response to being denied access the requested location.[00102] The example in FIG. 4 illustrates a read optional command 408 and read optional no data response 410 transaction. The read optional request device 400 may transmit the read optional command 408 to the read optional terminal device 402. In some embodiments, the read optional command 408 may be transmitted via an existing read address and/or command channel 404 of an existing bus protocol. In some embodiments, the read optional command 408 may include the read optional terminal device ID field. In some embodiments, the read optional command 408 may include the read optional no data response condition field. The read optional terminal device 402 may receive the read optional command 408.[00103] The read optional terminal device 402 may determine that it is a read optional terminal device 402 for the read optional command 408. In some embodiments, the read optional terminal device 402 may be configured as a read optional terminal device 402 for the read optional command 408. In some embodiments, the read optional terminal device 402 may interpret that it is a read optional terminal device 402 for the read optional command 408 from the read optional terminal device ID field. The read optional terminal device 402 may determine that it may terminate the
read optional command 408 based on a condition for terminating the read optional command 408. In some embodiments, the read optional terminal device 402 may be configured to determine that it may terminate the read optional command 408 based on a condition for terminating the read optional command 408. In some embodiments, the read optional terminal device 402 may interpret and determine that it may terminate the read optional command 408 based on a condition for terminating the read optional command 408 from the read optional no data response condition field. The read optional terminal device 402 may terminate the read optional command 408 and prevent the read optional command 408 from being forwarded to the main memory.[00104] The read optional terminal device 402 may generate and transmit the read optional no data response 410, having the read optional no data response field, to the read optional request device 400. In some embodiments, the read optional no data response 410 may be transmitted via an existing read data channel 406 of an existing bus protocol. In some embodiments, the read optional no data response 410 may include the read optional no data response terminal device ID field. In some embodiments, the read optional no data response 410 may include the read optional no data response condition field. The read optional request device 400 may receive the read optional no data response 410.[00105] The example in FIG. 5 illustrates a write optional command 510 and write optional no data response 516 transaction. The write optional request device 500 may transmit the write optional command 510 to the write optional terminal device 502. In some embodiments, the write optional command 510 may be transmitted via an existing write address and/or command channel 504 of an existing bus protocol. In some embodiments, the write optional command 510 may include the write optional terminal device ID field. In some embodiments, the write optional command 510 may include the write optional no data response condition field. The write optional request device 500 may transmit write data 514 on a write data channel 506. The write
optional terminal device 502 may receive the write optional command 510 and the write data 514.[00106] The write optional terminal device 502 may determine that it is a write optional terminal device 502 for the write optional command 510. In some embodiments, the write optional terminal device 502 may be configured as a write optional terminal device 502 for the write optional command 510. In some embodiments, the write optional terminal device 502 may interpret that it is a write optional terminal device 502 for the write optional command 510 from the write optional terminal device ID field. The write optional terminal device 502 may determine that it may terminate the write optional command 510 based on a condition for terminating the write optional command 510. In some embodiments, the write optional terminal device 502 may be configured to determine that it may terminate the write optional command 510 based on a condition for terminating the write optional command 510. In some embodiments, the write optional terminal device 502 may interpret and determine that it may terminate the write optional command 510 based on a condition for terminating the write optional command 510 from the write optional no data response condition field. The write optional terminal device 502 may terminate the write optional command 510.[00107] The write optional terminal device 502 may generate and transmit the write optional no data response 516, having the write optional no data response field, to the write optional request device 500. In some embodiments, the write optional no data response 516 may be transmitted via an existing write response channel 508 of an existing bus protocol. In some embodiments, the write optional no data response 516 may include the write optional no data response terminal device ID field. In some embodiments, the write optional no data response 516 may include the write optional no data response condition field. The write optional request device 500 may receive the write optional no data response 516.
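Although the embodiments described above do not prescribe any particular software implementation, the following C sketch illustrates the behavior of the two transactions of FIGS. 4 and 5 under assumed data structures: a terminal device, modeled here as a small cache, either satisfies a read optional or write optional command or terminates it with a no data response rather than forwarding it toward main memory. The names cache_t, resp_t, read_optional, and write_optional are hypothetical and are introduced only for illustration; they do not correspond to fields or functions of any actual bus protocol.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NLINES 4

/* Simplified line store standing in for a read/write optional terminal
 * device such as a cache. */
typedef struct {
    bool     valid[NLINES];
    uint64_t addr[NLINES];
    uint32_t data[NLINES];
} cache_t;

typedef enum { RESP_DATA, RESP_WRITE_DONE, RESP_NO_DATA } resp_kind_t;

typedef struct {
    resp_kind_t kind;
    uint32_t    data;   /* meaningful only when kind == RESP_DATA */
} resp_t;

/* Read optional: return the data if present; otherwise terminate the
 * command with a no data response instead of forwarding it to main
 * memory. */
resp_t read_optional(cache_t *c, uint64_t addr)
{
    for (int i = 0; i < NLINES; i++)
        if (c->valid[i] && c->addr[i] == addr)
            return (resp_t){ RESP_DATA, c->data[i] };
    return (resp_t){ RESP_NO_DATA, 0 };
}

/* Write optional: write if a free line exists; otherwise terminate the
 * command with a no data response instead of evicting to main memory. */
resp_t write_optional(cache_t *c, uint64_t addr, uint32_t value)
{
    for (int i = 0; i < NLINES; i++)
        if (!c->valid[i]) {
            c->valid[i] = true;
            c->addr[i]  = addr;
            c->data[i]  = value;
            return (resp_t){ RESP_WRITE_DONE, 0 };
        }
    return (resp_t){ RESP_NO_DATA, 0 };
}

int main(void)
{
    cache_t c;
    memset(&c, 0, sizeof c);

    /* Consumer reads before the producer has written: no data response. */
    resp_t r = read_optional(&c, 0x1000);
    printf("first read: %s\n", r.kind == RESP_NO_DATA ? "no data" : "data");

    /* Producer writes; consumer retries and now receives the data. */
    write_optional(&c, 0x1000, 42);
    r = read_optional(&c, 0x1000);
    printf("retried read: %u\n", r.data);
    return 0;
}

The retry of the read until the producer’s write lands reflects the shared-buffer streaming pattern described in paragraph [0099].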
[00108] FIG. 6 illustrates an example of a read optional command used in avoiding livelock suitable for implementing various embodiments. With reference to FIGS. 1- 6, the read optional request device 400 may transmit a read request 600 for data “X” from the read optional terminal device 402. The read optional terminal device 402 may receive the read request 600 for data “X”. The read optional terminal device 402 may load data “X” and “Y” in operation 602 in response to the read request 600 for data “X”. The read optional terminal device 402 may respond to the read request 600 by sending data “X” and a request to prompt a read request 604 of data “Y” to the read optional request device 400. The read optional request device 400 may receive the sent data “X” and the request to prompt a read command of data “ Y” 604. The read optional terminal device 402 may evict data “X” in operation 606 and later evict data “Y” 608.[00109] After the eviction of data “Y” in operation 608, the read optional request device 400 may send a read optional command 610 (e.g., read optional command 408 in FIG. 4) for data “Y”. The read optional terminal device 402 may receive the read optional command 610 for data “Y” and respond by sending a read optional no data response 612 (read optional no data response 410 in FIG. 4). In some embodiments, the response to receiving the read optional command for data “Y” 610 by sending the read optional no data response 612 may differ from a response to a common read command for data “ Y”, in which the read optional terminal device 402 may forward the read command for data “Y” to a main memory, receive the data “X” and data “Y” from the main memory, and respond to the common read command for data “ Y” by sending the data “Y” and a request to prompt a read command of data “X”.[00110] The read optional request device 400 may receive the read optional no data response 612 and abandon the requesting data “Y” in operation 614. In some embodiments, the abandonment of the request for data “Y” in operation 614 in response to receiving the read optional no data response 612 may differ from a response to receiving the data “ Y” and a request to prompt a read command of data
“X” in that the read optional request device 400 and the read optional terminal device 402 may repeat signals and operations 600-608, send a common read command for data “ Y”, forward the read command for data “ Y” to a main memory, receive the data “X” and data “ Y” from the main memory, and respond to the common read command for data “ Y” by sending the data “ Y” and a request to prompt a read command of data “X”. This may cause a livelock loop where the requests for data are repeatedly sent after eviction of the data, causing data to be repeatedly retrieved from the main memory. However, sending the read optional command 610 for data “Y”, sending the read optional no data response 612, and abandoning the request for data “Y” in operation 614 avoids repeated implementation of the signals and operations 600-608.[00111] FIG. 7 illustrates an example of a read optional command suitable for implementing various embodiments. With reference to FIGS. 1-7, in this example, the read optional request device 400 may issue a read optional command (e.g., read optional command 408 in FIG. 4) 700 for data at an address of the read optional terminal device 402. As a non-limiting example, the read optional terminal device 402 may be a cache.[00112] The read optional command 700 may miss in the read optional terminal device 402 when the requested data is not located at the address of the read optional terminal device 402 specified by the read optional command 700. In response to the miss for the data in the read optional terminal device 402, the read optional terminal device 402 may respond by sending a read optional no data response (read optional no data response 410 in FIG. 4) 702. The read optional request device 400 may receive the read optional no data response 702, and in response, may abandon or retry the read optional command 700. Unlike a traditional read command, the read optional terminal device 402 may not forward the read optional command 700 to the main memory 704 (e.g., memory 104 in FIG. 1, random access memory 228 in FIG. 2). As such, the read optional command 700 and the read optional no data response 702 may use fewer resources and have lower latency than forwarding a traditional read command to the
main memory 704, retrieving the data from the main memory 704, writing the data to the read optional terminal device 402, and returning the data to the read optional request device 400.[00113] FIG. 8 illustrates an example of a read optional command for data streamlining suitable for implementing various embodiments. With reference to FIGS. 1-8, data streaming may be implemented between a producer and consumer pair of a producer write optional request device 500 and a consumer read optional request device 400. In some embodiments, the write optional request device 500 may issue a write optional command (e.g., write optional command 510 in FIG. 5) 802 to a read/write optional terminal device 800 (e.g., read optional terminal device 402 in FIG. 4, write optional terminal device in FIG. 5). As a non-limiting example, the read/write optional terminal device 800 may be a cache. In the example illustrated in FIG. 8, the write optional command 802 may be successful and may result in writing data to the read/write optional terminal device 800. However, in some embodiments, when no space is available to write the data to the read/write optional terminal device 800, the read/write optional terminal device 800 may return a write option no data response (e.g., write optional no data response 516 in FIG. 5). In some embodiments, the data of the write optional command 802 may be marked non-dirty or nonwriteback.[00114] The read optional request device 400 may issue a read optional command (e.g., read optional command 408 in FIG. 4) 700 for data at an address of the read/write optional terminal device 800. The read optional command 700 may miss in the read optional terminal device 402 when the requested data is not located at the address of the read optional terminal device 402 specified by the read optional command 700. For example, the read optional command 700 may be issued before the write optional command 802 for the same data. In response to the miss for the data in the read/write optional terminal device 800, the read/write optional terminal device 800 may respond by sending a read optional no data response (read optional no
data response 410 in FIG. 4) 702. The read optional request device 400 may receive the read optional no data response 702, and in response, may abandon or retry the read optional command 700. In some embodiments, the data of the read optional command 700 may be marked non-dirty or non-writeback. In streaming data using re-used addresses in the read/write optional terminal device 800, reads may require that a read line be invalidated so that the read line may accept a next write. Invalidating the read line may be done as part of the read optional command 700 or automatically by the cache.[00115] Unlike a traditional data streaming using traditional read and write commands, there may be no need for a data ready signal from the producer to the consumer a data synchronization mechanism to ensure the producer has written data a cache before a consumer attempts to read the data from the cache. Rather, a read optional no data response 702 may be used to inform the consumer read optional request device 400 that the data is not located in the read/write optional terminal device 800, and the read optional request device 400 may retry the read optional command 700 until it results in return of the requested data. Further, the read/write optional terminal device 800 may forgo writing back data to the main memory 704, which traditionally results from faster writes to than reads from a cache that overflow cache space, and from dirty data that has been read by the consumer that won’t be addressed again. The write optional command 802 by the producer write optional request device 500 may be terminated by the read/write optional terminal device 800 based on a lack of space in the read/write optional terminal device 800. The data may not be written to the read/write optional terminal device 800 until the consumer read optional request device 400 has read and invalidated data in the read/write optional terminal device 800 creating space for the write optional request device 500 to write more data to the read/write optional terminal device 800.[00116] FIG. 9 illustrates a method 900 for read optional and/or write optional commands according to an embodiment. With reference to FIGS. 1-9, the method 900
may be implemented in a computing device (e.g., computing device 100 in FIG. 1), in hardware, in software executing in a processor, or in a combination of a software- configured processor and dedicated hardware (e.g., CPU 104, memory 106, communication interface 108, memory interface 110, peripheral device interface 120, processor 124 in FIG. 1, system hub 200, system cache 202, cache controller 204, protocol converter 208, processors 206, 210, 212, 214, memory interface 216, subsystems 218, 220, 222, 232, 234, NoC 224, memory controller 226 in FIG. 2, processor 300, processor cores 302, 304, 306, 308, processor core caches 310, 312, 314, 316, shared processor core caches 320, 322, processor shared cache 320, shared system cache 340 in FIG 3, read optional request device 400 in FIGS. 4 and 6-8, read optional terminal device 402 in FIGS. 4, 6, and 7, write optional request device 500 in FIGS 5 and 8, write optional terminal device 502 in FIG. 5, read/write optional terminal device 800 in FIG. 8). In order to encompass the alternative configurations enabled in various embodiments, the hardware implementing the method 900 is referred to herein as an “optional command device.”[00117] In block 902, the optional command device may generate a read optional command (e.g., read optional command 408 in FIG. 4) and/or write optional command (e.g., write optional command 510 in FIG. 5). In some embodiments, an optional command request device may generate an optional command in block 902. In some embodiments a read optional request device may generate a read optional command in block 902. In some embodiments, a write optional request device may generate a write optional command in block 902.[00118] As described herein, the read optional command may include an address field configured to indicate a target data and/or location for the read optional command. The read optional command may include a read optional command field configured to indicate the type of the command. In some embodiments, the read optional command may include a read optional terminal device ID field configured to indicate which one or more of read optional terminal devices may respond to the read optional command
with a read optional no data response (e.g., read optional no data response 410 in FIG. 4). In some embodiments, the read optional command may include a read optional no data response condition field configured to indicate one or more conditions for which the read optional terminal devices may respond to the read optional command with the read optional no data response. In some embodiments the read optional terminal device ID and/or the read optional no data response condition may be inherent to the read optional command and may not need to be explicitly added as values in the read optional terminal device ID field and/or the read optional no data response condition field.[00119] As described herein, the write optional command may include an address field configured to indicate a target data and/or location for the write optional command. The write optional command may include a write optional command field configured to indicate the type of the command. In some embodiments, the write optional command may include a write optional terminal device ID field configured to indicate which one or combination of write optional terminal devices may respond to the write optional command with a write optional no data response (e.g., write optional no data response 516 in FIG. 5). In some embodiments, the write optional command may include a write optional no data response condition field configured to indicate one or more conditions for which the write optional terminal devices may respond to the write optional command with the write optional no data response. In some embodiments the write optional terminal device ID and/or the write optional no data response condition may be inherent to the write optional command and may not need to be explicitly added as values in the write optional terminal device ID field and/or the write optional no data response condition field.[00120] In block 904, the optional command device may transmit the read optional command and/or the write optional command. In some embodiments, the optional command request device may transmit the optional command in block 904. In some embodiments the read optional request device may transmit the read optional
command in block 904. In some embodiments, the write optional request device may transmit the write optional command in block 904. The optional command device may transmit the read optional command to any device along a transaction path for the read optional command between the optional command device and a main memory (e.g., memory 104 in FIG. 1, random access memory 228 in FIG. 2, main memory 704 in FIGS. 7 and 8), such as a read optional terminal device. The optional command device may transmit the write optional command to any device along a transaction path for the write optional command between the optional command device and the main memory, such as a write optional terminal device.[00121] In block 906, the optional command device may receive the read optional command and/or the write optional command. In some embodiments, an optional command terminal device may receive the optional command in block 906. In some embodiments the read optional terminal device may receive the read optional command in block 906. In some embodiments, the write optional terminal device may receive the write optional command in block 906.[00122] In determination block 908, the optional command device may determine whether the read optional command and/or the write optional command may be implemented. In some embodiments, the optional command terminal device may determine whether the optional command may be implemented in determination block 908. In some embodiments the read optional terminal device may determine whether the read optional command may be implemented in determination block 908. In some embodiments, the write optional terminal device may determine whether the write optional command may be implemented in determination block 908. In some embodiments, the optional command device may determine whether the read optional command may be implemented based on whether target data of the read optional command is located by the optional command device. In some embodiments, the optional command device may determine whether the read optional command may be implemented based on whether a condition indicated in the read optional no data response
condition field of the read optional command is met. In some embodiments, the optional command device may determine whether the read optional command may be implemented based on a condition inherent to the read optional command. For example, the condition indicated in the read optional no data response condition field and/or the condition inherent to the read optional command may include not locating requested data by the optional command device, requested data being out of bounds for a buffer, a cost of implementing the read optional command exceeding a cost threshold, implementation of the read optional command resulting in an exception, error, and/or fault, being denied access to a requested location and/or target data, etc.[00123] In some embodiments, the optional command device may check the address field of the read optional command configured to indicate target data and/or a location for the read optional command. The optional command device may determine whether the target data is at the location. In response to determining that the target data is not at the location, the optional command device may determine that the read optional command may not be implemented. In some embodiments, the optional command device may determine not to implement the read optional command based on the condition of the read optional no data response condition field being that the target data is not at the location.[00124] In some embodiments, the optional command device may check the address field of the read optional command configured to indicate a target data and/or location for the read optional command. The optional command device may determine whether the location of the target data is within boundaries of a buffer. In response to determining that the location of the target data is not within the boundaries of the buffer, the optional command device may determine that the read optional command may not be implemented. In some embodiments, the optional command device may determine not to implement the read optional command based on the condition of the read optional no data response condition field being that the location of the target data is not within the boundaries of the buffer.
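As a minimal, non-authoritative sketch of the address-based checks of paragraphs [00123] and [00124], the determination might be expressed as follows in C; the buffer_t type, its field names, and the function name are assumptions introduced here for illustration.

#include <stdbool.h>
#include <stdint.h>

/* Buffer bounds known to the optional command device (illustrative). */
typedef struct {
    uint64_t base;   /* first valid address of the buffer */
    uint64_t limit;  /* one past the last valid address   */
} buffer_t;

/* Returns true when the read optional command may be implemented;
 * false indicates the device should answer with a read optional no
 * data response instead of forwarding the command. */
bool read_optional_may_implement(const buffer_t *buf, uint64_t addr,
                                 bool target_data_present)
{
    if (addr < buf->base || addr >= buf->limit)
        return false;   /* location out of bounds for the buffer */
    if (!target_data_present)
        return false;   /* target data is not at the location    */
    return true;        /* implement the read optional command   */
}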
[00125] In some embodiments, the optional command device may calculate a cost for implementing the read optional command. The cost may be calculated based on a cost of any number and combination of operations for implementing the read optional command. Such operations may include read operations, fetch operations, memory management operations, memory coherency operations, etc. Cost may be measured, for example, on a basis of any number and combination of power, time, cycles, bandwidth, resource requirement, effect on latency for other operations, etc. The optional command device may compare the calculated cost for implementing the read optional command to a cost threshold. The optional command device may determine whether the cost for implementing the read optional command exceeds the cost threshold. In response to determining that the cost for implementing the read optional command exceeds the cost threshold, the optional command device may determine that the read optional command may not be implemented. In some embodiments, the optional command device may determine not to implement the read optional command based on the condition of the read optional no data response condition field being that the cost for implementing the read optional command exceeds the cost threshold. In some embodiments, the condition of the read optional no data response condition field may include a value indicating the value of the cost threshold.[00126] In some embodiments, the optional command device may determine that implementing the read optional command may result in an exception, error, and/or fault. In response to determining that implementing the read optional command may result in an exception, error, and/or fault, the optional command device may determine that the read optional command may not be implemented. Not implementing the read optional command may avoid an exception, error, and/or fault resulting from implementing the read optional command. In some embodiments, the optional command device may determine not to implement the read optional command based on the condition of the read optional no data response condition field being that implementing the read optional command may result in an exception, error, and/or fault.
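The cost-threshold determination of paragraph [00125] might be sketched as below; the operation counts and weights are invented for illustration, since the disclosure leaves the cost basis open (power, time, cycles, bandwidth, resource requirements, latency effects, etc.).

#include <stdbool.h>
#include <stdint.h>

/* Illustrative inputs to a cost estimate for a read optional command. */
typedef struct {
    uint32_t fetch_ops;      /* fetches from a lower memory level */
    uint32_t coherency_ops;  /* snoops/invalidations required     */
    uint32_t mgmt_ops;       /* memory management operations      */
} read_cost_t;

/* Weighted sum standing in for whatever cost basis a device uses. */
static uint32_t estimate_cost(const read_cost_t *in)
{
    return 4u * in->fetch_ops + 8u * in->coherency_ops + 2u * in->mgmt_ops;
}

/* True when the estimated cost exceeds the threshold, in which case
 * the device may respond with a read optional no data response rather
 * than implement the command. */
bool cost_exceeds_threshold(const read_cost_t *in, uint32_t threshold)
{
    return estimate_cost(in) > threshold;
}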
[00127] In some embodiments, the optional command device may determine that implementing the read optional command may violate a security protocol. For example, the optional command device may determine that implementing the read optional command may violate a security protocol based on denial of access to a target data and/or location. In response to determining that implementing the read optional command may violate a security protocol, the optional command device may determine that the read optional command may not be implemented. In some embodiments, the optional command device may determine not to implement the read optional command based on the condition of the read optional no data response condition field being that access to the requested target data and/or location is denied.[00128] In some embodiments, the optional command device may determine whether the write optional command may be implemented based on whether there is space to write the target data of the write optional command in the optional command device. In some embodiments, the optional command device may determine whether the write optional command may be implemented based on whether a condition indicated in the write optional no data response condition field of the write optional command is met. In some embodiments, the optional command device may determine whether the write optional command may be implemented based on a condition inherent to the write optional command. For example, the condition indicated in the write optional no data response condition field and/or the condition inherent to the write optional command may include a requested data write location not being available for writing by the optional command device, a requested data write location being out of bounds for a buffer, a cost of implementing the write optional command exceeding a cost threshold, implementation of the write optional command resulting in an exception, error, and/or fault, being denied access to a requested location, etc.[00129] In some embodiments, the optional command device may check the address field of the write optional command configured to indicate a target location for the write optional command. The optional command device may determine whether the
target location is available to be written. In response to determining that the target location is not available to be written, the optional command device may determine that the write optional command may not be implemented. In some embodiments, the optional command device may determine not to implement the write optional command based on the condition of the write optional no data response condition field being that the target location is not available to be written.[00130] In some embodiments, the optional command device may check the address field of the write optional command configured to indicate a target location for the write optional command. The optional command device may determine whether the target location is within boundaries of a buffer. In response to determining that the target location is not within the boundaries of the buffer, the optional command device may determine that the write optional command may not be implemented. In some embodiments, the optional command device may determine not to implement the write optional command based on the condition of the write optional no data response condition field being that the target location is not within the boundaries of the buffer.[00131] In some embodiments, the optional command device may calculate a cost for implementing the write optional command. The cost may be calculated based on a cost of any number and combination of operations for implementing the write optional command. Such operations may include write operations, memory management operations, memory coherency operations, etc. Cost may be measured, for example, on a basis of any number and combination of power, time, cycles, bandwidth, resource requirement, effect on latency for other operations, etc. The optional command device may compare the calculated cost for implementing the write optional command to a cost threshold. The optional command device may determine whether the cost for implementing the write optional command exceeds the cost threshold. In response to determining that the cost for implementing the write optional command exceeds the cost threshold, the optional command device may determine that the write optional command may not be implemented. In some embodiments, the optional command
device may determine not to implement the write optional command based on the condition of the write optional no data response condition field being that the cost for implementing the write optional command exceeds the cost threshold. In some embodiments, the condition of the write optional no data response condition field may include a value indicating the value of the cost threshold.[00132] In some embodiments, the optional command device may determine that implementing the write optional command may result in an exception, error, and/or fault. In response to determining that implementing the write optional command may result in an exception, error, and/or fault, the optional command device may determine that the write optional command may not be implemented. Not implementing the write optional command may avoid an exception, error, and/or fault resulting from implementing the write optional command. In some embodiments, the optional command device may determine not to implement the write optional command based on the condition of the write optional no data response condition field being that implementing the write optional command may result in an exception, error, and/or fault.[00133] In some embodiments, the optional command device may determine that implementing the write optional command may violate a security protocol. For example, the optional command device may determine that implementing the write optional command may violate a security protocol based on denial of access to a target location. In response to determining that implementing the write optional command may violate a security protocol, the optional command device may determine that the write optional command may not be implemented. In some embodiments, the optional command device may determine not to implement the write optional command based on the condition of the write optional no data response condition field being that access to the requested target data and/or location is denied.[00134] In response to determining that the read optional command and/or the write optional command may not be implemented (i.e., determination block 908 = “No”), the optional command device may manage the unsuccessful read optional command and/or the unsuccessful write optional command in block 910. In some embodiments, the optional command terminal device may manage the unsuccessful optional command in block 910. In some embodiments the read optional terminal device may manage the unsuccessful read optional command in block 910. In some embodiments, the write optional terminal device may manage the unsuccessful write optional command in block 910. Managing the unsuccessful read optional command and/or the unsuccessful write optional command is described further herein with reference to the method 1000 of FIG. 10.[00135] In block 912, the optional command device may transmit a read optional no data response (e.g., read optional no data response 410 in FIG. 4) and/or a write optional no data response (e.g., write optional no data response 516 in FIG. 5). In some embodiments, the optional command terminal device may transmit the optional no data response in block 912. In some embodiments the read optional terminal device may transmit the read optional no data response in block 912. In some embodiments, the write optional terminal device may transmit the write optional no data response in block 912. The optional command device may transmit the read optional no data response to the optional command request device, such as the read optional request device. The optional command device may transmit the write optional no data response to the optional command request device, such as the write optional request device.[00136] In block 914, the optional command device may receive the read optional no data response and/or the write optional no data response. In some embodiments, the optional command request device may receive the read optional no data response in block 914. In some embodiments the read optional request device may receive the read optional no data response in block 914. In some embodiments, the write optional request device may receive the write optional no data response in block 914.[00137] In block 916, the optional command device may manage the read optional no data response and/or the write optional no data response. In some embodiments, the optional command request device may manage the read optional no data response in block 916. In some embodiments the read optional request device may manage the read optional no data response in block 916. In some embodiments, the write optional request device may manage the write optional no data response in block 916. Managing the read optional no data response and/or the write optional no data response is described further herein with reference to the method 1100a of FIG. 11A and the method 1100b of FIG. 11B.[00138] In response to determining that the read optional command and/or the write optional command may be implemented (i.e., determination block 908 = “Yes”), the optional command device may implement the successful read optional command and/or the successful write optional command in block 918. In some embodiments, the optional command terminal device may implement the successful optional command in block 918. In some embodiments the read optional terminal device may implement the successful read optional command in block 918. In some embodiments, the write optional terminal device may implement the successful write optional command in block 918. In some embodiments, implementing the successful read optional command may include returning the target data of the read optional command to the optional command device. In some embodiments, implementing the successful write optional command may include writing the target data of the write optional command to the optional command device or another device to which the optional command device may pass the write optional command.[00139] FIG. 10 illustrates a method 1000 for managing unsuccessful read optional and/or write optional commands according to an embodiment. With reference to FIGS. 1-10, the method 1000 may be implemented in a computing device (e.g., computing device 100 in FIG. 1), in hardware, in software executing in a processor, or in a combination of a software-configured processor and dedicated hardware (e.g., CPU 104, memory 106, communication interface 108, memory interface 110, peripheral device interface 120, processor 124 in FIG. 1, system hub 200, system cache 202, cache controller 204, protocol converter 208, processors 206, 210, 212, 214, memory interface 216, subsystems 218, 220, 222, 232, 234, NoC 224, memory controller 226 in FIG. 2, processor 300, processor cores 302, 304, 306, 308, processor core caches 310, 312, 314, 316, shared processor core caches 320, 322, processor shared cache 320, shared system cache 340 in FIG. 3, read optional request device 400 in FIGS. 4 and 6-8, read optional terminal device 402 in FIGS. 4, 6, and 7, write optional request device 500 in FIGS. 5 and 8, write optional terminal device 502 in FIG. 5, read/write optional terminal device 800 in FIG. 8). In order to encompass the alternative configurations enabled in various embodiments, the hardware implementing the method 1000 is referred to herein as an “optional command device.” In some embodiments, the method 1000 may be implemented as part of block 910 in the method of FIG. 9. In the example illustrated in FIG. 10, blocks and lines shown using broken lines are portions of the method 1000 that may be optionally included in any combination.[00140] In optional block 1002, the optional command device may interpret a read optional terminal device ID, a read optional no data response condition, a write optional terminal device ID, and/or a write optional no data response condition of a read optional command (e.g., read optional command 408 in FIG. 4) and/or a write optional command (e.g., write optional command 510 in FIG. 5). In some embodiments, an optional command terminal device may interpret the optional terminal device ID and/or the optional no data response condition of the read optional command in block 1002. In some embodiments a read optional terminal device may interpret the read optional terminal device ID and/or the read optional no data response condition of the read optional command in block 1002. In some embodiments, a write optional terminal device may interpret the write optional terminal device ID and/or the write optional no data response condition of the write optional command in block 1002. As described herein, in some embodiments, the read optional command may include a read optional terminal device ID field configured to indicate which one or more of read optional terminal devices may
respond to the read optional command with a read optional no data response (e.g., read optional no data response 410 in FIG. 4). In some embodiments, the read optional command may include a read optional no data response condition field configured to indicate one or more conditions for which the read optional terminal devices may respond to the read optional command with the read optional no data response. In some embodiments the read optional terminal device ID and/or the read optional no data response condition may be inherent to the read optional command and may not need to be explicitly added as values in the read optional terminal device ID field and/or the read optional no data response condition field. For example, the condition indicated in the read optional no data response condition field and/or the condition inherent to the read optional command may include not locating requested data by the optional command device, requested data being out of bounds for a buffer, a cost of implementing the read optional command exceeding a cost threshold, implementation of the read optional command resulting in an exception, error, and/or fault, being denied access to a requested location and/or target data, etc.[00141] As described herein, in some embodiments, the write optional command may include a write optional terminal device ID field configured to indicate which one or combination of write optional terminal devices may respond to the write optional command with a write optional no data response (e.g., write optional no data response 516 in FIG. 5). In some embodiments, the write optional command may include a write optional no data response condition field configured to indicate one or more conditions for which the write optional terminal devices may respond to the write optional command with the write optional no data response. In some embodiments the write optional terminal device ID and/or the write optional no data response condition may be inherent to the write optional command and may not need to be explicitly added as values in the write optional terminal device ID field and/or the write optional no data response condition field. For example, the condition indicated in the write optional no data response condition field and/or the condition inherent to the write optional command may include a requested data write location not being
available for writing by the optional command device, a requested data write location being out of bounds for a buffer, a cost of implementing the write optional command exceeding a cost threshold, implementation of the write optional command resulting in an exception, error, and/or fault, being denied access to a requested location, etc.[00142] In optional determination block 1004, the optional command device may determine whether the optional command device is a read optional command terminal device and/or a write optional command terminal device. In some embodiments, the optional command terminal device may determine whether it is an optional terminal device in optional determination block 1004. In some embodiments, the read optional terminal device may determine whether it is a read optional terminal device in optional determination block 1004. In some embodiments, the write optional terminal device may determine whether it is a write optional terminal device in optional determination block 1004. In some embodiments the optional command device may determine whether it is a read optional command terminal device and/or a write optional command terminal device based on the interpretation of the read optional terminal device ID and/or the write optional terminal device ID in block 1002. In some embodiments optional determination block 1004 may be implemented in response to determining that the read optional command and/or the write optional command may not be implemented (i.e., determination block 908 = “No” in the method of FIG. 9). In some embodiments optional determination block 1004 may be implemented following interpreting a read optional terminal device ID, a read optional no data response condition, a write optional terminal device ID, and/or a write optional no data response condition of a read optional command and/or a write optional command in optional block 1002. In some embodiments optional determination block 1004 may be implemented in response to determining that a read optional no data response condition and/or a write optional no data response condition is met (e.g., optional determination block 1006 = “Yes”) as described below.
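To illustrate how a device might carry out the interpretations of block 1002 and the determinations of blocks 1004 and 1006, the following C fragment decodes a hypothetical packing of the read optional terminal device ID field and the read optional no data response condition field. The bit layout and condition codes are assumptions introduced here; the described embodiments do not define a particular encoding.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical field layout within a 32-bit command word. */
#define TERM_ID_SHIFT 0
#define TERM_ID_MASK  0xFFu   /* read optional terminal device ID field */
#define COND_SHIFT    8
#define COND_MASK     0xFu    /* no data response condition field       */

/* Illustrative condition codes, usable as a bitmask. */
enum {
    COND_DATA_MISSING  = 0x1,
    COND_OUT_OF_BOUNDS = 0x2,
    COND_COST_EXCEEDED = 0x4,
    COND_ACCESS_DENIED = 0x8,
};

/* Block 1004: does this device's ID match the terminal device ID
 * carried in the command? */
bool is_terminal_device(uint32_t cmd, uint8_t my_id)
{
    return ((cmd >> TERM_ID_SHIFT) & TERM_ID_MASK) == my_id;
}

/* Block 1006: is the condition the device observed among the
 * conditions for which the command permits a no data response? */
bool no_data_condition_met(uint32_t cmd, uint8_t observed)
{
    uint8_t permitted = (uint8_t)((cmd >> COND_SHIFT) & COND_MASK);
    return (permitted & observed) != 0;
}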
[00143] In optional determination block 1006, the optional command device may determine whether a read optional no data response condition and/or a write optional no data response condition is met. In some embodiments, the optional command terminal device may determine whether an optional no data response condition is met in optional determination block 1006. In some embodiments, the read optional terminal device may determine whether a read optional no data response condition is met in optional determination block 1006. In some embodiments, the write optional terminal device may determine whether a write optional no data response condition is met in optional determination block 1006. In some embodiments the optional command device may determine whether a read optional no data response condition and/or a write optional no data response condition is met based on the interpretation of the read optional no data response condition and/or the write optional no data response condition in block 1002. In some embodiments optional determination block 1006 may be implemented in response to determining that the read optional command and/or the write optional command may not be implemented (i.e., determination block 908 = “No” in the method of FIG. 9). In some embodiments optional determination block 1006 may be implemented following interpreting a read optional terminal device ID, a read optional no data response condition, a write optional terminal device ID, and/or a write optional no data response condition of a read optional command and/or a write optional command in optional block 1002. In some embodiments, optional determination block 1006 may be implemented in response to determining that the optional command device is a read optional command terminal device and/or a write optional command terminal device (i.e., optional determination block 1004 = “Yes”).[00144] In some embodiments, the optional command device may check an address field of the read optional command configured to indicate a target data and/or location for the read optional command. The optional command device may determine whether the target data is at the indicated location. In response to determining that the target data is not at the indicated location, the optional command device may
determine that the read optional no data response condition is met. In some embodiments, the optional command device may determine that the read optional no data response condition is met based on the condition of the read optional no data response condition field being that the target data is not at the location.[00145] In some embodiments, the optional command device may check the address field of the read optional command configured to indicate a target data and/or location for the read optional command. The optional command device may determine whether the indicated location of the target data is within boundaries of a buffer. In response to determining that the indicated location of the target data is not within the boundaries of the buffer, the optional command device may determine that the read optional no data response condition is met. In some embodiments, the optional command device may determine that the read optional no data response condition is met based on the condition of the read optional no data response condition field being that the indicated location of the target data is not within the boundaries of the buffer.[00146] In some embodiments, the optional command device may calculate a cost for implementing the read optional command. The cost may be calculated based on a cost of any number and combination of operations for implementing the read optional command. Such operations may include read operations, fetch operations, memory management operations, memory coherency operations, etc. Cost may be measured, for example, on a basis of any number and combination of power, time, cycles, bandwidth, resource requirement, effect on latency for other operations, etc. The optional command device may compare the calculated cost for implementing the read optional command to a cost threshold. The optional command device may determine whether the cost for implementing the read optional command exceeds the cost threshold. In response to determining that the cost for implementing the read optional command exceeds the cost threshold, the optional command device may determine that the read optional no data response condition is met. In some embodiments, the optional command device may determine that the read optional no data response
condition is met based on the condition of the read optional no data response condition field being that the cost for implementing the read optional command exceeds the cost threshold. In some embodiments, the condition of the read optional no data response condition field may include a value indicating the value of the cost threshold.[00147] In some embodiments, the optional command device may determine that implementing the read optional command may result in an exception, error, and/or fault. In response to determining that implementing the read optional command may result in an exception, error, and/or fault, the optional command device may determine that the read optional no data response condition is met. In some embodiments, the optional command device may determine that the read optional no data response condition is met based on the condition of the read optional no data response condition field being that implementing the read optional command may result in an exception, error, and/or fault.[00148] In some embodiments, the optional command device may determine that implementing the read optional command may violate a security protocol. For example, the optional command device may determine that implementing the read optional command may violate a security protocol based on denial of access to target data and/or a location in memory. In response to determining that implementing the read optional command may violate a security protocol, the optional command device may determine that the read optional no data response condition is met. In some embodiments, the optional command device may determine that the read optional no data response condition is met based on the condition of the read optional no data response condition field being that access to the requested target data and/or location is denied.[00149] In some embodiments, the optional command device may check an address field of the write optional command configured to indicate a target location for the write optional command. The optional command device may determine whether the target location is available to be written. In response to determining that the target
location is not available to be written, the optional command device may determine that the write optional no data response condition is met. In some embodiments, the optional command device may determine that the write optional no data response condition is met based on the condition of the write optional no data response condition field being that the target location is not available to be written.[00150] In some embodiments, the optional command device may check the address field of the write optional command configured to indicate a target location for the write optional command. The optional command device may determine whether the target location is within boundaries of a buffer. In response to determining that the target location is not within the boundaries of the buffer, the optional command device may determine that the write optional no data response condition is met. In some embodiments, the optional command device may determine that the write optional no data response condition is met based on the condition of the write optional no data response condition field being that the target location is not within the boundaries of the buffer.[00151] In some embodiments, the optional command device may calculate a cost for implementing the write optional command. The cost may be calculated based on a cost of any number and combination of operations for implementing the write optional command. Such operations may include write operations, memory management operations, memory coherency operations, etc. Cost may be measured, for example, on a basis of any number and combination of power, time, cycles, bandwidth, resource requirement, effect on latency for other operations, etc. The optional command device may compare the calculated cost for implementing the write optional command to a cost threshold. The optional command device may determine whether the cost for implementing the write optional command exceeds the cost threshold. In response to determining that the cost for implementing the write optional command exceeds the cost threshold, the optional command device may determine that the write optional no data response condition is met. In some embodiments, the optional command device
may determine that the write optional no data response condition is met based on the condition of the write optional no data response condition field being that the cost for implementing the write optional command exceeds the cost threshold. In some embodiments, the condition of the write optional no data response condition field may include a value indicating the value of the cost threshold.[00152] In some embodiments, the optional command device may determine that implementing the write optional command may result in an exception, error, and/or fault. In response to determining that implementing the write optional command may result in an exception, error, and/or fault, the optional command device may determine that the write optional no data response condition is met. In some embodiments, the optional command device may determine that the write optional no data response condition is met based on the condition of the write optional no data response condition field being that implementing the write optional command may result in an exception, error, and/or fault.[00153] In some embodiments, the optional command device may determine that implementing the write optional command may violate a security protocol. For example, the optional command device may determine that implementing the write optional command may violate a security protocol based on denial of access to a target location. In response to determining that implementing the write optional command may violate a security protocol, the optional command device may determine that the write optional no data response condition is met. In some embodiments, the optional command device may determine that the write optional no data response condition is met based on the condition of the write optional no data response condition field being that access to the requested target data and/or location is denied.[00154] In block 1008, the optional command device may generate a read optional no data response (e.g., read optional no data response 410 in FIG. 4) and/or a write optional no data response (e.g., write optional no data response 516 in FIG. 5). In some embodiments, the optional command terminal device may generate an optional no
data response in block 1008. In some embodiments the read optional terminal device may generate a read optional no data response in block 1008. In some embodiments the write optional terminal device may generate a write optional no data response in block 1008. As described herein, the read optional no data response may include a read optional no data response field configured to indicate the type of the response. In some embodiments, the read optional no data response may include a read optional no data response terminal device ID field configured to indicate which read optional terminal device responds to the read optional command with the read optional no data response. In some embodiments, the read optional no data response may include a read optional no data response condition field configured to indicate a condition for which the read optional terminal device responds to the read optional command with the read optional no data response. In some embodiments, the operations in block 1008 may be implemented in response to determining that the read optional command and/or the write optional command may not be implemented (i.e., determination block 908 = “No” in the method 900 described with reference to FIG. 9). In some embodiments, the operations in block 1008 may be implemented in response to determining that the optional command device is a read optional command terminal device and/or a write optional command terminal device (i.e., optional determination block 1004 = “Yes”), or in response to determining that a read optional no data response condition and/or a write optional no data response condition is met (e.g., optional determination block 1006 = “Yes”).[00155] As described herein, the write optional no data response may include a write optional no data response field configured to indicate the type of the response. In some embodiments, the write optional no data response may include a write optional no data response terminal device ID field configured to indicate which write optional terminal device responds to the write optional command with the write optional no data response. In some embodiments, the write optional no data response may include a write optional no data response condition field configured to indicate a condition for which the write optional terminal device responds to the write optional command with
the write optional no data response. The optional command device may transmit the read optional no data response and/or the write optional no data response in block 912 of the method 900 in FIG. 9. In some embodiments, as part of generating a read optional no data response and/or a write optional no data response in block 1008 and/or transmitting the read optional no data response and/or the write optional no data response in block 912, the optional command device may terminate the read optional command and/or write optional command. By terminating the read optional command and/or write optional command, the optional command device may prevent the optional command from being forwarded to a device along a transaction path.[00156] In response to determining that the optional command device is not a read optional command terminal device and/or a write optional command terminal device (i.e., optional determination block 1004 = “No”), or in response to determining that a read optional no data response condition and/or a write optional no data response condition is not met (e.g., optional determination block 1006 = “No”), the optional command device may convert the read optional command and/or the write optional command to a read command and/or a write command in optional block 1010. In some embodiments, the read command and/or the write command may be a conventional read command and/or a conventional write command. To convert the read optional command and/or the write optional command, the optional command device may use information from the read optional command and/or the write optional command corresponding to data needed for the read command and/or the write command to generate the read command and/or the write command. The optional block 1010 may be implemented by the optional terminal device, for example, for devices further along the transaction path for which the read optional command and/or the write optional command may not apply, may not be supported, etc. In some embodiments, as part of converting the read optional command and/or the write optional command to a read command and/or a write command in optional block 1010, the optional command device may terminate the read optional command and/or write optional
command, the optional command device may prevent the optional command from being forwarded to a device along a transaction path. In some embodiments, the optional command terminal device may convert the optional command to a command in optional block 1010. In some embodiments, the read optional terminal device may convert the read optional command to a read command in optional block 1010. In some embodiments, the write optional terminal device may convert the write optional command to a write command in optional block 1010.[00157] In optional block 1012, the optional command device may forward a read command and/or a write command along the transaction path of the read optional command and/or the write optional command. In some embodiments, the read command and/or the write command may be the read optional command and/or the write optional command. In some embodiments, the read command and/or the write command may be the conventional read command and/or the conventional write command to which the read optional command and/or the write optional command was converted in optional block 1010. In some embodiments, the optional command terminal device may forward the command along the transaction path of the optional command in block 1012. In some embodiments, the read optional terminal device may forward the read command along the transaction path of the read optional command in block 1012. In some embodiments, the write optional terminal device may forward the write command along the transaction path of the write optional command in block 1012. In some embodiments, optional block 1012 may be implemented in response to determining that the optional command device is not a read optional command terminal device and/or a write optional command terminal device (i.e., optional determination block 1004 = “No”). In some embodiments, optional block 1012 may be implemented in response to determining that a read optional no data response condition and/or a write optional no data response condition is not met (e.g., optional determination block 1006 = “No”). In some embodiments, optional block 1012 may be implemented after converting the read optional command and/or the write optional command to a read command and/or a write command in
optional block 1010. In some embodiments, the optional command device may optionally receive the read optional command and/or the write optional command in block 906 of the method 900 of FIG. 9. In some embodiments, a device further along the transaction path may receive the conventional read command and/or the conventional write command and respond to receiving the conventional read command and/or the conventional write command in a known manner.[00158] FIGS. 11A and 11B illustrate methods 1100a, 1100b for managing read optional and/or write optional no data responses according to an embodiment. With reference to FIGS. 1-11B, the methods 1100a, 1100b may be implemented in a computing device (e.g., computing device 100 in FIG. 1), in hardware, in software executing in a processor, or in a combination of a software-configured processor and dedicated hardware (e.g., CPU 104, memory 106, communication interface 108, memory interface 110, peripheral device interface 120, processor 124 in FIG. 1, system hub 200, system cache 202, cache controller 204, protocol converter 208, processors 206, 210, 212, 214, memory interface 216, subsystems 218, 220, 222, 232, 234, NoC 224, memory controller 226 in FIG. 2, processor 300, processor cores 302, 304, 306, 308, processor core caches 310, 312, 314, 316, shared processor core caches 320, 322, processor shared cache 320, shared system cache 340 in FIG. 3, read optional request device 400 in FIGS. 4 and 6-8, read optional terminal device 402 in FIGS. 4, 6, and 7, write optional request device 500 in FIGS. 5 and 8, write optional terminal device 502 in FIG. 5, read/write optional terminal device 800 in FIG. 8). In order to encompass the alternative configurations enabled in various embodiments, the hardware implementing the methods 1100a, 1100b is referred to herein as an “optional command device.” In some embodiments, the methods 1100a, 1100b may be implemented as part of block 916 in the method of FIG. 9. In the example illustrated in FIG. 11A, blocks and lines shown using broken lines are portions of the method 1100a that may be optionally included in any combination.
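Before turning to the methods 1100a and 1100b, the decision flow of blocks 1004-1012 described above can be summarized in code. The following is a hedged Python sketch, not the patent's protocol interface: all names (OptionalCommand, NoDataResponse, DemoDevice, handle) and the backing-store behavior are invented for illustration.

```python
# Hedged sketch (invented names, not the patent's protocol API) of blocks
# 1004-1012: a terminal device that meets a no-data condition generates a
# no-data response and terminates the optional command; a non-terminal
# device converts the command to a conventional one and forwards it.

from dataclasses import dataclass
from typing import Optional

@dataclass
class OptionalCommand:
    kind: str                     # "read_optional" or "write_optional"
    address: int
    data: Optional[bytes] = None  # payload for write optional commands

@dataclass
class NoDataResponse:
    response_type: str       # response field indicating the type of response
    terminal_device_id: int  # ID field naming the responding terminal device
    condition: str           # condition field naming why no data is returned

class DemoDevice:
    """Stand-in for an optional command device; all behavior is invented."""

    def __init__(self, device_id, terminal=True, backing=None):
        self.device_id = device_id
        self.terminal = terminal      # block 1004: terminal device or not
        self.backing = backing or {}  # address -> bytes

    def handle(self, cmd: OptionalCommand):
        if self.terminal:
            # Block 1006: here the no-data condition is "data not present";
            # cost, fault, or security conditions could be checked similarly.
            if cmd.kind == "read_optional" and cmd.address not in self.backing:
                # Block 1008: generate the no-data response and terminate the
                # command instead of forwarding it along the transaction path.
                return NoDataResponse(f"{cmd.kind}_no_data",
                                      self.device_id, "data_not_present")
            if cmd.kind == "read_optional":
                return self.backing[cmd.address]  # normal read completion
            self.backing[cmd.address] = cmd.data  # normal write completion
            return "write_complete"
        # Blocks 1010/1012: strip the "optional" attribute and forward the
        # resulting conventional command to the next device on the path.
        conventional_kind = cmd.kind.replace("_optional", "")
        return ("forwarded", conventional_kind, cmd.address)

dev = DemoDevice(device_id=7, backing={0x1000: b"\xAB"})
print(dev.handle(OptionalCommand("read_optional", 0x1000)))  # data returned
print(dev.handle(OptionalCommand("read_optional", 0x2000)))  # no-data response
```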
[00159] With reference to the method 1100a, in block 1102, the optional command device may interpret a read optional no data response (e.g., read optional no data response 410 in FIG. 4) and/or a write optional no data response (e.g., write optional no data response 516 in FIG. 5). In some embodiments, an optional command request device may interpret the optional no data response in block 1102. In some embodiments, a read optional request device may interpret the read optional no data response in block 1102. In some embodiments, a write optional request device may interpret the write optional no data response in block 1102. As described herein, the read optional no data response may be an acceptable state, such as an “OK” state, for the optional command device, rather than an error or fault state triggered by an error or fault signal for failure of a memory access request. Not implementing a read optional command (e.g., read optional command 408 in FIG. 4) may be an acceptable state in that the implementation of the read optional command may not be required, even, for example, when implementing the read optional command may be critical. As such, not implementing the read optional command may not be an error or fault state triggered by an error or fault signal that may be configured to indicate no, incomplete, improper, etc. implementation of a required memory access. The optional command device may be configured to retry and/or abandon the read optional command in response. In some embodiments, the optional command device may be preconfigured with how to respond to the read optional command no data response. In some embodiments, the optional command request device may determine how to respond to the optional command no data response. The read optional no data response may include a read optional no data response field configured to indicate the type of the response. In some embodiments, the read optional no data response may include a read optional no data response terminal device ID field configured to indicate which read optional terminal device responds to the read optional command with the read optional no data response. In some embodiments, the read optional no data response may include a read optional no data response condition field configured to indicate a
condition for which the read optional terminal device responds to the read optional command with the read optional no data response.[00160] As described herein, the write optional no data response may be an acceptable state, such as an “OK” state, for the optional command device, rather than an error or fault state triggered by an error or fault signal for failure of a memory access request. Not implementing a write optional command (e.g., write optional command 510 in FIG. 5) may be an acceptable state in that the implementation of the write optional command may not be required, even, for example, when implementing the write optional command may be critical. As such, not implementing the write optional command may not be an error or fault state triggered by an error or fault signal that may be configured to indicate no, incomplete, improper, etc. implementation of a required memory access. The optional command device may be configured to retry and/or abandon the write optional command in response. In some embodiments, the optional command device may be preconfigured with how to respond to the write optional command no data response. In some embodiments, the optional command request device may determine how to respond to the write optional command no data response. The write optional no data response may include a write optional no data response field configured to indicate the type of the response. In some embodiments, the write optional no data response may include a write optional no data response terminal device ID field configured to indicate which write optional terminal device responds to the write optional command with the write optional no data response. In some embodiments, the write optional no data response may include a write optional no data response condition field configured to indicate a condition for which the write optional terminal device responds to the write optional command with the write optional no data response.[00161] In optional block 1104, the optional command device may wait to reissue a read optional command (e.g., read optional command 408 in FIG. 4) and/or a write optional command (e.g., write optional command 510 in FIG. 5). In some
embodiments, the optional command request device may wait to reissue the optional command in block 1104. In some embodiments, the read optional request device may wait to reissue the read optional command in block 1104. In some embodiments, the write optional request device may wait to reissue the write optional command in block 1104. In some embodiments, the units and the number of units to wait may be predetermined. In some embodiments, the units and the number of units to wait may be determined from the interpretation of the read optional no data response and/or the write optional no data response in block 1102. The optional command device may generate the read optional command and/or the write optional command in block 902 of the method 900 in FIG. 9.[00162] With reference to the method 1100b, in block 1102, the optional command device may interpret a read optional no data response (e.g., read optional no data response 410 in FIG. 4) and/or a write optional no data response (e.g., write optional no data response 516 in FIG. 5), as described herein with reference to block 1102 of the method 1100a in FIG. 11A. In block 1106, the optional command device may abandon the read optional command (e.g., read optional command 408 in FIG. 4) and/or the write optional command (e.g., write optional command 510 in FIG. 5). In some embodiments, the optional command request device may abandon the optional command in block 1106. In some embodiments, the read optional request device may abandon the read optional command in block 1106. In some embodiments, the write optional request device may abandon the write optional command in block 1106. In some embodiments, the optional command device may abandon the read optional command and/or the write optional command through inaction, such as by not reissuing the read optional command and/or the write optional command. In some embodiments, the optional command device may abandon the read optional command and/or the write optional command through affirmative action, such as by removing the read optional command and/or the write optional command from a schedule and/or queue. In some embodiments, for a read optional command and/or a write optional command generated on behalf of an instruction, that instruction may record the read
optional no data response and/or a write optional no data response according to the ISA, which may include writing a value to a register, status bit, or other signaling mechanism.[00163] A read optional command and/or write optional command system in accordance with the various embodiments (including, but not limited to, embodiments described above with reference to FIGs. 1-11B) may be implemented in a wide variety of computing systems including mobile computing devices, an example of which suitable for use with the various embodiments is illustrated in FIG. 12. The mobile computing device 1200 may include a processor 1202 coupled to a touchscreen controller 1204 and an internal memory 1206. The processor 1202 may be one or more multicore integrated circuits designated for general or specific processing tasks. The internal memory 1206 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. Examples of memory types that can be leveraged include but are not limited to DDR, LPDDR, GDDR, WIDEIO, RAM, SRAM, DRAM, P-RAM, R-RAM, M-RAM, STT-RAM, and embedded DRAM. The touchscreen controller 1204 and the processor 1202 may also be coupled to a touchscreen panel 1212, such as a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared sensing touchscreen, etc. Additionally, the display of the mobile computing device 1200 need not have touch screen capability.[00164] The mobile computing device 1200 may have one or more radio signal transceivers 1208 (e.g., Peanut, Bluetooth, ZigBee, Wi-Fi, RF radio) and antennae 1210, for sending and receiving communications, coupled to each other and/or to the processor 1202. The transceivers 1208 and antennae 1210 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile computing device 1200 may include a cellular network wireless modem chip 1216 that enables communication via a cellular network and is coupled to the processor.
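Returning to the request-device behavior of methods 1100a and 1100b described in paragraphs [00159]-[00162] above, the retry-or-abandon policy can be illustrated in code. The following is a hedged Python sketch with invented names (RequestDevice, issue, status_bit); it is not the patent's interface, and the status bit merely stands in for the ISA-defined register/status-bit recording mentioned in paragraph [00162].

```python
# Hedged sketch of the request-device side of methods 1100a/1100b (invented
# names): interpret a no-data response as an acceptable "OK"-style outcome,
# then either wait and reissue the optional command (blocks 1102/1104) or
# abandon it and record the outcome in a status bit (blocks 1102/1106).

import time

class RequestDevice:
    def __init__(self, retry=True, wait_s=0.001, max_retries=3):
        self.retry = retry          # preconfigured policy: retry or abandon
        self.wait_s = wait_s        # predetermined wait before reissuing
        self.max_retries = max_retries
        self.status_bit = 0         # records a no-data outcome for software

    def issue(self, send_command, cmd):
        for attempt in range(self.max_retries + 1):
            resp = send_command(cmd)
            # Block 1102: interpret the response; a no-data response is an
            # acceptable state, not an error or fault.
            if getattr(resp, "response_type", "").endswith("_no_data"):
                if self.retry and attempt < self.max_retries:
                    time.sleep(self.wait_s)  # block 1104: wait, then reissue
                    continue
                self.status_bit = 1          # block 1106: abandon and record
                return None
            return resp                      # data or completion received

class _NoData:  # minimal stand-in for a no-data response object
    response_type = "read_optional_no_data"

rd = RequestDevice(retry=True, max_retries=2)
print(rd.issue(lambda cmd: _NoData(), cmd="read_optional@0x1000"))  # None
print("status bit:", rd.status_bit)                                 # 1
```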
[00165] The mobile computing device 1200 may include a peripheral device connection interface 1218 coupled to the processor 1202. The peripheral device connection interface 1218 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as Universal Serial Bus (USB), FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 1218 may also be coupled to a similarly configured peripheral device connection port (not shown).[00166] The mobile computing device 1200 may also include speakers 1214 for providing audio outputs. The mobile computing device 1200 may also include a housing 1224, constructed of plastic, metal, or a combination of materials, for containing all or some of the components described herein. The mobile computing device 1200 may include a power source 1222 coupled to the processor 1202, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 1200. The mobile computing device 1200 may also include a physical button 1224 for receiving user inputs. The mobile computing device 1200 may also include a power button 1226 for turning the mobile computing device 1200 on and off.[00167] A read optional command and/or write optional command system in accordance with the various embodiments (including, but not limited to, embodiments described above with reference to FIGs. 1-11B) may be implemented in a wide variety of computing systems including a laptop computer 1300, an example of which is illustrated in FIG. 13. Many laptop computers include a touchpad touch surface 1317 that serves as the computer’s pointing device, and thus may receive drag, scroll, and flick gestures similar to those implemented on computing devices equipped with a touch screen display and described above. A laptop computer 1300 will typically include a processor 1302 coupled to volatile memory 1312 and a large capacity
nonvolatile memory, such as a disk drive 1313 or Flash memory. Additionally, the laptop computer 1300 may have one or more antennas 1308 for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 1316 coupled to the processor 1302. The computer 1300 may also include a floppy disc drive 1314 and a compact disc (CD) drive 1315 coupled to the processor 1302. In a notebook configuration, the computer housing includes the touchpad 1317, the keyboard 1318, and the display 1319 all coupled to the processor 1302. Other configurations of the computing device may include a computer mouse or trackball coupled to the processor (e.g., via a USB input) as are well known, which may also be used in conjunction with the various embodiments.[00168] A read optional command and/or write optional command system in accordance with the various embodiments (including, but not limited to, embodiments described above with reference to FIGs. 1-11B) may also be implemented in fixed computing systems, such as any of a variety of commercially available servers. An example server 1400 is illustrated in FIG. 14. Such a server 1400 typically includes one or more multicore processor assemblies 1401 coupled to volatile memory 1402 and a large capacity nonvolatile memory, such as a disk drive 1404. As illustrated in FIG. 14, multicore processor assemblies 1401 may be added to the server 1400 by inserting them into the racks of the assembly. The server 1400 may also include a floppy disc drive, compact disc (CD) or digital versatile disc (DVD) disc drive 1406 coupled to the processor 1401. The server 1400 may also include network access ports 1403 coupled to the multicore processor assemblies 1401 for establishing network interface connections with a network 1405, such as a local area network coupled to other broadcast system computers and servers, the Internet, the public switched telephone network, and/or a cellular data network (e.g., CDMA, TDMA, GSM, PCS, 3G, 4G, LTE, 5G or any other type of cellular data network).[00169] Computer program code or “program code” for execution on a programmable processor for carrying out operations of the various embodiments may be written in a
high level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages. Program code or programs stored on a computer readable storage medium as used in this application may refer to machine language code (such as object code) whose format is understandable by a processor.[00170] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular.[00171] The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various embodiments may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.[00172] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital
signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.[00173] In one or more embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or a non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module that may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a
method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.[00174] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and implementations without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments and implementations described herein, but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
Integrated circuits and methods of manufacturing such circuits are disclosed herein that feature metal line-via matrix insertion after place and route processes are performed and/or completed for the integrated circuit's layout. The metal line-via matrix consists of one or more additional metal lines and one or more additional vias that are inserted into the integrated circuit's layout at a specific point to lower the current and current density through a first conductive path that has been determined to suffer from electromigration, IR-voltage drop, and/or jitter. Specifically, the metal line-via matrix provides one or more auxiliary conductive paths to divert and carry a portion of the current that would otherwise flow through the first conductive path. This mitigates electromigration issues and IR-voltage drop along the first conductive path. It may also help alleviate problems due to jitter along the path. |
CLAIMS

1. A method of manufacturing an integrated circuit, the method comprising: performing routing of the integrated circuit to generate a plurality of conductive paths across a plurality of metal layers; identifying a first conductive path of the plurality of conductive paths having a current and a current density, the first conductive path including at least a first metal line within a first metal layer; and after performing the steps of routing and identifying, forming an auxiliary conductive path that includes forming a first via, a second metal line, and a second via, the first via electrically coupled to the second metal line that is in turn electrically coupled to the second via, the second metal line positioned within a second metal layer that is different than the first metal layer, the first and second vias positioned between the first metal layer and the second metal layer, and wherein the first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces the current and the current density of the first conductive path by diverting a portion of the current flowing through the first conductive path.

2. The method of claim 1, wherein a path length of the auxiliary conductive path is different than a path length of the first conductive path.

3. The method of claim 2, wherein the path length of the auxiliary conductive path is greater than the path length of the first conductive path.

4. The method of claim 1, further comprising: after performing the steps of routing and identifying, forming a second auxiliary conductive path that includes forming a third metal line, a fourth metal line, and a fifth metal line, the third metal line electrically coupled to the fourth metal line that is in turn electrically coupled to the fifth metal line, the third, fourth, and fifth metal lines all positioned within the second metal layer, and the third and fifth metal lines electrically couple the fourth metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.

5. The method of claim 4, wherein a path length for each of the first conductive path, the auxiliary conductive path, and the second auxiliary conductive path are different from one another.

6. The method of claim 1, further comprising: after performing the steps of routing and identifying, forming a second auxiliary conductive path that includes forming a third via, a third metal line, and a fourth via, the third via electrically coupled to the third metal line that is in turn electrically coupled to the fourth via, the third metal line positioned within a third metal layer that is different than the first and second metal layers, the third and fourth vias positioned between the second and third metal layers, and the third and fourth vias electrically couple the third metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.

7. The method of claim 6, wherein a path length for each of the first conductive path, the auxiliary conductive path, and the second auxiliary conductive path are different from one another.

8.
The method of claim 1, wherein forming the auxiliary conductive path further includes forming a third and fourth metal line in the second metal layer, a first end of the second metal line electrically coupled to the first via through the third metal line, and a second end of the second metal line electrically coupled to the second via through the fourth metal line.

9. The method of claim 1, wherein an end of the second metal line includes a metal extension piece that extends beyond a juncture where the second metal line is coupled to a via, the extension piece adapted to collect atoms and/or form a void due to electromigration.

10. The method of claim 1, wherein the auxiliary conductive path is formed by inserting the first and second vias and the second metal line into a layout design of the integrated circuit after placing and routing of the integrated circuit has been performed.

11. An integrated circuit comprising: a first conductive path that includes at least a first metal line within a first metal layer; and at least one auxiliary conductive path that includes a first via, a second metal line, and a second via, the first via electrically coupled to the second metal line that is in turn electrically coupled to the second via, the second metal line positioned within a second metal layer that is different than the first metal layer, the first and second vias positioned between the first metal layer and the second metal layer, and wherein the first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces the current and the current density of the first conductive path by diverting a portion of the current flowing through the first conductive path.

12. The integrated circuit of claim 11, wherein the auxiliary conductive path is formed by inserting the first and second vias and the second metal line into a layout design of the integrated circuit after placing and routing of the integrated circuit has been performed.

13. The integrated circuit of claim 11, wherein a path length of the auxiliary conductive path is different than a path length of the first conductive path.

14. The integrated circuit of claim 13, wherein the path length of the auxiliary conductive path is greater than the path length of the first conductive path.

15. The integrated circuit of claim 11, further comprising: a second auxiliary conductive path that includes a third metal line, a fourth metal line, and a fifth metal line, the third metal line electrically coupled to the fourth metal line that is in turn electrically coupled to the fifth metal line, the third, fourth, and fifth metal lines all positioned within the second metal layer, and the third and fifth metal lines electrically couple the fourth metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.

16. The integrated circuit of claim 15, wherein a path length for each of the first conductive path, the auxiliary conductive path, and the second auxiliary conductive path are different from one another.

17.
The integrated circuit of claim 11, further comprising: a second auxiliary conductive path that includes a third via, a third metal line, and a fourth via, the third via electrically coupled to the third metal line that is in turn electrically coupled to the fourth via, the third metal line positioned within a third metal layer that is different than the first and second metal layers, the third and fourth vias positioned between the second and third metal layers, and the third and fourth vias electrically couple the third metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.

18. The integrated circuit of claim 17, wherein a path length for each of the first conductive path, the auxiliary conductive path, and the second auxiliary conductive path are different from one another.

19. The integrated circuit of claim 11, wherein the auxiliary conductive path further includes a third and fourth metal line in the second metal layer, a first end of the second metal line electrically coupled to the first via through the third metal line, and a second end of the second metal line electrically coupled to the second via through the fourth metal line.

20. The integrated circuit of claim 11, wherein an end of the second metal line includes a metal extension piece that extends beyond a juncture where the second metal line is coupled to a via, the extension piece adapted to collect atoms and/or form a void due to electromigration.

21. An integrated circuit prepared by the process comprising: performing routing of the integrated circuit to generate a plurality of conductive paths across a plurality of metal layers; identifying a first conductive path of the plurality of conductive paths having a current and a current density, the first conductive path including at least a first metal line within a first metal layer; and after performing the steps of routing and identifying, forming an auxiliary conductive path that includes forming a first via, a second metal line, and a second via, the first via electrically coupled to the second metal line that is in turn electrically coupled to the second via, the second metal line positioned within a second metal layer that is different than the first metal layer, the first and second vias positioned between the first metal layer and the second metal layer, and wherein the first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces the current and the current density of the first conductive path by diverting a portion of the current flowing through the first conductive path.

22. The integrated circuit of claim 21, wherein a path length of the auxiliary conductive path is different than a path length of the first conductive path.

23. The integrated circuit of claim 22, wherein the path length of the auxiliary conductive path is greater than the path length of the first conductive path.

24.
The integrated circuit of claim 21, the process further comprising: after performing the steps of routing and identifying, forming a second auxiliary conductive path that includes forming a third metal line, a fourth metal line, and a fifth metal line, the third metal line electrically coupled to the fourth metal line that is in turn electrically coupled to the fifth metal line, the third, fourth, and fifth metal lines all positioned within the second metal layer, and the third and fifth metal lines electrically couple the fourth metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.

25. The integrated circuit of claim 24, wherein a path length for each of the first conductive path, the auxiliary conductive path, and the second auxiliary conductive path are different from one another.

26. The integrated circuit of claim 21, the process further comprising: after performing the steps of routing and identifying, forming a second auxiliary conductive path that includes forming a third via, a third metal line, and a fourth via, the third via electrically coupled to the third metal line that is in turn electrically coupled to the fourth via, the third metal line positioned within a third metal layer that is different than the first and second metal layers, the third and fourth vias positioned between the second and third metal layers, and the third and fourth vias electrically couple the third metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.

27. The integrated circuit of claim 26, wherein a path length for each of the first conductive path, the auxiliary conductive path, and the second auxiliary conductive path are different from one another.

28. The integrated circuit of claim 21, wherein forming the auxiliary conductive path further includes forming a third and fourth metal line in the second metal layer, a first end of the second metal line electrically coupled to the first via through the third metal line, and a second end of the second metal line electrically coupled to the second via through the fourth metal line.

29. The integrated circuit of claim 21, wherein an end of the second metal line includes a metal extension piece that extends beyond a juncture where the second metal line is coupled to a via, the extension piece adapted to collect atoms and/or form a void due to electromigration.

30. The integrated circuit of claim 21, wherein the auxiliary conductive path is formed by inserting the first and second vias and the second metal line into a layout design of the integrated circuit after placing and routing of the integrated circuit has been performed. |
MITIGATING ELECTROMIGRATION, IN-RUSH CURRENT EFFECTS, IR-VOLTAGE DROP, AND JITTER THROUGH METAL LINE AND VIA MATRIX INSERTION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority and the benefit of U.S. Non-Provisional Application No. 14/340,381, filed in the United States Patent and Trademark Office on July 24, 2014, the entire content of which is incorporated herein by reference.

BACKGROUND

Field

[0002] Various features generally relate to integrated circuits (IC), and more particularly to ICs and methods of manufacturing the same that feature metal line and via matrix insertion to reduce and/or mitigate electromigration, in-rush current effects including IR-voltage drop, and jitter.

Background

[0003] Electromigration is the transport of material caused by the movement of ions in a conductor due to the momentum transfer between conducting electrons and diffusing metal atoms. A conductor, such as a wire line or interconnect in an IC, is especially susceptible to electromigration when current densities through the conductor are relatively high. Electromigration decreases the reliability of ICs because it may result in voids (i.e., open circuit) and/or shorts along conductive paths within the IC, which may ultimately cause the IC to fail. As IC dimensions continue to decrease in size, electromigration increases in effect and significance.[0004] In-rush current is the maximum, instantaneous input current drawn by an electrical device or circuit when turned ON or otherwise activated in some way. For dynamically saving power, clock-gating is widely used on modern ICs. Consequently however, in-rush current issues result when large currents flow into a circuit when the clock-gating is turned OFF, which may cause considerable IR-voltage drop. The resulting IR-voltage drop may cause operational status changes in transistors, such as turning ON a transistor that is supposed to be OFF. Moreover, in-rush current issues are typical near power switches of the IC, which often makes it a location specific issue. However, chip area at such locations may be very limited due to the IC's design, and thus the amount of chip area occupied by a proposed solution to the in-rush current issue should be as small as possible.[0005] Jitter is the frequency deviation from the static periodicity of a periodic signal. The sources of jitter include power supply noise, data path noise, phase distortion on the circuit (e.g., caused by phase-lock-loops), etc. Jitter can be quite problematic for ICs related to many different applications.[0006] Very commonly, ICs of the prior art employ decoupling capacitors (e.g., "de-caps") to mitigate the above undesirable effects of electromigration, IR-voltage drop caused by in-rush currents, and jitter. Specifically, de-caps are inserted at strategic points in a circuit where one or more of the above problems are anticipated. However, de-caps have distinct drawbacks. First, they consume large chip areas, which in some locations of the IC (e.g., near a power switch) makes their use very impractical or difficult. Second, some de-caps consume significant power since they may include one or more transistors. Third, de-caps have a frequency derived impedance that is selected based on the anticipated operating frequency of the circuit.
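As a hedged aside (standard circuit theory, not text from this disclosure), the frequency dependence can be made concrete by modeling a de-cap as an ideal capacitor of capacitance C, whose impedance magnitude at operating frequency f is

$$|Z_C| = \frac{1}{2\pi f C}$$

Halving the operating frequency therefore doubles the de-cap's impedance magnitude, weakening the decoupling it was sized to provide.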
Problematically, changes to the operating frequency of the circuit (e.g., when the IC enters a lower power state) may negatively affect the performance of the de-cap, which may have to be re-tuned to re-optimize performance.[0007] There is a need for methods and devices that mitigate the problems associated with electromigration, in-rush current based IR-voltage drop, and jitter that consume less power, consume less chip area, and are robust to changes in the operating frequency of the IC.

SUMMARY

[0008] One feature provides a method of manufacturing an integrated circuit. The method comprises performing routing of the integrated circuit to generate a plurality of conductive paths across a plurality of metal layers, identifying a first conductive path of the plurality of conductive paths having a current and a current density, the first conductive path including at least a first metal line within a first metal layer, and after performing the steps of routing and identifying, forming an auxiliary conductive path that includes forming a first via, a second metal line, and a second via. The first via electrically couples to the second metal line that is in turn electrically coupled to the second via. The second metal line is positioned within a second metal layer that is different than the first metal layer, and the first and second vias are positioned between the first metal layer and the second metal layer. The first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces the current and the current density of the first conductive path by diverting a portion of the current flowing through the first conductive path. According to one aspect, a path length of the auxiliary conductive path is different than a path length of the first conductive path. According to another aspect, the path length of the auxiliary conductive path is greater than the path length of the first conductive path.[0009] According to one aspect, the method further comprises, after performing the steps of routing and identifying, forming a second auxiliary conductive path that includes a third metal line, a fourth metal line, and a fifth metal line, the third metal line electrically coupled to the fourth metal line that is in turn electrically coupled to the fifth metal line, the third, fourth, and fifth metal lines all positioned within the second metal layer, and the third and fifth metal lines electrically couple the fourth metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.
According to another aspect, a path length for each of the first conductive path, the auxiliary conductive path, and the second auxiliary conductive path are different from one another.[0010] According to one aspect, the method further comprises, after performing the steps of routing and identifying, forming a second auxiliary conductive path that includes forming a third via, a third metal line, and a fourth via, the third via electrically coupled to the third metal line that is in turn electrically coupled to the fourth via, the third metal line positioned within a third metal layer that is different than the first and second metal layers, the third and fourth vias positioned between the second and third metal layers, and the third and fourth vias electrically couple the third metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path. According to another aspect, forming the auxiliary conductive path further includes forming a third and fourth metal line in the second metal layer, a first end of the second metal line electrically coupled to the first via through the third metal line, and a second end of the second metal line electrically coupled to the second via through the fourth metal line.[0011] According to one aspect, an end of the second metal line includes a metal extension piece that extends beyond a juncture where the second metal line is coupled to a via, the extension piece adapted to collect atoms and/or form a void due to electromigration. According to another aspect, the auxiliary conductive path is formed by inserting the first and second vias and the second metal line into a layout design of the integrated circuit after placing and routing of the integrated circuit has been performed.[0012] Another feature provides an integrated circuit comprising a first conductive path that includes at least a first metal line within a first metal layer, and at least one auxiliary conductive path that includes a first via, a second metal line, and a second via. The first via is electrically coupled to the second metal line that is in turn electrically coupled to the second via, and the second metal line is positioned within a second metal layer that is different than the first metal layer. The first and second vias are positioned between the first metal layer and the second metal layer, and wherein the first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces the current and the current density of the first conductive path by diverting a portion of the current flowing through the first conductive path.[0013] According to one aspect, the integrated circuit further comprises a second auxiliary conductive path that includes a third metal line, a fourth metal line, and a fifth metal line, the third metal line electrically coupled to the fourth metal line that is in turn electrically coupled to the fifth metal line.
The third, fourth, and fifth metal lines are all positioned within the second metal layer, and the third and fifth metal lines electrically couple the fourth metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.[0014] According to one aspect, the integrated circuit further comprises a second auxiliary conductive path that includes a third via, a third metal line, and a fourth via, the third via electrically coupled to the third metal line that is in turn electrically coupled to the fourth via, the third metal line positioned within a third metal layer that is different than the first and second metal layers, the third and fourth vias positioned between the second and third metal layers, and the third and fourth vias electrically couple the third metal line to the second metal line such that the second auxiliary conductive path further reduces the current and the current density of the first conductive path by diverting an additional portion of the current flowing through the first conductive path.[0015] Another feature provides an integrated circuit prepared by the process comprising performing routing of the integrated circuit to generate a plurality of conductive paths across a plurality of metal layers, identifying a first conductive path of the plurality of conductive paths having a current and a current density, the first conductive path including at least a first metal line within a first metal layer, and after performing the steps of routing and identifying, forming an auxiliary conductive path that includes forming a first via, a second metal line, and a second via, the first via electrically coupled to the second metal line that is in turn electrically coupled to the second via, the second metal line positioned within a second metal layer that is different than the first metal layer, the first and second vias positioned between the first metal layer and the second metal layer, and wherein the first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces the current and the current density of the first conductive path by diverting a portion of the current flowing through the first conductive path.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] FIG. 1 illustrates a perspective view of an exemplary integrated circuit (IC) featuring metal line and via matrix insertion.[0017] FIG. 2 illustrates a schematic, cross-sectional view of the IC along the line 2-2.[0018] FIG. 3 illustrates a conceptual, perspective view of a first conductive path in the IC.[0019] FIG. 4 illustrates a conceptual, perspective view of a second conductive path within the IC featuring metal line-via matrix insertion.[0020] FIG. 5 illustrates a multi-stage buffer path.[0021] FIG. 6 illustrates the multi-stage buffer path after metal line-via matrix insertion. [0022] FIG. 7 illustrates the relative IR-voltage drop versus time for stages A, B, and C of the buffer path shown in FIGS. 5 and 6.[0023] FIG. 8 illustrates a conceptual, perspective view of a third conductive path within the IC featuring metal line-via matrix insertion.[0024] FIG. 9 illustrates a conceptual, perspective view of a fourth conductive path within the IC featuring metal line-via matrix insertion.[0025] FIG.
10 illustrates a conceptual, perspective view of a fifth conductive path within the IC featuring a metal line-via matrix insertion.[0026] FIG. 11 illustrates a flowchart for a method of manufacturing an integrated circuit.

DETAILED DESCRIPTION

[0027] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.[0028] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation. As used herein, the term "electrically coupled" refers to the direct or indirect coupling between two objects that allows for the flow of electrical current to take place between the two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered electrically coupled to one another (even if they do not directly physically touch each other) if object B is a conductor that allows for the flow of electrical current to take place from object A to object C and/or from object C to object A.

Overview

[0029] Integrated circuits and methods of manufacturing such circuits are disclosed herein that feature metal line-via matrix insertion after place and route processes are performed and/or completed for the integrated circuit's layout. The metal line-via matrix consists of one or more additional metal lines and one or more additional vias that are inserted into the integrated circuit's layout at a specific point to lower the current and current density through a first conductive path that has been determined to suffer from electromigration, IR-voltage drop, and/or jitter. Specifically, the metal line-via matrix provides one or more auxiliary conductive paths to divert and carry a portion of the current that would otherwise flow through the first conductive path. This mitigates electromigration issues and IR-voltage drop along the first conductive path. It may also help alleviate problems due to jitter along the path.

Exemplary ICs Featuring Metal Line and Via Matrix Insertion

[0030] FIG. 1 illustrates a perspective view of an exemplary integrated circuit (IC) 100 featuring metal line and via matrix insertion according to one aspect of the disclosure. The IC 100 may be any type of IC including, but not limited to, a processor, a processing circuit within a processor, a memory circuit, etc. The IC 100 may be found in any electronic device including electronic communication devices such as, but not limited to, mobile phones, computers, tablets, watches, glasses, etc. In the illustrated example, the IC 100 is a "flip-chip" IC. However, the methods and devices described herein equally apply to any other type of IC including wire-bonded ICs.[0031] FIG. 2 illustrates a schematic, cross-sectional view of the IC 100 along the line 2-2 (see FIG. 1).
The IC 100 includes a plurality of metal layers (e.g., M_A, M_B, M_C, M_D, etc.) having metal lines/traces 201, 202, 203 that may be electrically coupled together through conductive vias (V_A, V_B, V_C, etc.). The network of metal lines 201, 202, 203 and vias 204, 205 may, for example, electrically couple a transistor 206 or other circuit element(s) to other portions of the IC 100 such as other devices, power networks, ground networks, etc. by providing a conductive path. One or more of these metal lines 201, 202, 203 and/or vias 204, 205 may be susceptible to electromigration, IR-voltage drop caused by in-rush currents, and/or jitter, and thus the methods and devices for reducing these problematic effects can be applied to such an IC 100. For example, current I_1 flowing through the metal lines 201, 203 and via 205 may have a relatively high current density and cause electromigration, IR-voltage drop, and/or jitter problems. As described in greater detail below, the IC 100 includes metal line and via insertion to reduce the current density and current I_1, thereby alleviating electromigration, IR-voltage drop, and jitter. In the example shown, four (4) metal layers are depicted. However, in practice the methods and devices described herein apply to an IC having any plurality of metal and via layers.[0032] FIG. 3 illustrates, according to one non-limiting example, a conceptual, perspective view of a conductive path 300 in the IC 100. The conductive path 300 includes a first conductive path 302 that extends from point A to point B and includes the metal lines 201, 203 and via 205. The first conductive path 302 carries the current I_1 that flows according to the dashed, directional arrows shown in FIG. 3. Thus, current I_1 flows: (1) along the first metal line 201 in a direction starting from the positive X-axis towards the negative X-axis; (2) then down the via 205 (i.e., from the positive Y-axis to the negative Y-axis); and (3) then through the second metal line 203 in a direction from the negative Z-axis to the positive Z-axis. The magnitude of the current I_A entering point A is equal to the magnitude of the current I_B leaving point B. Since the first conductive path 302 shown is the only available path for the current I_A to flow into and the current I_B to flow out of, the magnitude of the current I_1 is equal to the magnitudes of the currents I_A and I_B. Thus, |I_A| = |I_1| = |I_B|.[0033] In the illustrated example, the first metal line 201 may be in a higher metal layer (e.g., metal layer M_C) than the second metal line 203 (e.g., in metal layer M_B), and the via 205 may be in via layer V_B. However, this is merely an example. The first metal line 201 may be in any metal layer that is different than the second metal line 203, and one or more vias 205 may electrically couple the two lines 201, 203 together. Similarly, the direction of the currents I_A, I_1, and I_B may be reversed.[0034] The conductive paths 300, 302 shown in FIG. 3 are generated after placing and routing of the IC 100 (or at least a portion of the IC 100 that includes the conductive paths 300, 302) is performed/completed. After the place and route design stage is performed, it may be determined (e.g., through simulation/testing) that the first conductive path 302 is susceptible to electromigration due to high current density and in-rush current induced IR-voltage drop due to the large current flowing through it. The conductive path 302 may also, or in the alternative, be susceptible to jitter issues.
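As a hedged aside using the standard definition of current density (an illustration, not text from this disclosure): for a line of cross-sectional area A carrying current I, the current density is J = I/A, so any current diverted away from the first conductive path 302 lowers the current density of its lines in proportion. The Python sketch below uses invented dimensions.

```python
# Hedged illustration (standard definition J = I / A, invented dimensions):
# halving the current through a metal line halves its current density.

def current_density(current_a, width_m, thickness_m):
    """Current density in A/m^2 for a rectangular metal line cross-section."""
    return current_a / (width_m * thickness_m)

i_1 = 1e-3            # assume 1 mA through the first conductive path
w, t = 50e-9, 100e-9  # hypothetical 50 nm x 100 nm line cross-section

print(current_density(i_1, w, t))        # full current: 2e+11 A/m^2
print(current_density(0.5 * i_1, w, t))  # half the current: 1e+11 A/m^2
```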
As discussed below, inserting one or more additional conductive paths composed of metal lines and vias into the layout design of the IC may reduce the current density and current of the first conductive path 302, and consequently alleviate electromigration, IR-voltage drop, and/or jitter problems.[0035] FIG. 4 illustrates a conceptual, perspective view of a conductive path 400 within the IC 100 featuring metal line-via matrix insertion according to one non-limiting example. The conductive path 400 extends from point A to point B and includes the first conductive path 302 (e.g., may be referred to as the "main conductive path") discussed above with respect to FIG. 3, and also includes an auxiliary conductive path 402 (e.g., may be referred to as a "second conductive path") formed by the insertion of additional metal lines 410, 412 and vias 420, 422, 424. (Inserted metal lines and vias, such as metal lines 410, 412 and vias 420, 422, 424, may be referred to herein as a "metal line-via matrix.") The first inserted metal line 410 may be in a different (e.g., lower) metal layer than the second inserted metal line 412. The first inserted metal line 410 and second inserted metal line 412 may be in the same metal layers as the second metal line 203 and first metal line 201 of the first conductive path 302, respectively. The inserted vias 420, 422, 424 may be in the same via layer as the via 205 of the first conductive path 302. The metal line-via matrix that comprises the auxiliary conductive path 402 is inserted into the layout design of the IC 100 after the place and route stage of the IC 100 (or some portion of the IC 100 that includes the first conductive path 302) has been performed.[0036] Similar to the conductive path 300 described above with respect to FIG. 3, the magnitude of the current IA in FIG. 4 entering point A is equal to the magnitude of the current IB leaving point B. However, unlike in FIG. 3, a portion of the current IA shown in FIG. 4 flows along the first conductive path 302 as current I1, and another portion of the current IA flows along the auxiliary conductive path 402 as current I2 along the dashed, directional arrows. Thus, in the example shown, the current I2 flows: (1) down through a first inserted via 420 in a direction from the positive Y-axis to the negative Y-axis; (2) through a first inserted metal line 410 in a direction from the negative Z-axis to the positive Z-axis; (3) up through a second inserted via 422 in a direction from the negative Y-axis to the positive Y-axis; (4) through a second inserted metal line 412 in a direction from the positive X-axis to the negative X-axis; and (5) then back down a third inserted via 424 in a direction from the positive Y-axis to the negative Y-axis, where it rejoins the first conductive path's 302 current I1 to form current IB that flows out from point B. Thus, |I1| + |I2| = |IB| = |IA|.[0037] In effect, the auxiliary conductive path 402 diverts a portion of the current that would otherwise ordinarily flow through the first conductive path 302. By diverting this current through the auxiliary conductive path 402 (e.g., generating current I2), the current density of the first conductive path 302 (e.g., carrying current I1) is reduced, and consequently any existing electromigration issues along the first conductive path 302 may also be reduced. Similarly, the amount of current (e.g., which may be an in-rush current) flowing through the first conductive path 302 is also reduced, resulting in reduced IR-voltage drop. Insertion of the metal line-via matrix may also help reduce jitter along the first conductive path 302.
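Treating the main and auxiliary paths as two lumped resistances in parallel gives a quick way to see how much of IA the inserted matrix diverts. The sketch below is a minimal illustration under that lumped-resistance assumption; the resistance and current values are made up for the example and are not taken from the disclosure.

```c
#include <stdio.h>

/* Two-branch current divider: the main path (R1) and the inserted auxiliary
 * path (R2) split the incoming current IA in inverse proportion to their
 * resistances, so |I1| + |I2| = |IA| = |IB|. */
int main(void)
{
    double IA = 10e-3;   /* 10 mA entering point A (illustrative) */
    double R1 = 2.0;     /* main path resistance, ohms (illustrative) */
    double R2 = 3.0;     /* longer auxiliary path, slightly higher resistance */

    double I1 = IA * R2 / (R1 + R2);  /* current remaining on the main path */
    double I2 = IA * R1 / (R1 + R2);  /* current diverted through the matrix */

    printf("I1 = %.2f mA, I2 = %.2f mA, I1 + I2 = %.2f mA\n",
           I1 * 1e3, I2 * 1e3, (I1 + I2) * 1e3);
    return 0;
}
```

With these example numbers the main path carries 6 mA instead of the full 10 mA, which is the current-density reduction the matrix is inserted to achieve.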
[0038] Moreover, besides reducing the amount of current flowing through the first conductive path 302, the metal line-via matrix provides additional features/properties that reduce in-rush current induced IR-voltage drops. The auxiliary conductive path 402 of the metal line-via matrix has a different length than the first conductive path 302, and consequently it takes a different amount of time for its current I2 to flow from point A to point B than the current I1. In the non-limiting example shown in FIG. 4, the second conductive path 402 is longer than the first conductive path 302, and thus it takes a longer period of time for its current I2 to flow from point A to point B than the current I1. Since the latencies of the auxiliary conductive path 402 and first conductive path 302 are different, in-rush current flowing through the general conductive path 400 (which includes paths 302, 402) is distributed across a longer time interval. This significantly reduces the impact (e.g., reduces IR-voltage drop) of a sudden influx of current. The optimal length of the auxiliary conductive path 402 (or optimal lengths of each auxiliary path in the case of multiple auxiliary paths (see, e.g., FIGS. 8 and 10)) may be decided by the resistor-capacitor delay (i.e., RC delay) associated with the conductive path 400 and the clock frequency of the circuit (e.g., clock frequency of IC 100) having the conductive path 400.[0039] FIGS. 5-7 together illustrate how the differing latencies (i.e., signal path delays) of the auxiliary conductive path 402 and the first conductive path 302 help distribute the in-rush current across a longer time interval to reduce the maximum in-rush current induced IR-voltage drop.[0040] FIG. 5 illustrates a multi-stage buffer path 500 according to one aspect of the disclosure. The buffer path 500 includes a first buffer 502, a second buffer 504, a third buffer 506, and a fourth buffer 508. Additional buffers (not shown) may follow the fourth buffer 508. The portion of the buffer path 500 between the first buffer 502 and the second buffer 504 may be considered stage A, the portion between the second buffer 504 and the third buffer 506 stage B, and the portion between the third buffer 506 and the fourth buffer 508 stage C. An in-rush current, designated by the dashed arrow in FIG. 5, flows through the buffer path 500. In FIG. 5, the conductive path coupling the first buffer 502 to the second buffer 504 is the conductive path 300 of FIG. 3, which includes the first conductive path 302.[0041] FIG. 6 illustrates the multi-stage buffer path 500 after metal line-via matrix insertion, where the conductive path coupling the first buffer 502 to the second buffer 504 is now the conductive path 400 of FIG. 4, which not only includes the first conductive path 302 but also includes the auxiliary conductive path 402. As discussed above, the auxiliary conductive path 402 has a different (e.g., longer) path delay than the first conductive path 302.[0042] FIG. 7 illustrates the relative IR-voltage drop versus time for stages A, B, and C of the buffer path 500 shown in FIGS. 5 and 6 according to the specific conductive path placed between the first and second buffers 502, 504 (e.g., either the conductive path 300 of FIG. 3 or the conductive path 400 of FIG. 4). The top third of FIG. 7 shows the in-rush current induced IR-voltage drop of the buffer path 500 when the conductive path 300 of FIG. 3, which includes only the first conductive path 302, electrically couples the first and second buffers 502, 504 to each other.
The observed maximum IR-voltage drop at the first conductive path 302 causes the voltage at stage A to drop to voltage V1, which is relatively low. This may cause circuit malfunction if, for example, the low voltage level causes some transistors to turn ON when they should be OFF or turn OFF when they should be ON, among other issues.[0043] The middle third of FIG. 7 shows the in-rush current induced IR-voltage drop of the buffer path 500 when the conductive path 400 of FIG. 4, which includes both the first conductive path 302 and the auxiliary conductive path 402, electrically couples the first and second buffers 502, 504 to each other. The observed maximum IR-voltage drop at the first conductive path 302 and the auxiliary conductive path 402 causes the voltage level at stage A of each of these paths 302, 402 to drop to about voltage V2 (where V2 is greater than V1) and to be time shifted with respect to each other because of their different path delays. Thus, the middle third of FIG. 7 shows the individual IR-voltage drop contribution of each conductive path 302, 402.[0044] The bottom third of FIG. 7 shows the in-rush current induced IR-voltage drop of the buffer path 500 when again the conductive path 400 of FIG. 4 electrically couples the first and second buffers 502, 504 to each other. The curve shown here represents the composite maximum IR-voltage drop at stage A, which causes the voltage level of the conductive path 400 to drop to voltage V3. Since V3 is greater than V1, inserting the metal line-via matrix reduces the maximum in-rush induced IR-voltage drop of the first conductive path 302 by an amount V3 - V1. The longer signal path delay associated with the auxiliary conductive path 402 causes the overall in-rush current to flow through the conductive path 400 over a greater period of time, causing the IR-voltage drop to lengthen in time from t0 to t2 instead of from t0 to t1. The later stages (e.g., stages B and C) of the buffer path 500 exhibit curves shaped very similarly to those of stage A, except with less pronounced (i.e., lower magnitude) in-rush current induced IR-voltage drop due to the effect of the buffers 504, 506.[0045] As mentioned above, the example shown in FIG. 4 of the auxiliary conductive path 402 is merely exemplary. Generally, a metal line-via matrix comprising one or more auxiliary conductive paths of any size and shape may be inserted after place and route of the IC 100 is performed and problematic conductive paths are identified that are prone to electromigration, IR-voltage drop, and/or jitter. The metal line-via matrix inserted may generally comprise a first conductive path that includes at least a first metal line within a first metal layer, and at least one auxiliary conductive path that includes: a second metal line within a second metal layer; a first via between the first metal layer and the second metal layer; and a second via between the first metal layer and the second metal layer. The first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces a current and a current density of the first conductive path by sharing/diverting a portion of a current flowing through the first conductive path.
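The benefit of the mismatched path delays in FIG. 7 can be checked numerically: two time-shifted voltage-drop pulses of half amplitude peak lower than one full-amplitude pulse. The triangular pulse model and all numbers in the sketch below are illustrative assumptions, not values taken from the figure.

```c
#include <stdio.h>

/* Triangular IR-drop pulse of a given peak, centered at t_c with half-width w. */
static double pulse(double t, double t_c, double w, double peak)
{
    double d = t - t_c;
    if (d < 0) d = -d;
    return (d >= w) ? 0.0 : peak * (1.0 - d / w);
}

int main(void)
{
    /* One path carrying all current: a single pulse of peak 1.0.
     * Two paths with different delays: two half-peak pulses offset in time. */
    double max_single = 0.0, max_split = 0.0;
    for (int i = 0; i <= 1000; i++) {
        double t = i * 0.01;                      /* 0 .. 10 (arbitrary units) */
        double single = pulse(t, 4.0, 1.0, 1.0);
        double split  = pulse(t, 4.0, 1.0, 0.5)   /* main path contribution */
                      + pulse(t, 5.2, 1.0, 0.5);  /* later, longer aux path */
        if (single > max_single) max_single = single;
        if (split  > max_split)  max_split  = split;
    }
    /* max_split < max_single: the same in-rush charge is spread over a longer
     * interval, mirroring the smaller worst-case droop (V3 vs. V1) in FIG. 7. */
    printf("single-path peak drop %.2f, split-path peak drop %.2f\n",
           max_single, max_split);
    return 0;
}
```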
Below are some additional non-limiting examples of metal line-via matrices according to different aspects that provide auxiliary conductive paths to reduce electromigration, IR-voltage drop, and/or jitter of a first conductive path.[0046] FIG. 8 illustrates a conceptual, perspective view of a conductive path 800 within the IC 100 featuring metal line-via matrix insertion according to another non-limiting example. Similar to the conductive path 400 shown in FIG. 4, the conductive path 800 of FIG. 8 extends from point A to point B and includes the first conductive path 302. The conductive path 800 also includes additional auxiliary current paths formed by the insertion of a metal line-via matrix that includes metal lines 410, 412, 810, 812, 814, 816 and vias 420, 422, 424. Some inserted metal lines 410, 814, 816 may be in a different (e.g., lower) metal layer than other inserted metal lines 412, 810, 812. Some inserted metal lines 410, 814, 816 may be in the same metal layer as the second metal line 203 of the first conductive path 302, and other inserted metal lines 412, 810, 812 may be in the same metal layer as the first metal line 201 of the first conductive path 302. The inserted vias 420, 422, 424 may be in the same via layer as the via 205 of the first conductive path 302. The metal line-via matrix shown in FIG. 8 is inserted after the place and route stage of the IC 100 (or some portion of the IC 100 that includes the first conductive path 302) has been performed.[0047] Similar to the conductive path 300 described above with respect to FIG. 3, the magnitude of the current IA in FIG. 8 entering point A is equal to the magnitude of the current IB leaving point B. However, unlike in FIG. 3, only a portion of the current IA shown in FIG. 8 flows along the first conductive path 302 as current I1. Other significant portions of the current IA flow along additional auxiliary conductive paths of the metal line-via matrix, represented by the currents I2 through I10. The auxiliary conductive path currents I2 through I10 generally flow in a direction from point A to point B as shown with the dashed, directional arrows.[0048] In effect, the inserted metal line-via matrix diverts a portion of the current that would otherwise ordinarily flow through the first conductive path 302. By diverting this current through the metal line-via matrix (e.g., currents I2 through I10), the current density and current of the first conductive path 302 are reduced, and consequently any existing electromigration, in-rush current induced IR-voltage drop, and/or jitter issues along the first conductive path 302 may also be reduced.[0049] FIG. 9 illustrates a conceptual, perspective view of a conductive path 900 within the IC 100 featuring metal line-via matrix insertion according to another non-limiting example. The conductive path 900 shown in FIG. 9 is very similar to the conductive path 400 shown in FIG. 4, except that the first metal line 201 of the first conductive path 302 and the first inserted metal line 410 of the metal line-via matrix include metal extension pieces 902, 904. That is, the first metal line 201 is extended to form the first extension piece 902, and the first inserted metal line 410 is formed to be longer so that it includes the second extension piece 904. The extension pieces 902, 904 extend beyond the juncture where the metal lines 201, 410 couple to the vias 205, 422.
The extension pieces 902, 904 act as atom and/or hole (i.e., void) collection pools that buffer the effects of atom and/or hole build up from electromigration. The extension pieces 902, 904 are carefully formed so that they do not extend out too close to other conductive paths of the IC 100, where atom build up from electromigration may cause a short. Similarly, since the ends 906, 908 of the extension pieces 902, 904 do not electrically couple to anything (i.e., they lead nowhere), if an open circuit along the extension pieces 902, 904 forms due to electromigration induced voids, the extension pieces 902, 904 will not cause failure of the conductive path 900. The extension pieces 902, 904 formed at the metal lines 201, 410 shown in FIG. 9 are merely examples. Extension pieces may be formed along any metal line and/or via of a conductive path (e.g., first conductive path and/or auxiliary conductive path) of the IC 100.[0050] The metal line-via matrix inserted can be of any size and shape (assuming no design rule check (DRC) violations). As such, FIG. 10 illustrates a conceptual, perspective view of a conductive path 1000 within the IC 100 featuring a metal line-via matrix insertion according to another non-limiting example. A first conductive path 1002 that included only a first metal line 1004 is the original conductive path that was prone to electromigration, in-rush current induced IR-voltage drop, and/or jitter. Consequently, the remaining metal lines 1010 and vias 1020 shown (not all are labeled in FIG. 10 for clarity) are inserted after place and route of the IC 100 to create auxiliary conductive paths (labeled aux; not all are labeled for clarity) to reduce the current density and current through the first conductive path 1002.[0051] Whereas the metal line-via matrix of FIG. 8 may be considered a "2x2" matrix, the one illustrated in FIG. 10 may be considered a "3x3" matrix since it includes inserted metal lines in three different metal layers (e.g., MA, MB, MC, etc.), and vias there between. Other non-limiting metal line-via matrix sizes include 4x4, 2x4, 4x2, 2x3, 3x2, 1x2, 2x1, etc.[0052] FIG. 11 illustrates a flowchart 1100 for a method of manufacturing an integrated circuit according to one aspect of the disclosure. First, routing of the integrated circuit is performed to generate a plurality of conductive paths across a plurality of metal layers 1102. Next, a first conductive path of the plurality of conductive paths is identified having a current and a current density, where the first conductive path includes at least a first metal line within a first metal layer 1104. Then, after performing the steps of routing and identifying, an auxiliary conductive path is formed that includes a first via electrically coupled to a second metal line that is electrically coupled to a second via. The second metal line is positioned within a second metal layer that is different than the first metal layer. The first and second vias are positioned between the first metal layer and the second metal layer. Moreover, the first and second vias electrically couple the first metal line to the second metal line such that the auxiliary conductive path reduces the current and the current density of the first conductive path by diverting a portion of the current flowing through the first conductive path 1106.
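The insertion decision in flowchart 1100 ultimately comes down to current density: if a post-route line would carry more than an electromigration-safe density, parallel paths are added until it would not. The following is a minimal sketch of that sizing step; the limits are made up, and the equal-split assumption among paths is an illustrative simplification rather than anything stated in the disclosure.

```c
#include <math.h>
#include <stdio.h>

/* After place and route (block 1102), a path is identified whose current
 * density exceeds an electromigration limit (block 1104). Assuming inserted
 * auxiliary paths split the current roughly equally, compute how many to
 * insert (block 1106). All numbers are illustrative. */
int main(void)
{
    double I_total = 12e-3;   /* current through the identified path, amperes */
    double area    = 0.5e-8;  /* metal line cross-section, cm^2 */
    double J_max   = 1.0e6;   /* assumed EM-safe current density, A/cm^2 */

    double J_now    = I_total / area;              /* density before insertion */
    int    paths    = (int)ceil(J_now / J_max);    /* total parallel paths needed */
    int    inserted = (paths > 1) ? paths - 1 : 0; /* auxiliary paths to insert */

    printf("J = %.2e A/cm^2 -> need %d path(s), insert %d auxiliary\n",
           J_now, paths, inserted);
    return 0;
}
```

With the example numbers the path runs at 2.4x the safe density, so two auxiliary paths would be inserted to bring the per-path density under the limit.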
[0053] Compared to de-coupling capacitors, utilizing metal line-via matrix insertion to combat electromigration, IR-voltage drop, and jitter as described above consumes significantly less power. Moreover, metal line-via matrices take up very little space compared to traditional de-caps.[0054] One or more of the components, steps, features, and/or functions illustrated in FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11 may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the invention. The algorithms described herein may also be efficiently implemented in software and/or embedded in hardware.[0055] Also, it is noted that the aspects of the present disclosure may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.[0056] Moreover, a storage medium may represent one or more devices for storing data, including read-only memory (ROM), random access memory (RAM), magnetic disk storage mediums, optical storage mediums, flash memory devices, and/or other machine-readable mediums, processor-readable mediums, and/or computer-readable mediums for storing information. The terms "machine-readable medium", "computer-readable medium", and/or "processor-readable medium" may include, but are not limited to, non-transitory mediums such as portable or fixed storage devices, optical storage devices, and various other mediums capable of storing or containing instruction(s) and/or data. Thus, the various methods described herein may be fully or partially implemented by instructions and/or data that may be stored in a "machine-readable medium", "computer-readable medium", and/or "processor-readable medium" and executed by one or more processors, machines and/or devices.[0057] Furthermore, aspects of the disclosure may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium or other storage(s). A processor may perform the necessary tasks. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc.
may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.[0058] The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0059] The methods or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executable by a processor, or in a combination of both, in the form of a processing unit, programming instructions, or other directions, and may be contained in a single device or distributed across multiple devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.[0060] Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.[0061] The various features of the invention described herein can be implemented in different systems without departing from the invention. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the invention. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art. |
Reducing memory fragmentation. Memory is allocated during a preboot phase of a computer system, wherein the memory is allocated based on a plurality of memory types. Fragmentation of memory is determined, wherein a fragment includes a contiguous block of memory of the same type. At least a portion of memory allocated to a firmware module is coalesced based on the plurality of memory types if the fragmentation is greater than a threshold. An operating system is booted by the computer system. |
1.A method for reducing memory fragmentation, comprising:allocating memory to a firmware module during a pre-boot phase of a computer system, wherein the memory is allocated based on a plurality of memory types;determining fragmentation of the memory, wherein a fragment includes a contiguous block of memory of the same type;coalescing at least a portion of the memory allocated to the firmware module based on the plurality of memory types if the fragmentation is greater than a threshold; andbooting an operating system by the computer system.2.The method of claim 1, wherein the computer system is not reset between coalescing memory allocated to the firmware module and booting the operating system.3.The method of claim 1, further comprising registering the firmware module with a scheduler if the firmware module supports coalescing, the scheduler tracking firmware modules that support coalescing.4.The method of claim 3, wherein the coalescing is initiated by one of the firmware module and the scheduler.5.The method of claim 1, wherein coalescing at least a portion of the memory allocated to the firmware module includes:initiating the coalescing by a firmware memory manager; andnotifying, by the firmware memory manager, the firmware module of a new memory location of the at least a portion of the memory allocated to the firmware module as a result of the coalescing.6.The method of claim 1, wherein the plurality of memory types include Advanced Configuration and Power Interface memory types.7.The method of claim 1, wherein coalescing at least a portion of memory allocated to the firmware module includes coalescing memory used by the firmware module as a data buffer.8.An apparatus for reducing memory fragmentation, comprising:allocation means for allocating memory to a firmware module by a firmware memory manager during a pre-boot phase of a computer system, wherein the memory is allocated based on a plurality of memory types;means for initiating a retrieve fragmentation protocol to determine fragmentation of the memory described by a pre-boot memory image, wherein a fragment includes a contiguous block of memory of the same type;means for initiating a coalescing protocol if the fragmentation is greater than a threshold, to coalesce at least a portion of the memory allocated to the firmware module based on the plurality of memory types; andmeans for booting an operating system on the computer system.9.The apparatus of claim 8, wherein the computer system is not reset between initiating the coalescing protocol and booting the operating system.10.The apparatus of claim 8, wherein the firmware module includes an instruction portion and a data buffer portion, and the coalescing protocol coalesces the data buffer portion.11.The apparatus of claim 8, further comprising:registration means for registering the firmware module with a scheduler if the firmware module is coded to support the coalescing protocol.12.The apparatus of claim 8, wherein initiating the coalescing protocol includes:initiating the coalescing protocol by the firmware memory manager; andnotifying, by the firmware memory manager, the firmware module of the new memory location of the at least a portion of the memory allocated to the firmware module as a result of the coalescing, wherein the firmware memory manager notifies the firmware module of the new memory location using a firmware module interface received from the firmware module.
13.The apparatus of claim 8, wherein the plurality of memory types include Advanced Configuration and Power Interface memory types.14.The apparatus of claim 8, wherein the apparatus complies with the Extensible Firmware Interface Specification.15.A system for reducing memory fragmentation, comprising:allocation means for allocating synchronous dynamic random access memory to a firmware module during a pre-boot phase of a computer system, wherein the synchronous dynamic random access memory is allocated based on a plurality of Advanced Configuration and Power Interface memory types;means for initiating a retrieve fragmentation protocol to determine fragmentation of the synchronous dynamic random access memory described by a pre-boot memory image, wherein a fragment includes a contiguous block of synchronous dynamic random access memory of the same Advanced Configuration and Power Interface memory type;means for initiating a coalescing protocol if the fragmentation is greater than a threshold, to coalesce, based on the plurality of Advanced Configuration and Power Interface memory types, at least a portion of the synchronous dynamic random access memory allocated to the firmware module; andmeans for booting an operating system on the computer system.16.The system of claim 15, wherein the computer system is not reset between initiating the coalescing protocol and booting the operating system.17.The system of claim 15, wherein initiating the coalescing protocol includes:initiating the coalescing protocol by a firmware memory manager; andnotifying the firmware module of a new memory location of the synchronous dynamic random access memory allocated to the firmware module as a result of coalescing by the coalescing protocol. |
Method, apparatus, and system for reducing memory fragmentationTechnical fieldEmbodiments of the present invention relate to the field of computer systems, and more specifically, but not exclusively, to reducing memory fragmentation.BackgroundIn a typical computer architecture, the initialization and configuration of a computer system by the basic input/output system (BIOS) is usually called the pre-boot phase. The pre-boot phase is generally defined as the firmware running between processor reset and the operating system (OS) loader. At the beginning of pre-boot, code in the firmware initializes the system to the point where an operating system loaded from a medium, such as a hard disk, can take over. The start of OS loading begins a period commonly referred to as OS runtime. During OS runtime, the firmware can serve as an interface between the software and hardware components of the computer system and handle system-related tasks. As computer systems have become more complex, the operating environment between the OS level and the hardware level is generally referred to as the firmware or the firmware environment.During pre-boot, a pre-boot memory image is formed, which is passed to the operating system. The pre-boot memory image indicates the memory addresses reserved for the system and the addresses available to the OS. The OS uses this pre-boot memory image to form its own memory management scheme. As the pre-boot phase becomes increasingly complex, the pre-boot memory image often becomes heavily fragmented. Many current operating systems cannot support pre-boot memory images with too many fragments.BRIEF DESCRIPTIONNon-limiting and non-exclusive embodiments of the present invention are described with reference to the following drawings, wherein, unless otherwise specified, like reference numbers refer to like parts in all views.FIG. 1 is a block diagram showing an embodiment of reducing memory fragmentation according to the teachings of the present invention.FIG. 2 is a block diagram illustrating one embodiment of an environment that supports reducing memory fragmentation in accordance with the teachings of the present invention.FIG. 3 is a flowchart illustrating one embodiment of the logic and operations for reducing memory fragmentation according to the teachings of the present invention.FIG. 4 is a flowchart illustrating one embodiment of the logic and operations to reduce memory fragmentation according to the teachings of the present invention.FIG. 5 is a flowchart illustrating one embodiment of the logic and operations to reduce memory fragmentation according to the teachings of the present invention.FIG. 6 is a block diagram showing an embodiment of a computer system that implements an embodiment of the present invention.DETAILED DESCRIPTIONIn the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present invention. However, those skilled in the art will understand that embodiments of the present invention may be practiced without one or more of these specific details, or with other methods, components, materials, etc.
In other instances, well-known structures, materials, or operations are not shown or described in detail so as not to obscure an understanding of the present invention.Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Therefore, the phrases "in one embodiment" or "in an embodiment" appearing in various places throughout the specification do not necessarily all refer to the same embodiment. Moreover, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.The firmware environment embodiments described herein may be implemented substantially in accordance with the Extensible Firmware Interface (EFI) (Extensible Firmware Interface Specification, Version 1.10, December 1, 2002, available at http://developer.intel.com/technology/efi). EFI enables firmware, in the form of firmware modules such as drivers, to be loaded from a variety of resources, including flash memory devices, option ROMs (read only memory), other storage devices such as hard disks or CD-ROMs (compact disk-read only memory), or from one or more computer systems over a computer network. One example of an implementation of the EFI specification is described in the Intel Platform Innovation Framework for EFI Architecture Specification - Draft for Review (Version 0.9, September 16, 2003), referred to below as the "framework" (available at www.intel.com/technology/framework). It should be understood that embodiments of the present invention are not limited to the "framework" or to implementations in accordance with the EFI specification.Referring now to FIG. 1, an embodiment of reducing memory fragmentation is shown. The pre-boot memory image 102 illustrates the allocation of physical memory addresses 0 to X during the pre-boot phase, shown at 110. In one embodiment, the pre-boot memory image 102 includes an Advanced Configuration and Power Interface (ACPI) E820 table (Advanced Configuration and Power Interface Specification, version 2.0b, October 11, 2002). In general, the E820 table indicates what physical memory is available to the OS, what is reserved for the system, and what is not present.During pre-boot, various memory types are allocated to the firmware modules 130-134. In one embodiment, a firmware module may include a set of related instructions, commonly referred to as code, and/or data buffers. In FIG. 1, the firmware module 130 includes instructions 130A and a data buffer 130B. The data buffer 130B can be used by the instructions 130A to store and retrieve various data for various purposes. The instruction and data buffers are not necessarily assigned to contiguous memory addresses, but can be in separate memory address blocks. A firmware module may include drivers, such as EFI drivers, function calls, protocols, and so on. Some firmware modules may not persist into OS runtime, while other firmware modules are available during both pre-boot and OS runtime.During loading, a firmware module may request that the firmware memory manager 106 allocate memory of one or more memory types in various sizes. In one embodiment, the firmware memory manager 106 may persist into OS runtime. The firmware memory manager 106 may assign the firmware module memory addresses corresponding to the requested memory type.
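To make the pre-boot memory image concrete, the sketch below shows an E820-style address range descriptor and a tiny example map; it assumes the 20-byte entry layout reported through the INT 15h E820 interface, and the type names and field names are illustrative, not taken from this patent.

```c
#include <stdint.h>

/* A minimal E820-style address range descriptor (illustrative layout). */
typedef struct {
    uint64_t base;    /* first physical address of the range */
    uint64_t length;  /* size of the range in bytes */
    uint32_t type;    /* 1 = AddressRangeMemory, 2 = AddressRangeReserved,
                         3 = AddressRangeACPI,   4 = AddressRangeNVS */
} e820_entry_t;

/* Example pre-boot memory image: usable RAM, a reserved firmware region,
 * then more usable RAM -- two type-1 fragments split by a type-2 range. */
static const e820_entry_t memory_image[] = {
    { 0x00000000, 0x0009F000, 1 },
    { 0x0009F000, 0x00001000, 2 },
    { 0x00100000, 0x3FF00000, 1 },
};
```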
This memory allocation may result in fragmentation of the pre-boot memory image 102. A fragment, as used herein, includes a contiguous block of memory of the same type. In the embodiment of FIG. 1, the pre-boot memory image 102 is fragmented across four types of memory, types 1-4. For example, fragment 120 is composed of memory of type 2.After coalescing, shown at 103, the coalesced pre-boot memory image 102 is shown at 101. The coalescing is based on memory types, so that memory of the same type is moved together to form contiguous blocks. In one embodiment, all memory fragments can be coalesced. In another embodiment, the pre-boot memory image is coalesced, one firmware module at a time, until memory fragmentation falls below a predetermined threshold level (discussed below). For example, in FIG. 1, fragment 120 and fragment 121 of type 2 memory are coalesced into fragment 122 of type 2 memory. Thus, the two fragments have been coalesced into one fragment.In one embodiment, the memory types include the types described in the ACPI specification. These ACPI memory types are shown in Table 1 below.Table 1: Type 1, AddressRangeMemory, available to the OS; Type 2, AddressRangeReserved, in use by or reserved for the system (not for OS use); Type 3, AddressRangeACPI, available to the OS after the OS reads the ACPI tables; Type 4, AddressRangeNVS, ACPI NVS memory, reserved for the system and not for OS use.In ACPI-compliant systems, during pre-boot, the firmware reports the E820 table to the Operating System-directed configuration and Power Management (OSPM). In short, OSPM is the interface between the operating system and the ACPI functions. When the OS takes control, the OS will use the E820 table to build its own memory image.In ACPI-compliant systems, there are various ways for the firmware to transfer the memory image to OSPM. In one method, the INT 15 BIOS interface is used in Intel Architecture (IA) systems to transfer the pre-boot memory image to the OS. In another method, if memory resources can be dynamically added or removed, a memory device is defined in the ACPI Namespace, which conveys the resource information describing the memory resources.In EFI-compliant systems, the EFI boot services function GetMemoryMap() can be used to transfer the pre-boot memory image to the OS loader. The image is then transferred to OSPM by the OS loader. GetMemoryMap() returns various EFI memory descriptors. These EFI memory descriptors define a system memory image of all installed random access memory (RAM) and of the ranges of physical memory reserved by the firmware. Table 2 below shows a sample of EFI memory types and their corresponding ACPI address range types.Table 2: EFI type 1, EfiLoaderCode, for the OS loader and/or OS, ACPI type 1; EFI type 2, EfiLoaderData, for the OS loader and/or OS, ACPI type 1; EFI type 3, EfiBootServicesCode, boot services code, ACPI type 1; EFI type 4, EfiBootServicesData, boot services data, ACPI type 1; EFI type 5, EfiRuntimeServicesCode, must be preserved by the OS and OS loader, ACPI type 2.The embodiments herein provide for reducing memory fragmentation. For example, Red Hat Linux and Suse Linux may not support pre-boot memory images with more than 32 fragments. In an ACPI system, this corresponds to 32 E820 entries. In another example, the Windows server operating system may not support pre-boot memory images with more than 64 fragments. As platform complexity increases and the firmware environment becomes more robust, fragmentation may exceed 32 fragments.
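To make the threshold comparison concrete, the sketch below counts the fragments in a sorted E820-style memory image of the kind shown earlier (reusing the illustrative e820_entry_t): adjacent ranges of the same type merge into one fragment, and the total is compared against an OS limit. The function name and threshold handling are illustrative assumptions, not from the patent.

```c
#include <stddef.h>

/* Count fragments in a sorted E820-style memory image: a fragment is a run of
 * address-contiguous entries that share the same type (a sketch; real firmware
 * would also handle unsorted maps and overlaps explicitly). */
static size_t count_fragments(const e820_entry_t *map, size_t n)
{
    size_t fragments = 0;
    for (size_t i = 0; i < n; i++) {
        int starts_new_fragment =
            (i == 0) ||
            (map[i].type != map[i - 1].type) ||
            (map[i].base != map[i - 1].base + map[i - 1].length);
        if (starts_new_fragment)
            fragments++;
    }
    return fragments;
}

/* Usage: trigger coalescing only when the image would overflow the OS limit,
 * e.g. 32 E820 entries for some Linux distributions. */
enum { FRAGMENT_THRESHOLD = 32 };
/* if (count_fragments(memory_image, n) > FRAGMENT_THRESHOLD) coalesce(); */
```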
The embodiments herein provide for dynamic pre-boot memory image defragmentation. Under a static scheme, various "buckets" of memory types could be pre-allocated. During pre-boot, a usage profile of each memory type could be stored in non-volatile storage based on an initial boot. The system would then be rebooted, and the "buckets" redistributed proportionally based on the usage profile.However, such a static solution is impractical. First, the static solution does not provide for changes to the system configuration, which can also change the fragmentation of the pre-boot memory image.Moreover, some platforms, such as servers, must meet "uptime" requirements of 5-9's. 5-9's is a standard customer requirement that the server must be available 99.999% of the time. This 5-9's availability is equivalent to an acceptable downtime of only a few minutes per year. This requirement cannot tolerate an excessively long reboot process to coalesce the pre-boot memory image, as in the case of the static solution. The embodiments herein allow defragmentation of the pre-boot memory image without requiring a reboot of the computer system.Turning to FIG. 2, an embodiment of a computer system 200 supporting reduced memory fragmentation is shown. The firmware 204 is layered on the hardware 202. The operating system 206 is layered on the firmware 204.The hardware 202 includes a processor 210, a flash memory 212, and a memory 208. In one embodiment, machine-accessible instructions for reducing memory fragmentation as described herein are stored in the flash memory 212. In another embodiment, instructions to reduce memory fragmentation substantially in accordance with the EFI specification are stored in the flash memory 212. In alternative embodiments, instead of or in addition to the flash memory 212, other types of non-volatile storage, such as read only memory (ROM), may be used.The firmware layer 204 supports the pre-boot memory image 102, the firmware memory manager 106, the firmware modules 130-134, the scheduler 220, and the threshold 240. In the embodiment of FIG. 2, the firmware memory manager 106 provides the coalescing protocol 216 and the retrieve fragmentation protocol 218. In one embodiment, instructions supporting the firmware memory manager 106, the coalescing protocol 216, and the retrieve fragmentation protocol 218 are all stored in the flash memory 212. The firmware memory manager 106, coalescing protocol 216, and retrieve fragmentation protocol 218 are described in detail below.In one embodiment, the threshold 240 is used to determine whether the memory described by the pre-boot memory image 102 needs to be coalesced. In one embodiment, the threshold 240 corresponds to 32 fragments, while in another embodiment, the threshold 240 corresponds to 64 fragments. In yet another embodiment, the threshold 240 is stored in non-volatile storage of the computer system 200 along with other system configuration information. Such non-volatile storage may include the flash memory 212, or non-volatile random access memory (NVRAM) storing EFI variables.In one embodiment, the scheduler 220 may be used with a fragmentation-reducing "opt-in" implementation scheme, described below in conjunction with FIG. 4.
In one embodiment of the "framework", the scheduler 220 includes a boot device selection (BDS) protocol.In general, the scheduler 220 receives a "handle" from a firmware module that reports opting in to the coalescing protocol. The handle provides a means for the scheduler 220 to identify and contact the firmware module. The scheduler 220 registers the firmware modules that support coalescing.Referring to FIG. 3, an embodiment of a flowchart 300 for reducing memory fragmentation is shown. Beginning at block 302, the computer system is started or reset. Proceeding to block 304, during pre-boot, memory is allocated to a firmware (F/W) module by the firmware memory manager. When the firmware module is loaded, the firmware module requests a memory type allocation from the firmware memory manager. The firmware memory manager assigns the firmware module a range of memory addresses of the requested type.Proceeding to decision block 306, the logic determines whether the firmware is ready to boot the operating system. In an embodiment in accordance with EFI, a ReadyToBoot event is sent to the system. In an EFI system, firmware entities, such as firmware modules, register to listen for a given set of events. When an event is triggered, such as the ReadyToBoot event, an entity is notified, and the entity performs various tasks in response, such as cleanup. In one embodiment, these tasks may include coalescing as described herein.If the answer to decision block 306 is "no", the logic returns to block 304 and continues to allocate memory to firmware modules. If the answer to decision block 306 is "yes", the logic continues to block 308.At block 308, the memory fragmentation is retrieved. In one embodiment, the retrieve fragmentation protocol is invoked, and the retrieve fragmentation protocol returns the number of fragments in the pre-boot memory image.Proceeding to decision block 310, the logic determines whether the fragmentation of the pre-boot memory image exceeds a threshold. If the answer is "no", the logic proceeds to block 312 to boot the operating system. If the answer is "yes", the logic proceeds to block 314 to coalesce the memory allocated to a firmware module.In one embodiment, coalescing moves a portion of a firmware module, such as the firmware module's data buffer, into contiguous blocks of like memory type, thereby reducing memory fragmentation. For example, in this particular embodiment, the instruction portion of the firmware module may remain in the originally allocated memory location, but the data buffer portion may be moved to a new location as part of the coalescing. In another embodiment, the entire firmware module is coalesced.In one embodiment, the coalescing protocol is invoked to coalesce the firmware module. The coalescing protocol can return a pointer to the new memory location of the firmware module. It should be understood that methods of defragmenting a storage medium, such as a memory or a hard disk, are well known to those skilled in the art.Proceeding to decision block 316, the logic determines whether there are more firmware modules to coalesce. If the answer is "yes", the logic returns to block 308. If the answer to decision block 316 is "no", the logic proceeds to block 312 to boot the OS.
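A minimal sketch of the flowchart 300 logic as an EFI-style event handler follows; the function names (retrieve_fragmentation, coalesce_module, and so on) are illustrative stand-ins for the protocols described above, not actual EFI interfaces, and the sketch reuses the FRAGMENT_THRESHOLD constant from the earlier fragment-counting example.

```c
/* Sketch of the ReadyToBoot-time check in flowchart 300. */
extern size_t retrieve_fragmentation(void);       /* retrieve fragmentation protocol */
extern int    coalesce_module(int module_index);  /* coalescing protocol, per module */
extern int    num_firmware_modules;

static void on_ready_to_boot(void)
{
    /* Blocks 308-316: coalesce one module at a time until fragmentation
     * falls below the threshold or no modules remain. */
    for (int m = 0; m < num_firmware_modules; m++) {
        if (retrieve_fragmentation() <= FRAGMENT_THRESHOLD)
            break;                    /* block 310: under threshold, done */
        coalesce_module(m);           /* block 314 */
    }
    /* Block 312: proceed to boot the OS; no system reset is required. */
}
```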
Turning now to FIG. 4, a flowchart 400 illustrates one embodiment of reducing memory fragmentation. Flowchart 400 shows a firmware module opt-in scheme, where firmware modules register whether they are coded to support coalescing.Beginning at block 402, the computer system is reset/started. Continuing to block 404, the coalescing protocol is announced to the firmware modules. In one embodiment, in order to announce the coalescing protocol, the firmware memory manager transmits to a firmware module a pointer indicating the memory address of the coalescing protocol.Continuing to block 405, memory is allocated by the firmware memory manager to the firmware modules loaded during pre-boot. Continuing to block 406, if a firmware module supports coalescing, this information is registered with the scheduler.Proceeding to decision block 408, the logic determines whether the system is ready to boot the operating system. If the answer to decision block 408 is "no", the logic returns to block 405. If the answer to decision block 408 is "yes", the logic continues to block 410.At block 410, the fragmentation of memory is retrieved using the retrieve fragmentation protocol. Proceeding to decision block 412, the logic determines whether the fragmentation is above the threshold. If the answer is "no", the logic continues to block 414 to boot the OS. If the answer to decision block 412 is "yes", the logic continues to block 416 to initiate the coalescing protocol for a firmware module. In one embodiment, the firmware module invokes the coalescing protocol. In another embodiment, the scheduler invokes the coalescing protocol.Proceeding to decision block 418, the logic determines whether more firmware modules support the coalescing protocol. In one embodiment, a list, maintained by the scheduler, of registered firmware modules that support coalescing is consulted.If the answer to decision block 418 is "no", the logic proceeds to block 414 to boot the OS. If the answer to decision block 418 is "yes", the logic returns to block 410 to retrieve the memory fragmentation.In the embodiment of FIG. 4, it should be understood that the firmware modules are coded to utilize coalescing, as described herein. The embodiments herein can be used with a set of firmware modules that is a mixture of coalescing participants and non-participants. The opt-in scheme of flowchart 400 allows firmware module providers to utilize coalescing as desired.
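The opt-in bookkeeping might look like the sketch below, where each coalescing-capable module hands the scheduler a handle it can later use to invoke the module's coalescing support. The types and names are illustrative assumptions, and the sketch reuses retrieve_fragmentation and FRAGMENT_THRESHOLD from the earlier examples.

```c
#define MAX_MODULES 64

/* Handle a module passes to the scheduler when opting in (illustrative). */
typedef struct {
    int   module_id;
    void (*coalesce)(void);   /* module's entry into the coalescing protocol */
} coalesce_handle_t;

static coalesce_handle_t registered[MAX_MODULES];
static int num_registered;

/* Block 406: a coalescing-capable module registers with the scheduler. */
static void scheduler_register(coalesce_handle_t h)
{
    if (num_registered < MAX_MODULES)
        registered[num_registered++] = h;
}

/* Blocks 410-418: walk only the opted-in modules, stopping early once
 * fragmentation drops below the threshold. */
static void scheduler_coalesce_all(void)
{
    for (int i = 0; i < num_registered; i++) {
        if (retrieve_fragmentation() <= FRAGMENT_THRESHOLD)
            break;
        registered[i].coalesce();
    }
}
```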
If the answer is "no", the OS is booted, as shown in block 512. If the answer is yes, the logic continues to block 514.At block 514, the firmware memory manager initiates the coalescing protocol for the firmware module. Proceeding to decision block 516, the logic determines whether there are more firmware modules that have not passed the coalescing process. If the answer to decision block 516 is yes, then the logic returns to block 508.If the answer to decision block 516 is "no", the logic proceeds to block 518 where the firmware memory manager notifies the coalesced firmware module of their new memory location. In one embodiment, the firmware memory manager transmits a pointer to the new location of their data buffer to the firmware modules. In one embodiment, the firmware memory manager may use the firmware module interface communicated to the firmware memory manager in block 505 to notify the firmware module.The embodiments herein provide for dynamically reducing memory segmentation without requiring a computer system reboot. Some IA32 and Extended Memory 64 Technology (EM64T) operating systems do not support more than 32 E820 entries. In short, EM64T enables 32-bit systems to address memory above 4 gigabyte rows. In EFI-compliant systems, the E820 table is essentially a transformation of the EFI memory image, so the number of E820 table entries is proportional to the degree of segmentation of the EFI firmware module.FIG. 6 shows one embodiment of an exemplary computer system 600 on which embodiments of the invention can be implemented. The computer system 600 includes a processor 602 and a memory 604 coupled to the chipset 606. Storage 612, non-volatile storage (NVS) 605, network interface 614, and input / output (I / O) device 618 may also be coupled to chipset 606.Examples of computer system 600 include, but are not limited to: desktop computers, notebook computers, servers, personal digital assistants, network workstations, and so on. In one embodiment, the computer system 600 includes at least a processor 602 coupled to the memory 604, and the processor 602 executes instructions stored in the memory 60.The processor 602 may include, but is not limited to: Intel Corporation x86, Pentium, Xeonor Itaniumseries processors, and the like. In one embodiment, the computer system 600 may include multiple processors. The memory 604 may include, but is not limited to: dynamic random access memory (DRAM), static random access memory (SRAM), synchronous dynamic random access memory (SDRAM), Rambus dynamic random access memory (RDRAM), and the like.The chipset 606 may include a memory controller hub (MCH), an input / output controller hub (ICH), and so on. The chipset 606 may also include system clock support, power management support, audio support, graphics support, and so on. In one embodiment, the chipset 606 is coupled to a board that includes sockets for the processor 602 and memory 604.The components of the computer system 600 can be connected by various buses, including peripheral component interconnect (PCI) bus, system management bus (SMBUS), small pin count (LPC) bus, serial peripheral interface (SPI) bus, and accelerated graphics port AGP) interface and so on. The I / O device 618 may include a keyboard, a mouse, a display, a printer, a scanner, and so on.The computer system 600 can interface with external systems through the network interface 614. 
The network interface 614 may include, but is not limited to, a modem, a network interface card (NIC), or other interfaces for coupling a computer system to other computer systems. A carrier signal 623 is received/transmitted by the network interface 614. In the embodiment shown in FIG. 6, the carrier signal 623 is used to interface the computer system 600 with a network 624, such as a local area network (LAN), a wide area network (WAN), the Internet, or any combination thereof. In one embodiment, the network 624 is also coupled to a computer system 625 such that the computer system 600 and the computer system 625 can communicate over the network 624.The computer system 600 also includes non-volatile storage 605 on which firmware and/or data can be stored. Non-volatile storage devices include, but are not limited to, read-only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), non-volatile random access memory (NVRAM), and the like. The storage 612 includes, but is not limited to, a magnetic hard disk, magnetic tape, an optical disk, and so on. It should be understood that instructions executable by the processor 602 may reside in the storage 612, the memory 604, or the non-volatile storage 605, or may be transmitted or received via the network interface 614.It should be understood that, in one embodiment, the computer system 600 may execute operating system (OS) software. For example, one embodiment of the present invention uses Microsoft Windows as the operating system of the computer system 600. Other operating systems that can also be used with the computer system 600 include, but are not limited to, the Apple Macintosh operating system, the Linux operating system, the Unix operating system, and so on.For the purposes of this specification, a machine-accessible medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable or accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). For example, machine-accessible media include, but are not limited to, recordable/non-recordable media (such as read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). In addition, machine-accessible media may include propagated signals, such as electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).Various operations of embodiments of the present invention are described herein. These operations can be implemented by a machine using a processor, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and so on. In one embodiment, one or more of the operations described may constitute instructions stored on a machine-accessible medium that, when executed by a machine, will cause the machine to perform the operations. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order-dependent. Those skilled in the art, having the benefit of this specification, will recognize alternative orderings.
Moreover, it should be understood that not all of the operations are necessarily present in each embodiment of the present invention.The above description of the illustrated embodiments of the present invention, including what is described in the abstract, is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Although specific embodiments and examples of the present invention are described herein for illustrative purposes, various equivalent modifications are possible, as will be understood by those skilled in the art. These modifications can be made to the embodiments of the present invention in light of the above detailed description. The terms used in the following claims should not be construed as limiting the invention to the specific embodiments disclosed in the specification. Rather, the following claims are to be interpreted in accordance with the established doctrines of claim interpretation. |
A method, apparatus, and system are described for rasterizing a triangle. Pixel parameter values are interpolated by adding or subtracting a vertical delta and/or by adding or subtracting a horizontal delta within a 4x4 tile of 16 pixels. |
What is claimed is: 1. A method of rasterization using a tile-based digital differential analyzer, comprising: determining a step-down value for each primitive edge, each step-down value comprising a value by which a horizontal coordinate will change when stepping down the corresponding edge; for each scan line, calculating a leftmost horizontal pixel sample coordinate and a rightmost horizontal pixel sample coordinate based on the step-down value for the appropriate primitive edge; and for each scan line region comprising at least one scan line on a raster display: determining the rightmost tile based on corresponding rightmost horizontal coordinates for the scan line region; and determining the leftmost tile by: determining the bottom-most of the leftmost horizontal coordinates for the scan line region; and determining the tile number comprising the bottom-most of the leftmost horizontal coordinates for the scan line region. 2. The method of claim 1, wherein the bottom-most of the leftmost horizontal coordinates for the scan line region comprises the lowest leftmost horizontal coordinate in the scan line region. 3. The method of claim 1, wherein said determining the tile number comprising the bottom-most of the leftmost horizontal coordinates for the scan line region comprises: determining binary coordinates comprising the bottom-most of the leftmost horizontal coordinates for the scan line region; and zeroing out the lowest two bits of the binary coordinates. 4. The method of claim 1, wherein the determining the rightmost tile based on corresponding rightmost horizontal coordinates for the scan line region comprises: determining the bottom-most of the rightmost horizontal coordinates for the scan line region; and determining the tile number comprising the bottom-most of the rightmost horizontal coordinates for the scan line region. 5. The method of claim 4, wherein the bottom-most of the rightmost horizontal coordinates for the scan line region comprises the highest rightmost horizontal coordinate in the scan line region. 6. The method of claim 4, wherein the determining the tile number comprising the bottom-most of the rightmost horizontal coordinates for the scan line region comprises: determining binary coordinates comprising the bottom-most of the rightmost horizontal coordinates for the scan line region; and zeroing out the lowest two bits of the binary coordinates. 7.
A system, comprising: a memory comprising primitive data; a setup processor coupled to the memory, the setup processor to access the primitive data in the memory to determine a step-down value for each edge of the primitive, the step-down value comprising a value by which the horizontal coordinate will change when stepping down the corresponding edge; an edge interpolator coupled to the setup processor to calculate, for each scan line region, a leftmost horizontal coordinate and a rightmost horizontal coordinate based on the step-down value for the appropriate primitive edge; a left FIFO (first-in-first-out) unit coupled to the edge interpolator to store the leftmost horizontal coordinates for each scan line; a right FIFO unit coupled to the edge interpolator to store rightmost horizontal coordinates for each scan line; and a tile generator to access the left FIFO unit and the right FIFO unit to determine, for each scan line region comprising at least one scan line: the rightmost tile based on the rightmost horizontal coordinates corresponding to a given scan line region; and the leftmost tile by: determining the bottom-most of the leftmost horizontal coordinates for the scan line region; and determining the tile number comprising the bottom-most of the leftmost horizontal coordinates for the scan line region. 8. The system of claim 7, wherein the tile generator determines the bottom-most of the leftmost horizontal coordinates for the scan line region by determining the lowest leftmost horizontal coordinate in the scan line region. 9. The system of claim 7, wherein the tile generator determines the tile number comprising the bottom-most of the leftmost horizontal coordinates for the scan line region by: determining binary coordinates comprising the bottom-most of the leftmost horizontal coordinates for the scan line region; and zeroing out the lowest two bits of the binary coordinates. 10. The system of claim 7, wherein the tile generator determines the rightmost tile based on corresponding rightmost horizontal coordinates for the scan line region by: determining the bottom-most of the rightmost horizontal coordinates for the scan line region; and determining the tile number comprising the bottom-most of the rightmost horizontal coordinates for the scan line region. 11. The system of claim 10, wherein the tile generator determines the bottom-most of the rightmost horizontal coordinates for the scan line region by determining the highest rightmost horizontal coordinate in the scan line region. 12. The system of claim 10, wherein the tile generator determines the tile number comprising the bottom-most of the rightmost horizontal coordinates for the scan line region by: determining binary coordinates comprising the bottom-most of the rightmost horizontal coordinates for the scan line region; and zeroing out the lowest two bits of the binary coordinates. 13.
A machine-readable medium having stored thereon data representing sequences of instructions, the sequences of instructions which, when executed by a machine, cause the machine to: determine a step-down value for each primitive edge, each step-down value comprising a value by which a horizontal coordinate will change when stepping down the corresponding edge; for each scan line, calculate a leftmost horizontal pixel sample coordinate and a rightmost horizontal pixel sample coordinate based on the step-down value for the appropriate primitive edge; and for each scan line region comprising at least one scan line on a raster display: determine the rightmost tile based on corresponding rightmost horizontal coordinates for the scan line region; and determine the leftmost tile by: determining the bottom-most of the leftmost horizontal coordinates for the scan line region; and determining the tile number comprising the bottom-most of the leftmost horizontal coordinates for the scan line region. 14. The machine-readable medium of claim 13, wherein the bottom-most of the leftmost horizontal coordinates for the scan line region comprises the lowest leftmost horizontal coordinate in the scan line region. 15. The machine-readable medium of claim 13, wherein the determining the tile number comprising the bottom-most of the leftmost horizontal coordinates for the scan line region comprises: determining binary coordinates comprising the bottom-most of the leftmost horizontal coordinates for the scan line region; and zeroing out the lowest two bits of the binary coordinates. 16. The machine-readable medium of claim 13, wherein the determining the rightmost tile based on corresponding rightmost horizontal coordinates for the scan line region comprises: determining the bottom-most of the rightmost horizontal coordinates for the scan line region; and determining the tile number comprising the bottom-most of the rightmost horizontal coordinates for the scan line region. 17. The machine-readable medium of claim 16, wherein the bottom-most of the rightmost horizontal coordinates for the scan line region comprises the highest rightmost horizontal coordinate in the scan line region. 18. The machine-readable medium of claim 16, wherein the determining the tile number comprising the bottom-most of the rightmost horizontal coordinates for the scan line region comprises: determining binary coordinates comprising the bottom-most of the rightmost horizontal coordinates for the scan line region; and zeroing out the lowest two bits of the binary coordinates. |
The present application is a continuation-in-part (CIP) based on and claims priority from U.S. patent application Ser. No. 09/608,414, filed Jun. 30, 2000. COPYRIGHT NOTICE Contained herein is material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent disclosure by any person as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights to the copyright whatsoever. FIELD OF THE INVENTION This invention relates to the field of computer graphics, and more specifically, to rasterization using a tile-based digital differential analyzer algorithm. BACKGROUND OF THE INVENTION Generally, the field of three-dimensional (3D) computer graphics is concerned with generating and displaying 3D objects in a two-dimensional (2D) space, such as a display screen. This is accomplished by converting information about 3D objects into a bit map that is displayed. This process is called rendering, a multi-part process by which a computer turns an application model description of an image into a screen image. The basic idea is that the processing of information in three-dimensional computer graphics occurs in a series of stages in a graphics pipeline, where each stage generates results for a successive stage. The process starts with an application model for describing an object using three-dimensional coordinates (x, y, z), where the object is defined by large numbers of basic geometrical shapes called primitives that define the shape of components of the object. Examples of primitives that make up an object include a triangle, line, dot, circle, ellipse, arc, text, polyline, and polygon. In addition to primitives, an application model stores object attributes such as size, color, line width, and surface texture, as well as connectivity relationships and positioning data that describe how the components fit together. The application model for a given object is created by an application program and stored in an application database. Using a graphics API (application programming interface), that is, a series of graphics output commands that contain both a detailed geometric description of what is to be viewed and the attributes describing how the objects should appear, the application program converts the application model to a sequence of commands, which are then processed by a graphics pipeline to generate a view of the model. The graphics API typically consists of a set of output subroutines corresponding to the various primitives, attributes, and other elements, which are all collected in a graphics package that can be called from high-level languages such as C, Pascal, or LISP. The basic element of any graphics system is rasterization, the process by which a primitive is converted to a two-dimensional image on a raster device. A raster device, such as a computer monitor, comprises a raster, the rectangular area of a display screen actually used to display images. A raster is itself made up of pixels, short for picture elements, the smallest units by which a primitive can be represented on a display. Pixels are activated on a raster device as an electron beam sweeps across the device to generate a picture one scan line at a time. During rasterization, a primitive that is defined by 3D parameters in a three-dimensional (3D) representation is transformed into a two-dimensional raster of pixels. 3D parameters comprise x, y, and z coordinates, and may optionally comprise parameters such as color and texture.
During the transformation process, a 3D coordinate comprising an X, Y, and Z value is transformed into an X and Y screen coordinate used for positioning, and a Z, or depth, value that is treated as a parameter. During rasterization, a set of parameter values is given for each of the three triangle vertices. One of the problems to be solved during the rasterization process is computing the 3D parameters, such as the Z parameter, color parameter, and texture parameter, corresponding to the coordinates in order to most closely approximate the three-dimensional primitive. Rasterization, which is also known as scan conversion, makes these determinations by computing the parameter values at each pixel while scanning the horizontal or vertical scan lines of the pixel grid. The rasterization process can be a costly and inefficient process, sometimes requiring many multiplication computations. While several algorithms exist, the process is commonly the subject of optimization algorithms. BRIEF DESCRIPTION OF THE DRAWINGS The embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements: FIG. 1 illustrates a triangle primitive and boundaries. FIG. 2 illustrates a section of the triangle primitive shown in FIG. 1. FIG. 3 shows components of tile-based DDA rasterization in preferred embodiments of the invention. FIG. 4 shows one tile in a raster grid. FIG. 5 shows a base pixel location within a tile. FIG. 6 shows movement between tiles in a raster grid. FIG. 7 illustrates computing a base parameter from the top vertex of a triangle primitive. FIG. 8 illustrates edge interpolation. FIG. 9 is a flowchart illustrating a method to determine triangle boundaries. FIG. 10 is a flowchart illustrating a method to compute pixel parameter values. FIG. 11 is a flowchart illustrating a method to rasterize a triangle. FIG. 12 is an example raster grid showing an interpolated triangle. DETAILED DESCRIPTION OF THE INVENTION According to one embodiment of the present invention, a rasterization method to convert geometric values of a triangle to pixels is described. A triangle is rasterized by determining the boundaries of the triangle on a grid, finding valid pixel samples, and calculating their values by interpolating down the vertical axis of the triangle and across the horizontal axis of the triangle within a block of pixels. The embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or logic circuits programmed with the instructions to perform the steps. Alternatively, the steps may be performed by a combination of hardware and software. The embodiments of the present invention may be provided as a computer program product which may include a machine-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform a process according to one embodiment of the present invention. The machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
For example, a machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing electronic instructions. Moreover, the embodiments of the present invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection). Accordingly, herein, a carrier wave shall be regarded as comprising a machine-readable medium. Introduction In rasterization, x and y define the triangle in screen space, with all the other parameters being interpolated to find their values at a specific point in screen space. There are two major parts to a rasterization process: edge interpolation, where the x and y coordinates of valid sample points (i.e., pixels which fall inside of the primitive boundaries) are determined; and pixel parameter computation, where parameters corresponding to the x and y coordinates are interpolated. i740 Tile-Based Rasterization A scan conversion technique used in the Intel i740 graphics processor scan converts a primitive one block, also known as a tile, at a time rather than one pixel at a time. Pixels are grouped into tiles, such as a 4*4 tile of 16 pixels. Starting from a first tile, valid pixels are found within the first tile. Once the tile is completed, a next tile is evaluated. Tile-based rasterization is computed as follows:
1. Compute the bounding box of the triangle.
2. Locate the top-most vertex, start span (ss), and reference point (the center) of the top left pixel in ss.
3. Normalize the parameters.
4. Compute the signed area a of the triangle using the determinant: a = (x1 - x0)*(y2 - y0) - (x2 - x0)*(y1 - y0).
5. Perform face culling using a, where the triangle is culled if a≤0 for counterclockwise triangles, and a≥0 for clockwise triangles.
6. Locate the top-most vertex, and rotate the vertices counterclockwise.
7. Compute the three edge equations as follows:
L0=Lx*X+Ly*Y
where Lx=deltaX*manDistRecip; Ly=deltaY*manDistRecip; deltaX=X1-X0; deltaY=Y1-Y0; manDist=abs(deltaX)+abs(deltaY); manDistRecip=1.0/manDist.
8. Compute the parameter plane equation as follows:
P0=C0+C1*x0+C2*y0
P1=C0+C1*x1+C2*y1
P2=C0+C1*x2+C2*y2
Using Cramer's rule, the coefficients for the plane equation can be computed as follows:
C1 = ((P1-P0)*(y2-y0) - (P2-P0)*(y1-y0)) / d
C2 = ((P2-P0)*(x1-x0) - (P1-P0)*(x2-x0)) / d
where d = (x1-x0)*(y2-y0) - (x2-x0)*(y1-y0), and C0 = P0 - C1*x0 - C2*y0.
Rather than generate all of the pixels to be rendered, a scan conversion unit first generates all of the blocks overlapped by the triangle as follows:
1. Locate the block containing the top vertex, and mark this as the start position.
2. Scan left until the termination condition is true. Termination condition means a) the bounding box of the triangle has been exceeded; or b) the center of the block is more than k/2 negative units away from one of the edges (i.e., there is one edge whose distance function evaluated at the center of the block is less than -k/2).
3. Go back to the start position and scan right until the termination condition is true.
4. Set the new start position to the ideal place to start the scan in the next line, and continue processing at 1. The ideal place to start the scan on the next line is the block whose bottom center is closest to the center of the triangle.
This is done by tracking the block with the maximum of minimum distance to all three edges. Once a block is generated, the distance functions for the edges are used to determine if a pixel is inside the triangle. All three distance functions have to be positive (or all three negative) for a pixel to lie inside the triangle. The distance functions can be evaluated using direct evaluation or interpolation. The same applies to the generation of the parameters for the pixels. In the i740, the parameter value for the top-most pixel is directly evaluated using the plane equations for the parameter, while the other pixels in the block are interpolated. To avoid having to scan through every pixel in the block (if any are visible), the i740 stores multiple interpolation step sizes in both x and y. (If pixel n is visible, n+1 is not, and n+2 is, then the parameter value for n+2 can be interpolated by adding the stored two-step value rather than requiring n+1 to be interpolated first.)
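Before moving on to the DDA method, the block-rejection test just described can be made concrete. The following C sketch builds Manhattan-normalized edge functions and tests whether a sample lies inside the triangle; it is a minimal sketch for illustration, not the i740's actual implementation. Note one assumption: the text above lists Lx = deltaX*manDistRecip, but a signed distance from an edge uses the edge normal (deltaY, -deltaX), which is what this sketch uses, along with an added constant term c that the coefficients above leave implicit.

    #include <math.h>

    typedef struct { double Lx, Ly, c; } Edge;

    /* Build a Manhattan-normalized edge function for the directed edge
     * (x0, y0) -> (x1, y1); points on the edge evaluate to zero. */
    static Edge make_edge(double x0, double y0, double x1, double y1)
    {
        double deltaX = x1 - x0;
        double deltaY = y1 - y0;
        double manDistRecip = 1.0 / (fabs(deltaX) + fabs(deltaY));
        Edge e;
        e.Lx = deltaY * manDistRecip;      /* normal component in x */
        e.Ly = -deltaX * manDistRecip;     /* normal component in y */
        e.c  = -(e.Lx * x0 + e.Ly * y0);
        return e;
    }

    static double edge_dist(Edge e, double x, double y)
    {
        return e.Lx * x + e.Ly * y + e.c;
    }

    /* All three distances positive (or all three negative) => inside. */
    static int inside_triangle(Edge e0, Edge e1, Edge e2, double x, double y)
    {
        double d0 = edge_dist(e0, x, y);
        double d1 = edge_dist(e1, x, y);
        double d2 = edge_dist(e2, x, y);
        return (d0 > 0 && d1 > 0 && d2 > 0) || (d0 < 0 && d1 < 0 && d2 < 0);
    }

In this form, the termination condition above amounts to evaluating edge_dist at a block center and comparing the result against -k/2.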
DDA Rasterization
Traditionally, a triangle is rasterized by interpolating down the triangle edges to each scan line, finding the first valid pixel on that scan line, and determining that pixel's value (x, y, z, color, texture, etc.). The next pixel's value on that scan line is determined by stepping across the x axis one pixel at a time (the step value in the x direction), and then computing each pixel's value on that scan line. This method is known as the digital differential analyzer (hereinafter "DDA") method. The step value for each parameter in the y direction down the longest edge of the triangle, and in the x direction, are determined through setup calculations. In reference to FIG. 1, the setup computation in DDA is determined as follows:
1. Calculate parameter deltas dpdy20 for stepping down the longest edge of the triangle.
Determine the longest edge. Sort the given vertices in top-to-bottom order, and label them V0, V1, V2. Using the y coordinates of the vertices, calculate the vertical lengths between vertices, where Lab represents the length from vertex a to vertex b.
L02=12.75; L01=7.25; L12=5.5
In this example, the edge from vertex 2 to vertex 0 is the longest vertical edge of the triangle.
Calculate parameter values at the first scan line below V0 on the longest vertical edge of the triangle (p02[i]), where i represents a given one of one or more parameters.
p02[i]=v0.v[i]+yoff*dpdy20[i], where i={z, texture, color, etc.}
Determine the parameter delta (dpdy20), i.e., the amount by which a parameter will change when stepping down the longest vertical edge of the triangle by one scan line.
dx20=v2.v[PVERT_X]-v0.v[PVERT_X]; where v2.v[PVERT_X] is the x coordinate at vertex 2, and v0.v[PVERT_X] is the x coordinate at vertex 0.
dy20=v2.v[PVERT_Y]-v0.v[PVERT_Y]; rdy20=1/dy20; dp20=v2.v[i]-v0.v[i]; dpdy20[i]=dp20*rdy20;
Determine the X start point for the longest edge. Determine the offset (yoff) from the y coordinate at V0: yoff=v0.v[PVERT_Y]-y02.
2. Calculate the parameter delta dpdx, i.e., the amount by which a given parameter i will change when stepping across the horizontal axis.
Compute the distance across the widest part of the triangle (dx13).
p3Ratio=dy21*rdy20; where dy21=v2.v[PVERT_Y]-v1.v[PVERT_Y] and rdy20=1/dy20;
xAtP3=v2.v[PVERT_X]-p3Ratio*dx20; dx13=xAtP3-v1.v[PVERT_X]
Determine if the triangle is scanned to the left or to the right. If dx13<0.0, then the triangle is scanned to the right. Otherwise, the triangle is scanned to the left.
Divide the distance by the number of steps (pixels).
rdx13=1.0f/dx13; pAtP3[i]=v2.v[i]-p3Ratio*dp20[i]; dp13[i]=v1.v[i]-pAtP3[i]; dpdx[i]=dp13[i]*rdx13;
3. Determine triangle boundaries.
Compute all the edge slopes. The other parameters are not needed for the other two edges, since the rasterizer only needs to know when to stop stepping across the scan line.
dx01=v0.v[PVERT_X]-v1.v[PVERT_X]; where v0.v[PVERT_X] is the x coordinate at vertex 0, and v1.v[PVERT_X] is the x coordinate at vertex 1.
dy01=v0.v[PVERT_Y]-v1.v[PVERT_Y]; rdy01=1/dy01; dxdy01=dx01*rdy01; m01=1/dxdy01
dx12=v1.v[PVERT_X]-v2.v[PVERT_X]; where v1.v[PVERT_X] is the x coordinate at vertex 1, and v2.v[PVERT_X] is the x coordinate at vertex 2.
dy12=v1.v[PVERT_Y]-v2.v[PVERT_Y]; rdy12=1/dy12; dxdy12=dx12*rdy12; m12=1/dxdy12
Compute the initial X value for each of the edges, where the initial X value is the value at the first scan line below the top vertex.
4. At the beginning of triangle processing, the starting values of the long edge and the starting values of the first short edge are loaded. The step values for calculating the pixel values (dpdy20, dpdx) are also loaded. Starting at the first scan line below V0, step across the current scan line and compute pixel values. Adjust p02[i] by the distance on the horizontal axis from the point on the edge of the triangle to the nearest pixel sample point. Then step the parameter values p02[i] across the scan line by adding the step value dpdx[i] once per pixel until the right edge of the triangle is reached (as determined by the slope of the right edge).
5. Start the next scan line by stepping down the long edge of the triangle, adding dpdy20[i] to the current parameter values.
6. Finally, the count representing the remaining length of the opposite edge is decremented by one. If the result is greater than 0, processing continues as described above. If the result is 0, then the edge parameters and initial values for the second edge are loaded, and processing for the second edge is initiated as described above for the first edge. When the count reaches 0 for the second edge, the triangle has been rasterized, and another triangle can be loaded.
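A minimal C sketch of the per-scan-line loop just described, for a single interpolated parameter (real hardware interpolates z, color, and texture in parallel). The x-extent bookkeeping is simplified to plain left/right arrays per scan line, and emit_pixel is an assumed output hook, not part of the method described above.

    extern void emit_pixel(int x, int y, double p);  /* assumed output hook */

    /* p02: parameter value on the long edge at the first scan line;
     * dpdy20 steps it down the long edge; dpdx steps it across a line. */
    void dda_rasterize(double p02, double dpdy20, double dpdx,
                       const int *xleft, const int *xright,
                       int ytop, int ybottom)
    {
        for (int y = ytop; y <= ybottom; ++y) {
            double p = p02;  /* sub-pixel adjustment to the first sample omitted */
            for (int x = xleft[y]; x <= xright[y]; ++x) {
                emit_pixel(x, y, p);
                p += dpdx;       /* one add per pixel across the line */
            }
            p02 += dpdy20;       /* step down the long edge */
        }
    }

Note that the omitted sub-pixel adjustment is exactly the per-scan-line multiply that the comparison below holds against the classic DDA method.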
While tile-based rasterization, such as that done by the i740 graphics chip, makes it easy to find the parameter values, the computations required to determine the boundaries of the triangle are rather complex. On the other hand, while the DDA method described above is simple, it requires a multiplication operation for each parameter on each scan line, requiring more processor power and more hardware.
In tile-based DDA rasterization, according to one embodiment of the present invention, valid pixels are determined by finding the boundaries of a triangle within a pre-specified scan line region, and then finding all the valid pixels one tile at a time within that region. Values for pixels are computed by interpolating down the vertical axis of a triangle to each scan line in the y direction (rather than down the longest edge of the triangle), and stepping across the x axis one pixel at a time in the x direction within a tile (rather than across an entire scan line), where a tile comprises a predetermined block of pixels. Depending on how much parallelism is in the tile rasterization hardware, either all pixel values within a tile can be computed first, and then the valid pixels selected, or pixels can first be checked for validity, and their values computed only if they are valid samples. In preferred embodiments, a scan line region comprises 4 scan lines, and a tile comprises a 4*4 block of 16 pixels.
The step value for each parameter down the vertical axis of the triangle in the y direction, and across the horizontal axis of the triangle in the x direction, are both determined through setup computations.
Set-Up Computations
The setup computation in tile-based DDA is determined as follows:
1. Calculate parameter deltas dpdy for stepping down the vertical axis of the triangle (rather than calculating dpdy20 for stepping down the longest edge of the triangle). As shown in FIG. 2, dpdy20 represents one step down the long edge of the triangle for a distance of 1.0 in y. The value dxdy20 represents the change in X of the long edge for a distance of 1.0 in the y direction; the ratio 1.0/dxdy20 is the slope of the edge. All of the dpdy20 values differ from the pure vertical dpdy by this slope.
Determine the slope (m20) of the longest edge.
dx20=v2.v[PVERT_X]-v0.v[PVERT_X]; where v2.v[PVERT_X] is the x coordinate at vertex 2, and v0.v[PVERT_X] is the x coordinate at vertex 0.
dy20=v2.v[PVERT_Y]-v0.v[PVERT_Y]; rdy20=1/dy20; dxdy20=dx20*rdy20; m20=1/dxdy20
Calculate the parameter delta dpdx for stepping across the horizontal axis: compute the distance across the widest part of the triangle, and divide the distance by the number of steps (pixels).
Subtracting the horizontal parameter component scaled by the edge slope (dxdy20*dpdx) from the edge delta (dpdy20) produces the vertical delta values at a cost of one add and one multiply per parameter (rather than a multiply per parameter per scan line):
dpdy=dpdy20-dxdy20*dpdx
2. Compute the edge slopes of the two shorter edges for determining the triangle boundaries.
3. Calculate a base parameter value. Find the tile comprising the top vertex, V0. As illustrated in FIG. 7, the parameters at V0 are then adjusted by the x coordinate distance and the y coordinate distance from V0 to the base position. The equation to determine the base parameter for a given parameter, p, is as follows:
pBase=p0-Dx*dpdx-Dy*dpdy
All subsequent parameter value computations are available using simple additions of dpdx and dpdy, or two or four times those values.
Overview
As shown in FIG. 3, an edge interpolator 300 determines the bounds of a triangle by storing the leftmost points from each scan line in the Left Edge FIFO 302, and the rightmost points from each scan line in the Right Edge FIFO 304. Using these points, a tile generator 306 determines the leftmost tile and the rightmost tile to process from each scan line region, which in preferred embodiments is a region comprising four scan lines. A sample generator 308 then determines pixel values.
A triangle is rasterized in subsets of pixels called tiles. In preferred embodiments, a tile comprises a 4*4 block of 16 pixels.
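The two setup identities just given, dpdy = dpdy20 - dxdy20*dpdx and pBase = p0 - Dx*dpdx - Dy*dpdy, translate directly into code. The following C sketch computes the setup for one parameter; the struct and function names are assumptions made for illustration, not names from the embodiment.

    typedef struct {
        double dpdx;   /* horizontal parameter delta */
        double dpdy;   /* pure vertical parameter delta */
        double pBase;  /* parameter value at the tile base position (1.0, 1.0) */
    } ParamSetup;

    /* dpdy20: parameter delta per step down the long edge; dxdy20: change
     * in x per step down the long edge; p0: parameter value at the top
     * vertex V0; Dx, Dy: distances from V0 to the base position of its tile. */
    ParamSetup setup_param(double dpdy20, double dxdy20, double dpdx,
                           double p0, double Dx, double Dy)
    {
        ParamSetup s;
        s.dpdx  = dpdx;
        s.dpdy  = dpdy20 - dxdy20 * dpdx;   /* one add and one multiply */
        s.pBase = p0 - Dx * s.dpdx - Dy * s.dpdy;
        return s;
    }

After this setup, every pixel and tile step in the sections that follow is an addition of dpdx, dpdy, or two or four times those values.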
FIG. 4 illustrates a given tile in a raster, where the circles represent positions of valid sample points for pixels within the tile. In this illustration, a pixel is a valid sample point if the pixel's center falls on the (x.0, y.0) position of a grid, where x and y are integers. In other embodiments, however, the pixel center may fall on other positions of a grid (i.e., x.5, y.5) without departing from the scope of the embodiments of the present invention. A sample value between 0.0 and 4.0, including 0.0, belongs to this tile: (0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (3.0, 1.0), (0.0, 2.0), (1.0, 2.0), (2.0, 2.0), (3.0, 2.0), (0.0, 3.0), (1.0, 3.0), (2.0, 3.0), (3.0, 3.0). A sample value of exactly 4.0 belongs to the next tile: (0.0, 4.0), (1.0, 4.0), (2.0, 4.0), (3.0, 4.0), (4.0, 0.0), (4.0, 1.0), (4.0, 2.0), (4.0, 3.0), (4.0, 4.0).
Parameter values are computed at a base position within a tile, as shown in FIG. 5. The base position is a position within a given tile from which other valid sample points may be computed by adding or subtracting the parameter delta. As shown in FIG. 6, stepping from one tile to the next requires adding 4 times dpdy to the base position values to go down to the next tile, or adding or subtracting 4 times dpdx to go right or left of a given tile. In preferred embodiments, the base position is the grid position (1.0, 1.0) within a tile.
Determining Triangle Boundaries
An edge interpolator steps down the triangle edges and stores the leftmost and rightmost x values into the Left Edge FIFO (First-In-First-Out) Unit and the Right Edge FIFO Unit for each step down the triangle edges. Each x value is obtained by adding the corresponding edge slope; for the longest edge this is dxdy20, the change in x for a step down the longest edge of the triangle. Consequently, we step by dxdy20 on each scan line until the bottom vertex V2 is reached.
LeftEdgeFIFO: V0->V2
This can be expressed in the following equation:
xstore20=xcurrent+dxdy20
LEFTEDGEFIFO(a)=(int)xstore20+1; or LEFTEDGEFIFO(a)=int(xstore20+1)
where a = an integer that represents the current scan line, or y value.
Starting at the scan line below the top vertex, step down the longest edge by adding dxdy20 to the current x value. Take the integer value of the resulting x (xstore20) and add 1 (or take the integer value of xstore20+1). Store the result in the LeftEdgeFIFO. We stop once the current y value is greater than or equal to the y value of V2 (or once a predetermined count reaches a stop count, such as 0).
In reference to FIG. 12, the Left Edge FIFO unit stores the leftmost x values for the edge defined by V0 to V2 as follows:
In this example, we start at scan line y=2, where x=10.25. We store x=11.
At y=3, we take 10.25 and add -0.68 and end up with x=9.57. We store x=10.
At y=4, we take 9.57 and add -0.68 and end up with x=8.89. We store x=9.
At y=5, we take 8.89 and add -0.68 and end up with x=8.21. We store x=9.
At y=6, we take 8.21 and add -0.68 and end up with x=7.53. We store x=8.
At y=7, we take 7.53 and add -0.68 and end up with x=6.85. We store x=7.
At y=8, we take 6.85 and add -0.68 and end up with x=6.17. We store x=7.
At y=9, we take 6.17 and add -0.68 and end up with x=5.49. We store x=6.
At y=10, we take 5.49 and add -0.68 and end up with x=4.81. We store x=5.
At y=11, we take 4.81 and add -0.68 and end up with x=4.13. We store x=5.
At y=12, we take 4.13 and add -0.68 and end up with x=3.45. We store x=4.
At y=13, we take 3.45 and add -0.68 and end up with x=2.77. We store x=3.
Since the current y value of 14.00 is equal to v2.v[PVERT_Y]=14.00, we stop here.
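The left-edge walk just shown is a pure add-and-floor loop. A minimal C sketch, assuming the FIFO is modeled as a plain array indexed by scan line (the hardware units above are actual queues):

    /* Fill the left-edge FIFO for the long edge V0->V2, matching
     * LEFTEDGEFIFO(a) = (int)xstore20 + 1 from the text above. */
    void fill_left_fifo(int *leftFifo, double xstore20, double dxdy20,
                        int yfirst, double ystop)
    {
        for (int y = yfirst; (double)y < ystop; ++y) {
            leftFifo[y] = (int)xstore20 + 1;  /* leftmost valid x on line y */
            xstore20 += dxdy20;               /* step down the edge */
        }
    }

Calling fill_left_fifo(leftFifo, 10.25, -0.68, 2, 14.0) reproduces the stored sequence 11, 10, 9, 9, 8, 7, 7, 6, 5, 5, 4, 3 from the example above. The right-edge FIFOs described next are filled the same way, except that (int)xstore is stored without the +1 and the step is dxdy10 or dxdy12.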
RightEdgeFIFO: V0->V1
This can be expressed in the following equation:
xstore10=xcurrent+dxdy10
RIGHTEDGEFIFO10(a)=(int)xstore10
where a = an integer that represents the current scan line, or y value.
Starting at the scan line below the top vertex, step down the first shorter edge, V0 to V1, by adding dxdy10 to the current x value. Take the integer value of the resulting x, and store the result in the RightEdgeFIFO10 for the appropriate scan line. We stop once the current y value is greater than or equal to the y value of V1 (or once a predetermined count reaches a stop count, such as 0).
In reference to FIG. 12, the Right Edge FIFO unit stores the rightmost x values for the edge defined by V0 to V1 as follows:
In this example, we start at scan line y=2, where x=11.065. We store x=11.
At y=3, we take 11.065 and add 0.42 and end up with x=11.485. We store x=11.
At y=4, we take 11.485 and add 0.42 and end up with x=11.905. We store x=11.
At y=5, we take 11.905 and add 0.42 and end up with x=12.325. We store x=12.
At y=6, we take 12.325 and add 0.42 and end up with x=12.745. We store x=12.
At y=7, we take 12.745 and add 0.42 and end up with x=13.165. We store x=13.
At y=8, we take 13.165 and add 0.42 and end up with x=13.585. We store x=13.
Since the current y=9 value is greater than v1.v[PVERT_Y]=8.5, we stop here.
RightEdgeFIFO: V1->V2
This can be expressed in the following equation:
xstore12=xcurrent+dxdy12
RIGHTEDGEFIFO12(a)=(int)xstore12
where a = an integer that represents the current scan line, or y value.
Starting at the scan line below the middle vertex, step down the second shorter edge, V1 to V2, by adding dxdy12 to the current x value. Take the integer value of the resulting x, and store the result in the RightEdgeFIFO12. We stop once the current y value is greater than or equal to the y value of V2 (or once a predetermined count reaches a stop count, such as 0).
In our example, the Right Edge FIFO unit stores the rightmost x values for the edge defined by V1 to V2 as follows:
In this example, we start at scan line y=9, where x=12.75. We store x=12.
At y=10, we take 12.75 and add -2.09 and end up with x=10.66. We store x=10.
At y=11, we take 10.66 and add -2.09 and end up with x=8.57. We store x=8.
At y=12, we take 8.57 and add -2.09 and end up with x=6.48. We store x=6.
At y=13, we take 6.48 and add -2.09 and end up with x=4.39. We store x=4.
Since the current y=14.00 value is equal to v2.v[PVERT_Y]=14.00, we stop here. These coordinates all correspond to the black dots on FIG. 12.
For each four-line scan region (i.e., in FIG. 12, lines 0, 1, 2, 3; lines 4, 5, 6, 7; etc. on the y axis), a leftmost tile to be accessed for that region and a rightmost tile to be accessed for that region are determined by the tile generator.
For the right edge, a positive edge slope (x increases as we move down the edge) indicates that the bottom scan line contains the rightmost valid sample, and a negative edge slope (x decreases as we move down the edge) indicates that the top scan line contains the rightmost valid sample. For the left edge, the rule is mirrored: a negative edge slope indicates that the bottom scan line contains the leftmost valid sample, and a positive edge slope indicates that the top scan line contains the leftmost valid sample.
For the four-line region comprising the middle vertex, the same rules above apply if both edges have positive slopes or both edges have negative slopes. However, if one edge has a positive slope and the other edge has a negative slope, then the middle vertex itself is the rightmost or the leftmost point.
Any tiles that exist to the left of the leftmost tile in a four-scan-line region are not visited, and any tiles that exist to the right of the rightmost tile in a four-scan-line region are not visited.
Tile Generation
Since the first tile base position may not always comprise the top-most vertex, we start at the base position of the tile comprising the top-most vertex to save processing time. To begin rasterizing the triangle, a tile generation unit starts interpolating values from the base parameter values in the tile containing the top-most vertex.
Finding The Top-Most Vertex
The tile comprising the top vertex is determined by taking the integer value of the vertex coordinates, and zeroing out the two low-order bits of the binary x and y values. In the example of FIG. 12, the top vertex comprises (10.75, 1.25). We take the integer value to obtain (10.00, 1.00), which is represented as (1010, 0001) in binary. We then zero out the two low-order bits to obtain (1000, 0000), which is (8, 0) in decimal. We will call this tile T(8, 0).
Starting at tile T(8, 0), all pixel values within the tile can be computed first, and then the valid samples selected. In this embodiment, it is preferable that 8 sample values are computed per cycle, for a rate of 2 cycles per tile, and then the valid pixels are selected from the 8 computed values. Alternatively, and preferably, all valid pixels are determined first, and then their values computed.
In preferred embodiments, therefore, we first find the valid pixel samples by retrieving the leftmost x value and the rightmost x value for each scan line, which are both valid pixel samples, and then interpolating values for those samples, as well as samples in between, filling the triangle a tile at a time, based on the base position (1.0, 1.0) within a tile.
Since the values for the base parameters have already been determined during setup, it is just a matter of adding or subtracting dpdx and dpdy values to step from the value of one pixel to another. In other words, determining the value of the next pixel merely requires adding dpdy if the next pixel is below, or subtracting dpdy if the next pixel is above. Likewise, the value of the pixel to the left is calculated by subtracting dpdx, and the value of the pixel to the right is calculated by adding dpdx.
FIG. 7 illustrates the base position B in relation to the top vertex, V0. FIG. 8 illustrates edge interpolation using valid pixel samples within tiles.
Tile Comprising The Top-Most Vertex
For the scan line region comprising the top-most vertex, we determine a leftmost tile and a rightmost tile so that we can process one tile at a time.
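Zeroing out the two low-order bits, as described above, is a bitwise AND with ~3. A minimal C sketch of the pixel-to-tile mapping used in the steps that follow (the helper name is an assumption):

    typedef struct { int x, y; } Tile;

    /* Map a pixel coordinate to the origin of its 4*4 tile by zeroing
     * the two low-order bits of the integer x and y (assumes
     * non-negative screen coordinates). */
    Tile tile_of(double px, double py)
    {
        Tile t;
        t.x = ((int)px) & ~3;   /* e.g., 10 = 1010b -> 8 = 1000b */
        t.y = ((int)py) & ~3;   /* e.g.,  1 = 0001b -> 0 = 0000b */
        return t;
    }

For the top vertex (10.75, 1.25) of FIG. 12, tile_of returns T(8, 0), matching the example above; the masked-off low bits of each coordinate are exactly the pixel's grid position within its tile.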
To determine a leftmost tile in the region, the x values for scan lines 2 and 3 on the longest edge are analyzed:
at y=2, x=10.25
at y=3, x=9.57
Using the rules described above, since x decreases as we move down the edge, the bottom scan line (y=3) contains the leftmost valid sample. We obtain the coordinates at scan line 3 from the left edge FIFO, which are (10, 3), and we determine the tile it is in. Using the method described, supra, we find that it is in tile T(8, 0).
To determine a rightmost tile in the region, the x values for scan lines 2 and 3 on the first shorter edge are determined:
at y=2, x=11.065
at y=3, x=11.485
For the right edge, since x increases as we move down the edge, the bottom scan line contains the rightmost valid sample. We obtain the coordinates at scan line 3 from the right edge FIFO, which are (11, 3), and we determine the tile it is in. Using the method described, supra, we find that it is also in tile T(8, 0).
Since the leftmost tile and the rightmost tile are the same, we process tile T(8, 0) as follows. As LEFTEDGEFIFO(2) (i.e., the x value at y=2) is equal to 11, and RIGHTEDGEFIFO(2) is equal to 11, there is only one valid sample for scan line y=2. Using the two low-order bits of the integer pixel address in x and y, the raster coordinates (11, 2) are calculated to be equal to the grid position (3.0, 2.0).
Using pixel values from the base position (1.0, 1.0) of the tile comprising the top-most vertex, we determine the pixel values for the grid position (3.0, 2.0) as follows:
(1.0, 1.0): GIVEN
(3.0, 2.0): add (2dpdx, dpdy)
Next, the scan line is incremented by 1, so that y=3. LEFTEDGEFIFO(3) is equal to 10, and RIGHTEDGEFIFO(3) is equal to 11, so there are 2 valid samples for this scan line. The raster coordinates (10, 3) are equal to the grid position (2.0, 3.0). Using pixel values from the base position (1.0, 1.0), we determine the pixel values for the grid position (2.0, 3.0) as follows:
(1.0, 1.0): GIVEN
(2.0, 3.0): add (dpdx, 2dpdy)
The raster coordinates (11, 3) are equal to the grid position (3.0, 3.0). Using pixel values from the base position (1.0, 1.0), we determine the pixel values for the grid position (3.0, 3.0) as follows:
(1.0, 1.0): GIVEN
(3.0, 3.0): add (2dpdx, 2dpdy)
Steering To The Next Tile
To steer to, or find, the next tile to process, we determine a leftmost tile and a rightmost tile for the next scan line region, which in our example comprises scan lines 4-7. Using the method described above, we determine that the bottom scan line, y=7, contains both the leftmost and the rightmost valid samples. At y=7, we obtain the x value from the left edge FIFO, which gives us the coordinates (7, 7). Using the method described above, we determine that (7, 7) belongs to tile T(4, 4), and that tile T(4, 4) is the leftmost tile. We then obtain the x value from the right edge FIFO, which gives us the coordinates (13, 7). We then find that this coordinate belongs to tile T(12, 4), and that tile T(12, 4) is the rightmost tile.
Processing proceeds at the tile comprising the next scan line, and then progresses to the left and/or to the right in accordance with the leftmost tile and rightmost tile. The next scan line is y=4, where the coordinates are (9, 4), and the tile is T(8, 4). At the left edge FIFO, x=9, and at the right edge FIFO, x=11, so the valid pixel samples comprise x={9, 10, 11}.
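All of the per-pixel additions in this worked example follow one pattern: offset from the base position (1.0, 1.0) of the current tile by whole multiples of dpdx and dpdy, and offset between tile bases by four times those deltas. A compact C sketch of that bookkeeping, used in the steps that follow (function names assumed):

    /* Parameter value at grid position (gx, gy) of a tile whose base
     * position (1.0, 1.0) carries the value pBase. */
    double param_at(double pBase, double dpdx, double dpdy, int gx, int gy)
    {
        return pBase + (gx - 1) * dpdx + (gy - 1) * dpdy;
    }

    /* Base value of a neighboring tile: tiles are 4 pixels apart, so
     * bases step by 4*dpdx horizontally and 4*dpdy vertically. */
    double tile_base(double pBase, double dpdx, double dpdy,
                     int tilesRight, int tilesDown)
    {
        return pBase + 4.0 * tilesRight * dpdx + 4.0 * tilesDown * dpdy;
    }

For instance, grid position (3.0, 2.0) above is param_at(pBase, dpdx, dpdy, 3, 2) = pBase + 2*dpdx + dpdy, and the base of tile T(8, 4) used next is tile_base(pBase, dpdx, dpdy, 0, 1) relative to T(8, 0).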
For each valid pixel, we compute the pixel value as follows. To find the pixel values for the base position in this tile (we only know the base position pixel values for the tile comprising the top-most vertex), we make calculations based on the tile comprising the top-most vertex, T(8, 0). To get from tile T(8, 0) to tile T(8, 4), the dpdx contribution at position (1.0, 1.0) is unchanged (since the x tile position is the same), and we add 4 times dpdy to the base parameter values of tile T(8, 0) (since the current tile is four scan lines farther down in y). Using pixel values from the base position (1.0, 1.0) for the current tile, we determine the pixel values for:
The raster coordinate (9, 4) having the grid position (1.0, 0.0), where 4 represents scan line 4 (y=4), and 9 is the leftmost position on this scan line.
(1.0, 1.0): GIVEN
(1.0, 0.0): add (0, -dpdy)
The raster coordinate (10, 4) having the grid position (2.0, 0.0), where 4 represents scan line 4 (y=4), and 10 is the next valid pixel sample on this scan line.
(1.0, 1.0): GIVEN
(2.0, 0.0): add (dpdx, -dpdy)
The raster coordinate (11, 4) having the grid position (3.0, 0.0), where 4 represents scan line 4 (y=4), and 11 is the rightmost position on this scan line.
(1.0, 1.0): GIVEN
(3.0, 0.0): add (2dpdx, -dpdy)
Next, the scan line is incremented by 1, so that y=5. At scan line 5 in tile T(8, 4), the valid samples comprise x={9, 10, 11}, and at scan line 6 in tile T(8, 4), the valid samples comprise x={8, 9, 10, 11}. The scan lines are rasterized as described above.
To steer to the next tile from tile T(8, 4), we subtract 4 from the x value of the tile to get tile T(4, 4). Since we know that tile T(4, 4) is the leftmost tile, we stop here after we obtain the pixel values. To steer to the next tile in the other direction from T(8, 4), we add 4 to the x value of the tile to get tile T(12, 4). Since we know this is the rightmost tile, we stop here after we obtain the pixel values. We steer to subsequent tiles until all the values in the FIFOs have been visited.
FIG. 9 is a flowchart illustrating a method to determine triangle boundaries in accordance with general embodiments of the invention. The method begins at block 900, and continues to block 902, where setup calculations, including a vertical parameter value delta, a horizontal parameter value delta, and initial base parameter values (the base parameter values at the tile containing the top-most vertex), are determined. At block 904, the leftmost and rightmost pixels for each scan line are determined and stored in the appropriate FIFOs. At block 906, the leftmost tile and rightmost tile for each scan line region are determined using the FIFO information. The method ends at block 908.
FIG. 10 is a flowchart illustrating a method to compute pixel parameter values for a scan line within a tile in accordance with general embodiments of the invention. The method begins at block 1000, and continues to block 1002, where the base parameter values for the tile are determined based on the initial base parameter values and the tile positions. At block 1004, the leftmost pixel and rightmost pixel for the scan line are determined using the FIFOs. This determines the first and last pixels to process on the scan line. At block 1006, parameter values for a first pixel are determined based on the grid position of the first pixel and on the base parameter values for the tile. At block 1008, parameter values for a second pixel are determined based on the parameter values for the first pixel.
The method ends at block 1010.
FIG. 11 is a flowchart illustrating a method to rasterize a triangle. The method begins at block 1100, and continues to block 1102, where the tile containing the top-most vertex is determined. At block 1104, the tile is rasterized by computing pixel values for a scan line in the tile as illustrated in FIG. 10 (if this is a new tile to visit, then start at block 1002; otherwise, start at block 1004). At block 1106, steer to the next scan line.
At block 1108, it is determined if the next scan line is less than or equal to the largest scan line. If so, then at block 1110, it is determined if the next scan line is in the next scan line region. If the next scan line is in the next scan line region, then at block 1112, determine the leftmost tile and rightmost tile for the previous scan line region using the FIFO information. If at block 1114 it is determined that the leftmost and rightmost tiles for the previous scan line region are the same, then processing proceeds at block 1104.
If it is determined at block 1108 that the next scan line is greater than the largest scan line, then the method ends at block 1118. If at block 1110 it is determined that the next scan line is not in the next scan line region, then processing proceeds at block 1104. If at block 1114 it is determined that the leftmost tile and rightmost tile are not the same, then at block 1116, steer to the next block in the left and right directions until both the leftmost and rightmost tiles are processed. From block 1116, process each tile at block 1104.
Conclusion
In the foregoing specification, the embodiments of the present invention have been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the embodiments of the present invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. For example, the calculations shown throughout the description are exemplary; there may be other ways of computing the needed variables, which are well known to one skilled in the art. |
III-N semiconductor-on-silicon integrated circuit structures and techniques are disclosed. In some cases, the structure includes a first semiconductor layer formed on a nucleation layer, the first semiconductor layer including a 3-D GaN layer on the nucleation layer and having a plurality of 3-D semiconductor structures, and a 2-D GaN layer on the 3-D GaN layer. The structure also may include a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer includes AlGaN on the 2-D GaN layer and a GaN layer on the AlGaN layer. Another structure includes a first semiconductor layer formed on a nucleation layer, the first semiconductor layer comprising a 2-D GaN layer on the nucleation layer, and a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer includes AlGaN on the 2-D GaN layer and a GaN layer on the AlGaN layer. |
CLAIMS What is claimed is: 1. An integrated circuit comprising: a crystalline silicon substrate; a nucleation layer on the substrate; and a first semiconductor layer formed on the nucleation layer, the first semiconductor layer comprising: a three-dimensional gallium nitride (GaN) layer on the nucleation layer and having a plurality of three-dimensional semiconductor structures; and a two-dimensional GaN layer on the three-dimensional GaN layer. 2. The integrated circuit of claim 1, wherein the nucleation layer comprises at least one of aluminum nitride (AlN), aluminum gallium nitride (AlGaN), and/or a combination of any of the aforementioned, and wherein the integrated circuit further comprises a patterned insulator layer on the nucleation layer, the patterned insulator layer comprising at least one of silicon dioxide (SiO2), silicon nitride (SiNx), tungsten dinitride (WN2), tungsten and titanium nitride, aluminum oxide (Al2O3), and/or a combination of any of the aforementioned. 3. The integrated circuit of claim 1 further comprising a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer comprises aluminum gallium nitride (AlGaN) on the two-dimensional GaN layer and a GaN layer on the AlGaN layer. 4. The integrated circuit of claim 3, wherein the second semiconductor layer includes multiple alternating layers of AlGaN and GaN. 5. The integrated circuit of claim 3, wherein the second semiconductor layer is within the two-dimensional GaN layer. 6. The integrated circuit of claim 1, wherein the three-dimensional GaN layer comprises at least one of a plurality of island-like semiconductor structures and/or a plurality of nanowires. 7. The integrated circuit of claim 1, wherein the substrate has a crystal orientation of [100]. 8. The integrated circuit of claim 1 further comprising a capping layer including at least one of AlGaN, aluminum indium nitride (AlInN), and/or indium gallium nitride (InGaN). 9. The integrated circuit of claim 1, wherein the integrated circuit exhibits at least one of a defect density of about 3x10^9/cm^2 or less, a surface crack density of about 200 cracks/mm^2 or fewer, and/or a root mean square (RMS) surface roughness of about 5 nm or less. 10. A system-on-chip comprising the integrated circuit of any of claims 1 through 9. 11. A mobile computing system comprising the integrated circuit of any of claims 1 through 9. 12. An integrated circuit comprising: a crystalline silicon substrate; a nucleation layer on the substrate; a first semiconductor layer formed on the nucleation layer, the first semiconductor layer comprising a two-dimensional gallium nitride (GaN) layer on the nucleation layer; and a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer comprises: an aluminum gallium nitride (AlGaN) layer on the two-dimensional GaN layer; and a GaN layer on the AlGaN layer. 13. The integrated circuit of claim 12, wherein the nucleation layer comprises at least one of aluminum nitride (AlN), aluminum gallium nitride (AlGaN), and/or a combination of any of the aforementioned. 14. The integrated circuit of claim 12, wherein the second semiconductor layer includes multiple alternating layers of AlGaN and GaN. 15. The integrated circuit of claim 12, wherein the second semiconductor layer is within the two-dimensional GaN layer. 16. The integrated circuit of claim 12, wherein the substrate has a crystal orientation of [100]. 17.
The integrated circuit of claim 12 further comprising a capping layer including at least one of AlGaN, aluminum indium nitride (AlInN), and/or indium gallium nitride (InGaN). 18. The integrated circuit of claim 12, wherein the integrated circuit exhibits at least one of a defect density of about 3x10^9/cm^2 or less, a surface crack density of about 200 cracks/mm^2 or fewer, and/or a root mean square (RMS) surface roughness of about 5 nm or less. 19. A system-on-chip comprising the integrated circuit of any of claims 12 through 18. 20. A mobile computing system comprising the integrated circuit of any of claims 12 through 18. 21. A method of forming an integrated circuit, the method comprising: forming a nucleation layer on a crystalline silicon substrate; and forming a first semiconductor layer on the nucleation layer, the first semiconductor layer comprising either: a three-dimensional gallium nitride (GaN) layer on the nucleation layer and having a plurality of three-dimensional semiconductor structures and a two-dimensional GaN layer on the three-dimensional GaN layer; or a two-dimensional GaN layer on the nucleation layer; wherein in response to the first semiconductor layer including a two-dimensional GaN layer on the nucleation layer, the method further comprises forming a second semiconductor layer on or within the first semiconductor layer, wherein the second semiconductor layer comprises an aluminum gallium nitride (AlGaN) layer on the two-dimensional GaN layer and a GaN layer on the AlGaN layer. 22. The method of claim 21 further comprising forming a patterned insulator layer on the nucleation layer prior to forming the first semiconductor layer, wherein the patterned insulator layer comprises at least one of silicon dioxide (SiO2), silicon nitride (SiNx), tungsten dinitride (WN2), tungsten and titanium nitride, aluminum oxide (Al2O3), and/or a combination of any of the aforementioned. 23. The method of claim 21, wherein forming the first semiconductor layer comprises an in-situ patterning process. 24. The method of claim 21, wherein forming the first semiconductor layer comprises an ex-situ patterning process. 25. The method of claim 21, wherein at least one semiconductor layer is formed using at least one of a molecular beam epitaxy (MBE) process and/or a metalorganic vapor phase epitaxy (MOVPE) process. |
III-N SEMICONDUCTOR-ON-SILICON STRUCTURES AND TECHNIQUES BACKGROUND Integrated circuit (IC) design in the deep-submicron process nodes (e.g., 32 nm and beyond) involves a number of non-trivial challenges, and gallium nitride (GaN)-on-silicon (Si) devices have faced particular complications. Continued process scaling will tend to exacerbate such problems. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1A is a side cross-sectional view of an integrated circuit (IC) configured in accordance with an embodiment of the present invention. Figure 1B is a side cross-sectional view of an IC configured in accordance with another embodiment of the present invention. Figure 1C is a side cross-sectional view of an IC configured in accordance with another embodiment of the present invention. Figure 1D is a side cross-sectional view of an IC configured in accordance with another embodiment of the present invention. Figure 2A is a cross-section view of an IC configured in accordance with an embodiment of the present invention. Figure 2B is a cross-section view of an IC configured in accordance with another embodiment of the present invention. Figure 3A is a cross-section view of an IC configured in accordance with an embodiment of the present invention. Figure 3B is a cross-section view of an IC configured in accordance with another embodiment of the present invention. Figure 4 illustrates a computing system implemented with integrated circuit structures or devices formed by one or more of the defect density and/or crack density reduction techniques disclosed herein, in accordance with an example embodiment of the present invention. As will be appreciated, the figures are not necessarily drawn to scale or intended to limit the claimed invention to the specific configurations shown. For instance, while some figures generally indicate straight lines, right angles, and smooth surfaces, an actual implementation of a given embodiment may have less than perfect straight lines, right angles, etc., and some features may have surface topology or otherwise be non-smooth, given real world limitations of integrated circuit (IC) fabrication. In short, the figures are provided merely to show example structures. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. These and other features of the present embodiments will be understood better by reading the following detailed description, taken together with the figures herein described. DETAILED DESCRIPTION III-N semiconductor-on-silicon integrated circuit structures and techniques are disclosed. In some cases, the structure includes a first semiconductor layer formed on a nucleation layer, the first semiconductor layer including a three-dimensional GaN layer on the nucleation layer and having a plurality of three-dimensional semiconductor structures, and a two-dimensional GaN layer on the three-dimensional GaN layer. The structure also may include a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer includes AlGaN on the two-dimensional GaN layer and a GaN layer on the AlGaN layer.
Another structure includes a first semiconductor layer formed on a nucleation layer, the first semiconductor layer comprising a two-dimensional GaN layer on the nucleation layer, and a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer includes AlGaN on the two-dimensional GaN layer and a GaN layer on the AlGaN layer. Some example structures which may be formed using the disclosed techniques may include, but are not necessarily limited to, gallium nitride-on-silicon (GaN-on-Si), aluminum gallium nitride-on-silicon (AlGaN-on-Si), aluminum indium nitride-on-silicon (AlInN-on-Si), etc. In some cases, a given structure provided using the disclosed techniques may exhibit, for example: (1) reduced defect density; (2) reduced surface crack density; and/or (3) improved surface smoothness (e.g., of the top/active layer of the structure). In some cases, defect density may be reduced and surface smoothness improved or otherwise preserved while simultaneously eliminating surface cracks altogether. Numerous configurations and variations will be apparent in light of this disclosure. General Overview As previously indicated, there are a number of non-trivial issues that can arise which complicate gallium nitride (GaN)-on-silicon (Si) devices. For instance, one non-trivial issue pertains to the fact that there is a lattice mismatch of approximately 42% between GaN and Si(100) (that is, silicon having a crystal orientation of [100]). The dissimilar lattices of these materials produce threading dislocation defects which inhibit epitaxial growth of low defect density III-N materials on Si(100). Another non-trivial issue pertains to the fact that there is a thermal mismatch of approximately 116% between GaN and Si. This large thermal mismatch, coupled with the high growth temperatures for GaN, results in an undesirably high surface crack density for the top/active epitaxial layers, making them unsuitable for device fabrication. These example complications have precluded use of GaN on Si(100), for example, in system-on-chip (SoC) high-voltage and radio frequency (RF) devices as well as in complementary metal-oxide-semiconductor (CMOS) transistors, among other applications. One possible approach to addressing these non-trivial issues might utilize multiple aluminum nitride (AlN) layers inserted between GaN growth on Si(100). However, as will be appreciated in light of this disclosure, this approach may be unsuccessful in preventing defects such as threading dislocations from migrating to the top of the resultant stack (e.g., the active layer of the device) and can lead to defect densities in the range of 3×10¹⁰/cm² or greater (e.g., as measured by plan view transmission electron microscopy or PVTEM). Furthermore, the surface smoothness can be severely compromised with the use of such AlN layers, leading to top/active layers with undesirably rough and pitted surfaces which generally are not suitable for device fabrication. Thus, and in accordance with an embodiment of the present invention, techniques are disclosed herein for providing III-N semiconductor-on-silicon structures. In some cases, the disclosed techniques can be used to provide an integrated circuit (IC) structure which includes a three-dimensional layer of a III-N semiconductor material (e.g., gallium nitride or GaN; aluminum gallium nitride or AlGaN; aluminum indium nitride or AlInN; etc.) 
that is formed, in the aggregate, from a plurality of three-dimensional semiconductor structures (e.g., islands, nanowires, etc.). This layer of three-dimensional semiconductor structures can be formed using any of a wide variety of techniques (e.g., deposition or epitaxial growth in a three-dimensional growth mode; in-situ patterning; ex-situ patterning; etc.), as discussed below. Thereafter, a two-dimensional layer of a semiconductor material (e.g., GaN, AlGaN, AlInN, etc.) may be grown, layer by layer, over the three-dimensional semiconductor layer, for example, to recover a desired degree of surface smoothness. In some instances, additional layers of similar and/or different semiconductor materials may be provided on such two-dimensional semiconductor layer, for example, to alter the stress state of the total structure. In some further instances, a capping layer of a similar and/or different semiconductor material optionally may be included, as desired for a given application or end use (e.g., electronic devices, optoelectronic applications, etc.). Numerous configurations will be apparent in light of this disclosure. In some cases, structures provided using the disclosed techniques may exhibit, for example: (1) reduced defect density; (2) reduced surface crack density; and/or (3) improved surface smoothness (e.g., of the top/active layer of the structure). Some structures provided using the disclosed techniques may exhibit a reduced defect density and improved or otherwise preserved surface smoothness while having substantially no surface cracks (or an otherwise minimal number of surface cracks). For instance, the disclosed techniques can be used, in one specific example embodiment, to provide a GaN-on-Si(100) structure (that is, GaN on silicon having a crystal orientation of [100]) having a defect density in the range of about 2-3×10⁹/cm² or less. In some such cases, such a reduction in defect density may be achieved while simultaneously reducing surface crack density. For instance, in some example cases, the surface crack density of such a GaN-on-Si(100) structure may be reduced to be in the range of less than or equal to about 200 cracks/mm² (e.g., about 150 cracks/mm² or fewer; about 100 cracks/mm² or fewer; about 50 cracks/mm² or fewer; about 10 cracks/mm² or fewer; about 5 cracks/mm² or fewer; etc.). It should be noted, however, that the claimed invention is not so limited, as in some other cases, surface cracks may be eliminated altogether (e.g., surface crack density may be approximately or equal to zero). In a more general sense, defect density and surface crack density can vary from one embodiment to the next, and the claimed invention is not intended to be limited to any particular range. Also, as previously noted, some embodiments of structures provided using the disclosed techniques may exhibit improved (or otherwise preserved) surface smoothness. For instance, the disclosed techniques can be used, in one example embodiment, to provide a GaN-on-Si(100) structure having a root mean square (RMS) surface roughness in the range of less than or equal to about 15 nm (e.g., about 12 nm or less; about 6 nm or less; about 3 nm or less; about 2 nm or less; about 1.5 nm or less; etc.), which may provide GaN-on-Si(100) structures, for example, suitable for any of a wide variety of device fabrication processes. Other defect density, surface crack density, and/or surface roughness ranges achievable using the disclosed techniques will depend on a given application and will be apparent in light of this disclosure. 
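To put the mismatch figures quoted earlier in concrete terms, the following Python sketch recomputes them from commonly tabulated material constants. The lattice constants, thermal expansion coefficients, and the exact mismatch definitions are editorial assumptions for illustration, not values taken from this disclosure, though they land close to the approximately 42% and 116% figures cited.

```python
# Back-of-envelope check of the GaN/Si mismatch figures, assuming commonly
# tabulated constants: a(GaN, a-axis) ~ 3.189 A, a(Si) ~ 5.431 A, and
# in-plane thermal expansion coefficients of ~5.6e-6/K (GaN) vs ~2.6e-6/K (Si).

A_GAN = 3.189      # angstroms, wurtzite GaN a-axis (assumed literature value)
A_SI = 5.431       # angstroms, cubic Si lattice constant (assumed literature value)
CTE_GAN = 5.6e-6   # 1/K (assumed literature value)
CTE_SI = 2.6e-6    # 1/K (assumed literature value)

# Mismatch defined here as a simple relative difference; the disclosure does
# not spell out its definition, so this is one plausible reading.
lattice_mismatch = (A_SI - A_GAN) / A_SI
thermal_mismatch = (CTE_GAN - CTE_SI) / CTE_SI

print(f"lattice mismatch: {lattice_mismatch:.1%}")   # ~41%, vs. ~42% quoted
print(f"thermal mismatch: {thermal_mismatch:.1%}")   # ~115%, vs. ~116% quoted
```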
As will be further appreciated in light of this disclosure, some embodiments of the present invention may be used in any of a wide variety of applications or end uses in any of a wide variety of fields, such as, but not limited to: wireless communications/transmissions; power management, conversion, and transmission; electric vehicles; light emitting diodes (LEDs), lasers, and other III-N optoelectronic devices; and/or solid state lighting (SSL). Some embodiments may be used, for example, in system-on-chip (SoC) circuits which may be used in any of a wide range of electronic devices including, but not limited to: smartphones; notebooks; tablets; personal computers (PCs); etc. Also, some embodiments of the present invention may be used in electronic devices, for example, which utilize direct battery high voltage switching transistors (e.g., power management ICs; DC-to-DC conversion in output filters and in drive circuitries; etc.). As will be further appreciated in light of this disclosure, in some cases the disclosed techniques can be used to fabricate GaN-based devices (e.g., electronics, LEDs/lasers, etc.) on a large area Si(100) substrate, which may reduce production cost and/or enable high volume manufacturing. Other suitable uses of one or more embodiments of the present invention will depend on a given application and will be apparent in light of this disclosure. As will be appreciated in light of this disclosure, and in accordance with an embodiment, use of the disclosed techniques/structure may be detected, for example, by visual or other inspection (e.g., scanning electron microscopy or SEM; transmission electron microscopy or TEM; etc.) and/or materials analysis (e.g., energy-dispersive X-ray spectroscopy or EDX; secondary ion mass spectrometry or SIMS; high-resolution TEM; etc.) of a given IC or other device that has a III-N semiconductor on silicon structure configured as described herein. Three-Dimensional and Two-Dimensional GaN Structure Figure 1A is a side cross-sectional view of an integrated circuit (IC) 100 configured in accordance with an embodiment of the present invention. As can be seen, IC 100 may include a substrate 110, a nucleation layer 120 disposed on substrate 110, a layer 130 of three-dimensional semiconductor structures disposed on nucleation layer 120, and a two-dimensional semiconductor layer 140 disposed on the three-dimensional semiconductor layer 130. As will be appreciated in light of this disclosure, IC 100 may include additional, fewer, and/or different elements or components from those here described, and the claimed invention is not intended to be limited to any particular IC configurations, but can be used with numerous configurations in numerous applications. In accordance with an embodiment, substrate 110 may have any of a wide range of configurations. For instance, some suitable configurations for substrate 110 may include, but are not limited to: (1) a bulk substrate; (2) a semiconductor-on-insulator (XOI, where X is a semiconductor material such as silicon, germanium, germanium-enriched silicon, etc.); (3) a wafer; (4) a multi-layered structure; and/or (5) any other suitable configuration, as will be apparent in light of this disclosure. Furthermore, and in accordance with an embodiment, substrate 110 may comprise any of a wide range of materials. 
Some example suitable materials for substrate 110 may include, but are not necessarily limited to: (1) silicon (Si) having a crystal orientation of [100]— hereinafter referred to as Si(100)— and optionally having an offcut towards the [110] direction of up to about 11° or less; (2) Si having a crystal orientation of [110]— hereinafter referred to as Si(110)— and optionally having an offcut towards the [111] direction of up to about 6° or less; and/or (3) Si having a crystal orientation of [111]— hereinafter referred to as Si(111). However, the claimed invention is not so limited, and other suitable materials, crystallographic orientations, and/or configurations for substrate 110 will depend on a given application and will be apparent in light of this disclosure. As previously noted, and in accordance with an embodiment, a nucleation layer 120 may be disposed on substrate 110, for example, to help begin growth on IC 100 of one or more layers of semiconductor material (e.g., one or more III-N semiconductor materials such as GaN, AlGaN, AlInN, etc., as discussed below). In some cases in which substrate 110 comprises Si(100), for example, nucleation layer 120 may comprise a semiconductor material such as, but not limited to, aluminum nitride (AlN), AlGaN, an alloy of any of the aforementioned, and/or a combination of any of the aforementioned. However, the claimed invention is not so limited, and other suitable materials for nucleation layer 120 will depend on a given material composition of substrate 110 and/or layer 130 (discussed below) and will be apparent in light of this disclosure. In a more general sense, layer 120 may be any material suitable for providing nucleation sites to layer 130. In accordance with an embodiment, nucleation layer 120 may be formed (e.g., deposited, grown, etc.) on substrate 110 using any of a wide range of techniques. Some example suitable formation techniques may include, but are not limited to, molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE), etc. Also, and in accordance with an embodiment, nucleation layer 120 may be provided with any given thickness, as desired for a given application or end use. In some embodiments, nucleation layer 120 may have a thickness in the range of about a monolayer to about 300 nm or greater (e.g., about 100-200 nm or greater, or any other sub-range within the range of about 1-300 nm or greater). In some cases, nucleation layer 120 may have a substantially uniform thickness across the topology provided by the underlying substrate 110. However, the claimed invention is not so limited, as in some other instances, nucleation layer 120 may be provided with a non-uniform or otherwise varying thickness over such topology. For instance, in some cases a first portion of nucleation layer 120 may have a thickness within a first range while a second portion thereof has a thickness within a second, different range. Other suitable formation techniques and/or thickness ranges for nucleation layer 120 will depend on a given application and will be apparent in light of this disclosure. As previously noted, and in accordance with an embodiment, a three-dimensional semiconductor layer 130 may be disposed on nucleation layer 120. 
In some cases, semiconductor layer 130 may comprise, for example, a III-N semiconductor material such as, but not limited to: (1) gallium nitride (GaN); (2) aluminum gallium nitride (AlGaN) having an Al concentration in the range of about 0% to 10% (e.g., about 5% or less); (3) aluminum indium nitride (AlInN) having an Al concentration in the range of about 0% to 10% (e.g., about 5% or less); and/or (4) a combination of any of the aforementioned. Other suitable materials for three-dimensional semiconductor layer 130 will depend on a given material composition of nucleation layer 120 and/or application of IC 100 and will be apparent in light of this disclosure. In accordance with an embodiment, three-dimensional semiconductor layer 130 may have any of a wide range of configurations. For example, three-dimensional semiconductor layer 130 may comprise, in accordance with an embodiment, a plurality of three-dimensional semiconductor structures (e.g., island-like structures 130a, nanowires 130b, etc., discussed below) which, in the aggregate, define a three-dimensional layer of one or more semiconductor materials on nucleation layer 120. Furthermore, and in accordance with an embodiment, three-dimensional semiconductor layer 130 may be provided with any thickness, as desired for a given application or end use. For instance, in some example embodiments, three-dimensional semiconductor layer 130 may have a thickness in the range of about 1-250 nm or greater (e.g., about 50-100 nm or greater; about 100-150 nm or greater; about 150-200 nm or greater; about 200-250 nm or greater; or any other sub-range within the range of about 1-250 nm or greater). As will be appreciated in light of this disclosure, and in accordance with an embodiment, three-dimensional semiconductor layer 130 may be provided as a generally discontinuous layer (e.g., by virtue of its constituent structures 130a, 130b, etc., discussed below). The thickness of three-dimensional semiconductor layer 130 may be varied as desired across the underlying topology (e.g., provided by the underlying nucleation layer 120). Other suitable structural configurations and/or thickness ranges for three-dimensional semiconductor layer 130 will depend on a given application and will be apparent in light of this disclosure. As can be seen from Figure 1A, for example, in some cases, three-dimensional semiconductor layer 130 may comprise a plurality of island-like semiconductor structures 130a. In accordance with an embodiment, the island-like structures 130a may be located sufficiently proximate one another so as to generally overlap or otherwise merge with one another while remaining substantially discrete so as not to form a continuous layer across the underlying topology of nucleation layer 120. In accordance with an embodiment, the plurality of island-like structures 130a may be formed on nucleation layer 120 using any of a wide range of techniques, as discussed below. In some instances, a given island-like structure 130a may exhibit a generally polygonal cross-sectional geometry (e.g., an approximately hexagonal cross-sectional geometry as viewed from a top-down vantage point). However, the claimed invention is not so limited, and some other embodiments may include a three-dimensional semiconductor layer 130 of island-like structures 130a of non-polygonal (e.g., curved, articulated, etc.) cross-sectional geometry. 
Also, in some cases, a given island-like structure 130a may have a width (e.g., as determined between the most distal vertices thereof) or diameter, for example, in the range of about 1-200 nm or greater. As previously noted, three-dimensional semiconductor layer 130 may have a thickness in the range of about 1-250 nm, in some example cases, and thus in some such instances, a given island-like structure 130a may have a height/depth in the range of about 1-250 nm or greater (e.g., about 100 nm or greater). Other suitable geometries and/or dimensions for island-like structures 130a will depend on a given application and will be apparent in light of this disclosure. In accordance with an embodiment, the island-like structures 130a of three-dimensional semiconductor layer 130 may be formed (e.g., deposited, grown, etc.) on nucleation layer 120 using any of a wide range of techniques. For instance, in some embodiments (e.g., such as that depicted by Figure 1A), a three-dimensional semiconductor layer 130 comprising island-like semiconductor structures 130a can be formed by deposition or epitaxial growth in a three-dimensional growth mode using processes such as, but not limited to, molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE), etc. The formation of three-dimensional semiconductor layer 130 using such processes may be controlled, in part or in whole, by adjusting one or more growth parameters, in accordance with an embodiment. For example, when providing a three-dimensional semiconductor layer 130 comprising a plurality of island-like GaN structures 130a, it may be desirable to: (1) provide a gas flow having a low V/III ratio of ammonia (NH₃) to trimethylgallium (Ga(CH₃)₃ or TMGa); (2) provide a low growth temperature (e.g., in the range of about 500-800 °C or lower); and/or (3) provide a high growth pressure (e.g., in the range of about 100-200 torr or greater). Other suitable parameter ranges for providing a three-dimensional semiconductor layer 130 of GaN or other semiconductor material(s) will depend on a given application and will be apparent in light of this disclosure. In some other example embodiments, a three-dimensional semiconductor layer 130 comprising island-like semiconductor structures 130a can be formed by being forced to grow in a three-dimensional mode by in-situ patterning. For instance, consider Figure 1B, which is a side cross-sectional view of an IC 100 including a three-dimensional semiconductor layer 130 formed from a plurality of island-like structures 130a formed by in-situ patterning, in accordance with an embodiment of the present invention. As can be seen, IC 100 optionally may include an insulator layer 124 disposed on nucleation layer 120. In some cases in which nucleation layer 120 comprises AlN, for example, insulator layer 124 may comprise an insulator material such as, but not limited to, silicon dioxide (SiO₂), silicon nitride (SiNₓ), tungsten dinitride (WN₂), tungsten and titanium nitride, aluminum oxide (Al₂O₃), etc. Other suitable insulator materials for insulator layer 124 will depend on a given material composition of nucleation layer 120 and/or application of IC 100 and will be apparent in light of this disclosure. In accordance with an embodiment, insulator layer 124 may be formed (e.g., deposited, grown, etc.) on nucleation layer 120, for example, using any of a wide range of techniques, including, but not limited to, metalorganic vapor phase epitaxy (MOVPE), etc. 
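Stepping back to the growth-mode parameters described above, the two parameter windows are easiest to compare side by side. The following sketch is purely illustrative bookkeeping: the dictionary keys and helper are hypothetical, and the two-dimensional "recovery" entry anticipates the layer 140 growth parameters discussed later in this disclosure.

```python
# Illustrative MOVPE parameter windows for the two GaN growth modes described
# in this disclosure. Dictionary keys and the helper are hypothetical; only
# the numeric windows come from the text (3D island mode: low V/III,
# ~500-800 C, ~100-200 torr; 2D recovery mode: ~1-10x higher V/III,
# ~800-1100 C, ~10-100 torr).

RECIPES = {
    "gan_3d_island_mode": {
        "v_iii_ratio": "low (NH3 to TMGa)",
        "growth_temperature_c": (500, 800),   # low temperature favors 3D islands
        "growth_pressure_torr": (100, 200),   # high pressure favors 3D growth
    },
    "gan_2d_recovery_mode": {
        "v_iii_ratio": "about 1-10x the 3D-mode ratio",
        "growth_temperature_c": (800, 1100),  # high temperature favors layer-by-layer growth
        "growth_pressure_torr": (10, 100),    # low pressure favors 2D growth
    },
}

def describe(mode: str) -> str:
    """Render one growth-mode window as a human-readable line."""
    p = RECIPES[mode]
    t_lo, t_hi = p["growth_temperature_c"]
    p_lo, p_hi = p["growth_pressure_torr"]
    return f"{mode}: {t_lo}-{t_hi} C, {p_lo}-{p_hi} torr, V/III {p['v_iii_ratio']}"

for mode in RECIPES:
    print(describe(mode))
```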
In some cases, insulator layer 124 may be formed as a plurality of small features (e.g., in-situ islands, patches, etc.) which may help to ensure that subsequent formation of semiconductor layer 130 is three-dimensional (e.g., consists of a plurality of island-like semiconductor structures 130a), in accordance with an embodiment. In some example instances, these small, patchy features of insulator layer 124 may have a thickness (e.g., a height/depth) in the range of about 10 nm or less (e.g., about 5-10 nm or less; about 1-5 nm or less; a monolayer; etc.). By virtue of providing such an optional insulator layer 124, the island-like structures 130a may be caused to grow or otherwise form between the features thereof, as can be seen from Figure 1B. Other suitable configurations, geometries, and/or thicknesses for insulator layer 124 will depend on a given application and will be apparent in light of this disclosure. It should be noted, however, that the claimed invention is not limited to only a three-dimensional semiconductor layer 130 comprising a plurality of island-like semiconductor structures 130a. For instance, in some cases, semiconductor layer 130 alternatively may comprise a plurality of nanowire structures 130b formed by being forced to grow in a three-dimensional mode by ex-situ patterning, as discussed below. For example, consider Figure 1C, which is a side cross-sectional view of an IC 100 including a three-dimensional semiconductor layer 130 formed from a plurality of nanowires 130b formed by ex-situ patterning, in accordance with an embodiment of the present invention. As can be seen, in some embodiments, IC 100 optionally may include an insulator layer 126 disposed on nucleation layer 120 and patterned with one or more gap features 126a. In some cases in which nucleation layer 120 comprises AlN, for example, insulator layer 126 may comprise an insulator material such as, but not limited to, silicon dioxide (SiO₂), silicon nitride (SiNₓ), tungsten dinitride (WN₂), tungsten and titanium nitride, aluminum oxide (Al₂O₃), etc. Other suitable insulator materials for insulator layer 126 will depend on a given material composition of nucleation layer 120 and/or semiconductor layer 130 and/or application of IC 100 and will be apparent in light of this disclosure. In accordance with an embodiment, insulator layer 126 may be formed (e.g., deposited, grown, etc.) on nucleation layer 120, for example, using any of a wide range of techniques, including, but not limited to, metalorganic vapor phase epitaxy (MOVPE), etc. In some cases, insulator layer 126 may be patterned with one or more gap features 126a which may help to ensure that subsequent formation of semiconductor layer 130 is three-dimensional (e.g., consists of a plurality of nanowires 130b), in accordance with an embodiment. As will be appreciated in light of this disclosure, and in accordance with an embodiment, the dimensions of a given gap feature 126a may be customized as desired, and in some example instances may have a width in the range of about 1-250 nm or greater. In some instances, a given gap feature 126a may have a height/depth in the range of about 1-250 nm or greater. By virtue of providing such an optional insulator layer 126, the nanowires 130b may be caused to grow or otherwise form within gap features 126a and to broaden/expand therefrom, as can be seen from Figure 1C. 
Other suitable configurations, geometries, and/or thicknesses for insulator layer 126 will depend on a given application and will be apparent in light of this disclosure. As will be appreciated in light of this disclosure, the dimensions of a given nanowire 130b may depend, at least in part, on the dimensions of the given gap feature 126a from which it is formed. Thus, in some cases, a given nanowire 130b may have a width in the range of about 1-250 nm or greater. Also, in some embodiments, a given nanowire 130b may have a height/depth in the range of about 1-250 nm or greater. Other suitable dimensions for a given nanowire 130b will depend on a given application and will be apparent in light of this disclosure. By virtue of its configuration, and in accordance with an embodiment, three-dimensional semiconductor layer 130 (e.g., with its constituent plurality of island-like structures 130a, nanowires 130b, etc.) may serve to help reduce the defect density of IC 100. To illustrate, consider Figure 1D, which is a side cross-sectional view of an IC 100 configured in accordance with an embodiment of the present invention. As can be seen, threading dislocations may be bent/terminated (e.g., annihilated or otherwise curtailed) due to dislocation interaction at any of the various interfaces where the three-dimensional semiconductor structures of semiconductor layer 130 merge/overlap. Thus, by virtue of its configuration, three-dimensional semiconductor layer 130 may function to arrest/trap threading dislocation defects near substrate 110 (e.g., within the first 20-200 nm of three-dimensional semiconductor layer 130), thereby preventing or otherwise reducing the ability of such defects to migrate through IC 100 to the top/active layer thereof. As will be appreciated in light of this disclosure, a reduction in the number of threading dislocations which are permitted to migrate to the top/active layer of IC 100 may yield a reduction in the density of surface cracks at the top/active layer of IC 100, which in turn may improve or otherwise enhance device performance, reliability, and/or yield. Furthermore, in some embodiments, three-dimensional semiconductor layer 130 may help to reduce the tensile strain state of IC 100 post-cooling. As previously noted, IC 100 may include a two-dimensional semiconductor layer 140 on three-dimensional semiconductor layer 130, in accordance with an embodiment. In some cases, two-dimensional semiconductor layer 140 may comprise, for example, a III-N semiconductor material such as, but not limited to: (1) gallium nitride (GaN); (2) aluminum gallium nitride (AlGaN) having an Al concentration in the range of about 0% to 20% (e.g., about 10% or less); and/or (3) a combination of any of the aforementioned. However, the claimed invention is not so limited, and other suitable materials for a given two-dimensional semiconductor layer 140 will depend on a given material composition of three-dimensional semiconductor layer 130 and/or application of IC 100 and will be apparent in light of this disclosure. In accordance with an embodiment, two-dimensional semiconductor layer 140 may be formed (e.g., deposited, grown, etc.), for example, layer by layer in a substantially two-dimensional fashion on the topology presented by the underlying three-dimensional semiconductor layer 130 using any of a wide range of techniques. Some example suitable formation techniques include, but are not limited to, molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE), etc. 
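Before turning to the thickness and growth parameters of layer 140, a compact summary of the IC 100 stack so far may help keep the reference numerals straight. The sketch below is editorial scaffolding only: the class, field names, and tuple convention are hypothetical, while the materials and thickness ranges echo the ones given in the text.

```python
# Toy record of the IC 100 stack of Figures 1A-1D (bottom to top). Thickness
# ranges in nm are the ones given in the text; None marks unspecified values.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Layer:
    label: str                                  # reference numeral from the figures
    material: str
    thickness_nm: Tuple[Optional[float], Optional[float]]
    morphology: str                             # "3D" = islands/nanowires, "2D" = continuous

IC_100_STACK = [
    Layer("substrate 110", "Si(100), optionally offcut", (None, None), "bulk"),
    Layer("nucleation layer 120", "AlN or AlGaN", (0.3, 300.0), "2D"),  # ~monolayer to ~300 nm
    Layer("3D layer 130", "GaN islands 130a / nanowires 130b", (1.0, 250.0), "3D"),
    Layer("2D layer 140", "GaN (or AlGaN up to ~20% Al)", (5.0, 5000.0), "2D"),
]

for layer in IC_100_STACK:
    print(f"{layer.label:22s} {layer.material} [{layer.morphology}]")
```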
Also, and in accordance with an embodiment, two-dimensional semiconductor layer 140 may be provided with any given thickness, as desired for a given application or end use. For instance, two-dimensional semiconductor layer 140 may be provided as a monolayer (e.g., having the thickness of a single atom/molecule of the semiconductor material utilized) in some embodiments, while in some other embodiments layer 140 may have a thickness in the range of about 5 nm to 5 μm or greater (e.g., in the range of about 1.2-1.5 μm or greater, or any other sub-range within the range of about 5 nm to 5 μm). Other suitable formation techniques and/or thickness ranges for two-dimensional semiconductor layer 140 will depend on a given application and will be apparent in light of this disclosure. The formation of two-dimensional semiconductor layer 140 using such processes may be controlled, in part or in whole, by adjusting one or more growth parameters, in accordance with an embodiment. For example, when providing a two-dimensional semiconductor layer 140 comprising GaN, it may be desirable to: (1) provide a gas flow having a high V/III ratio of ammonia (NH₃) to trimethylgallium (Ga(CH₃)₃ or TMGa) (e.g., in the range of about one to ten times the V/III ratio utilized, for example, in formation of a three-dimensional semiconductor layer 130 comprising a plurality of island-like GaN structures 130a, as previously discussed); (2) provide a high growth temperature (e.g., in the range of about 800-1100 °C or greater); and/or (3) provide a low growth pressure (e.g., in the range of about 10-100 torr or lower). Other suitable growth parameter ranges for providing a two-dimensional semiconductor layer 140 of GaN or other semiconductor material(s) will depend on a given application and will be apparent in light of this disclosure. By virtue of its configuration, a given two-dimensional semiconductor layer 140 may help, in accordance with an embodiment, to recover a desired degree of surface smoothness for IC 100 (e.g., which may have been lost due to the comparatively rough surface topology presented by the island-like structures 130a, nanowire structures 130b, etc., of three-dimensional semiconductor layer 130). As compared with existing designs/structures, some example embodiments of an IC 100 having a three-dimensional semiconductor layer 130 and an overlying two-dimensional semiconductor layer 140 may exhibit: (1) a reduced defect density; (2) a reduced surface crack density; and/or (3) improved (or otherwise preserved) surface smoothness (e.g., of the top/active layer of the structure). For instance, in some cases, IC 100 may exhibit a defect density in the range of about 2-3×10⁹/cm². Also, in some cases, IC 100 may exhibit a surface crack density of less than or equal to about 200 cracks/mm² (e.g., about 150 cracks/mm² or fewer; about 100 cracks/mm² or fewer; about 50 cracks/mm² or fewer; about 10 cracks/mm² or fewer; about 5 cracks/mm² or fewer; etc.). Furthermore, in some cases, IC 100 may exhibit a root mean square (RMS) surface roughness of less than or equal to about 5 nm (e.g., about 2 nm or less; about 1.8 nm or less; about 1.6 nm or less; etc.). Multiple AlN Interlayer Structure Figure 2A is a cross-section view of an integrated circuit (IC) 200a configured in accordance with an embodiment of the present invention. 
As can be seen, IC 200a may include a substrate 110, a nucleation layer 120 disposed on substrate 110, and a two-dimensional semiconductor layer 140 disposed on nucleation layer 120. As will be appreciated in light of this disclosure, the discussion of suitable materials, formation techniques/processes, and configurations for substrate 110, nucleation layer 120, and semiconductor layer 140 provided above with reference to Figures 1A-1D may be applied equally here. As can further be seen, and in accordance with an embodiment, one or more semiconductor layers 150 (150a, 150b, etc.) may be provided (e.g., stacked together in an adjacent or otherwise neighboring fashion) on semiconductor layer 140, and a final semiconductor layer 160' (discussed below) may be disposed on the last or otherwise uppermost of such semiconductor layers 150. As will be further appreciated in light of this disclosure, IC 200a may include additional, fewer, and/or different elements or components from those here described (e.g., in some embodiments, IC 200a may not include any semiconductor layers 150 and/or a final semiconductor layer 160'), and the claimed invention is not intended to be limited to any particular IC configurations, but can be used with numerous configurations in numerous applications. In accordance with an embodiment, a given semiconductor layer 150 (150a, 150b, etc.) may comprise any of a wide range of semiconductor materials. Some example suitable materials may include, but are not necessarily limited to: (1) aluminum gallium nitride (AlGaN); (2) aluminum indium nitride (AlInN); (3) gallium nitride (GaN); and/or (4) a combination of any of the aforementioned. Other suitable materials for a given semiconductor layer 150 (150a, 150b, etc.) will depend on a given material composition of the underlying and/or otherwise adjacent layer (e.g., semiconductor layer 140, a neighboring semiconductor layer 150, etc.) and/or application of IC 200a and will be apparent in light of this disclosure. As will be appreciated in light of this disclosure, as the temperature of IC 200a decreases (e.g., is ramped down during the fabrication process), the stacked structure may come under tensile stress, for example, due to the thermal mismatch of the semiconductor material of layer 140 and substrate 110 (e.g., in some cases in which GaN and Si are utilized, the thermal mismatch therebetween may be about 116% or greater, as previously noted). However, inclusion of the one or more semiconductor layers 150 (150a, 150b, etc.) may serve to induce compressive stress, for example, in the two-dimensional semiconductor layer 140 and thus aid in changing the stress state of the structure to a compressive one at the end of fabrication of IC 200a (e.g., during cool-down thereof after epitaxial growth). By virtue of this balancing between tensile and compressive stresses, surface cracks in the top/active layer of IC 200a may be eliminated altogether, in some cases, or otherwise substantially reduced. In accordance with an embodiment, a given semiconductor layer 150 (150a, 150b, etc.) may be formed (e.g., deposited, grown, etc.) on an underlying layer using any of a wide range of techniques. For instance, in some cases, a given semiconductor layer 150 may be formed by epitaxial growth using processes such as, but not limited to, molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE), etc. 
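The tensile stress on cool-down described above follows, to first order, from the thermal-mismatch strain, roughly (film expansion coefficient minus substrate expansion coefficient) times the temperature drop. The sketch below puts rough numbers on it; the expansion coefficients and growth temperature are editorial assumptions, not figures recited in this disclosure.

```python
# First-order estimate of the post-growth tensile strain that the compressive
# interlayers 150 are meant to offset. Assumed values (not from this
# disclosure): in-plane CTEs of ~5.6e-6/K for GaN and ~2.6e-6/K for Si,
# growth near 1000 C, cooldown to room temperature.

ALPHA_GAN = 5.6e-6  # 1/K, assumed
ALPHA_SI = 2.6e-6   # 1/K, assumed

def cooldown_strain(t_growth_c: float, t_room_c: float = 25.0) -> float:
    """Thermal-mismatch strain in the film after cooling; positive = tensile."""
    return (ALPHA_GAN - ALPHA_SI) * (t_growth_c - t_room_c)

print(f"{cooldown_strain(1000.0):.2e}")  # ~2.9e-03, i.e. roughly 0.3% tensile strain
```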
As will be appreciated in light of this disclosure, and in accordance with an embodiment, the formation of a given semiconductor layer 150 using such processes may be controlled, in part or in whole, by adjusting one or more of the growth parameters, including, but not limited to: (1) gas flow; (2) growth temperature; and/or (3) pressure. For instance, to aid in reducing surface cracks, it may be desirable, in some cases, to form a given semiconductor layer 150 at a growth temperature in the range of about 250-1000 °C or lower (e.g., about 500-600 °C; about 600-700 °C; about 700-800 °C; or any other sub-range within the range of about 500-800 °C). Other suitable techniques for providing a given semiconductor layer 150 will depend on a given application and will be apparent in light of this disclosure. In accordance with an embodiment, a given semiconductor layer 150 (150a, 150b, etc.) may be provided with any thickness, as desired for a given application or end use. In some embodiments, a given semiconductor layer 150 may have a thickness, for example, in the range of about 1-100 nm or greater (e.g., about 20 nm or less; about 50 nm or less; about 80 nm or less; or any other sub-range within the range of about 1-100 nm or greater). In some example cases in which a given semiconductor layer 150 comprises AlGaN having a high concentration of Al (e.g., greater than about 5%), for instance, such semiconductor layer 150 may have a thickness in the range of about 1-20 nm or less. In some example cases in which a given semiconductor layer 150 comprises AlGaN having a low concentration of Al (e.g., less than or equal to about 5%), for instance, such semiconductor layer 150 may have a thickness in the range of about 10-100 nm or less. As will be appreciated in light of this disclosure, any quantity of semiconductor layers 150 may be stacked together in IC 200a. In some cases, a given semiconductor layer 150 may have a substantially uniform thickness across the topology provided by an underlying layer (e.g., a two-dimensional semiconductor layer 140, a neighboring semiconductor layer 150, etc.). However, the claimed invention is not so limited, as in some other instances, a given semiconductor layer 150 may be provided with a non-uniform or otherwise varying thickness over such topology. For instance, in some cases a first portion of a semiconductor layer 150 may have a thickness within a first range while a second portion thereof has a thickness within a second, different range. Other suitable formation techniques and/or thickness ranges for a given individual and/or stack of semiconductor layers 150 (150a, 150b, etc.) will depend on a given application and will be apparent in light of this disclosure. In some cases, and in accordance with an embodiment, one or more additional two-dimensional semiconductor layers may be dispersed in a stacked configuration like that of IC 200a. For instance, consider Figure 2B, which is a cross-section view of an integrated circuit (IC) 200b configured in accordance with an embodiment of the present invention. As can be seen, IC 200b is configured in much the same way as IC 200a with an example difference being that the semiconductor layers 150 (150a, 150b, etc.) of IC 200b may be provided in a dispersed configuration by virtue of including a two-dimensional semiconductor layer 160 (160a, 160b, etc.) between neighboring semiconductor layers 150. 
For instance, a first two-dimensional semiconductor layer 160a may be disposed between neighboring semiconductor layers 150a and 150b, a second two-dimensional semiconductor layer 160b may be disposed between neighboring semiconductor layers 150b and 150c, and so on as desired. As can further be seen, a final semiconductor layer 160' may be disposed on the last of such semiconductor layers 150 (150a, 150b, etc.) of IC 200b. As will be appreciated in light of this disclosure, IC 200b may include additional, fewer, and/or different elements or components from those here described, and the claimed invention is not intended to be limited to any particular IC configurations, but can be used with numerous configurations in numerous applications. In accordance with an embodiment, the discussion provided above with reference to Figures 1A-1D of the materials, formation techniques/processes, and configurations for two-dimensional semiconductor layer 140 may be applied equally here in the context of the one or more semiconductor layers 160 (160a, 160b, 160', etc.). Also, in accordance with an embodiment, a given semiconductor layer 160 may be provided with any given thickness, as desired for a given application or end use. In some embodiments, a given semiconductor layer 160 may have a thickness in the range of about 10-1000 nm or greater. Other suitable materials, formation techniques/processes, thicknesses, and/or configurations for a given semiconductor layer 160 (160a, 160b, 160', etc.) will depend on a given application and will be apparent in light of this disclosure. Three-Dimensional and Two-Dimensional GaN with Multiple AlN Interlayer Structure In some cases, and in accordance with an embodiment, the structure of IC 100 may be integrated with the structure of IC 200a/200b to provide an IC 300a/300b (discussed below) which may exhibit, for example: (1) a reduced defect density; (2) a reduced surface crack density (e.g., no cracks or an otherwise minimal presence thereof); and/or (3) a substantially smooth top/active layer surface. Figure 3A is a cross-section view of an integrated circuit (IC) 300a configured in accordance with an embodiment of the present invention. As can be seen, IC 300a may include a substrate 110, a nucleation layer 120 disposed on substrate 110, a three-dimensional semiconductor layer 130 disposed on nucleation layer 120, and a two-dimensional semiconductor layer 140 disposed on three-dimensional semiconductor layer 130, as similarly discussed above in the context of Figures 1A-1D. As will be appreciated in light of this disclosure, the discussion of suitable materials, formation techniques/processes, and configurations for substrate 110, nucleation layer 120, three-dimensional semiconductor layer 130, and two-dimensional semiconductor layer 140 provided above with reference to Figures 1A-1D and Figures 2A-2B may be applied equally here. As can be further seen from Figure 3A, in some embodiments, IC 300a may include one or more semiconductor layers 150 (150a, 150b, etc.) disposed on the two-dimensional semiconductor layer 140. In some embodiments, IC 300a may include a final semiconductor layer 160' disposed on the last or otherwise uppermost of the one or more semiconductor layers 150. Furthermore, in some embodiments, IC 300a may include an optional capping layer 170 (discussed below) disposed on the final semiconductor layer 160'. 
As will be appreciated in light of this disclosure, IC 300a may include additional, fewer, and/or different elements or components from those here described, and the claimed invention is not intended to be limited to any particular IC configurations, but can be used with numerous configurations in numerous applications. Figure 3B is a cross-section view of an integrated circuit (IC) 300b configured in accordance with an embodiment of the present invention. As can be seen, IC 300b is configured in much the same way as IC 300a with an example difference being that the semiconductor layers 150 (150a, 150b, etc.) of IC 300b may be provided in a dispersed configuration by virtue of including a two-dimensional semiconductor layer 160 (160a, 160b, etc.) between neighboring semiconductor layers 150. For instance, a first two-dimensional semiconductor layer 160a may be disposed between neighboring semiconductor layers 150a and 150b, a second two-dimensional semiconductor layer 160b may be disposed between neighboring semiconductor layers 150b and 150c, and so on as desired. As can further be seen, a final semiconductor layer 160' may be disposed on the last of such semiconductor layers 150 (150a, 150b, etc.) of IC 300b. Still further, in some embodiments, IC 300b may include an optional capping layer 170 (discussed below) disposed on the final semiconductor layer 160'. As will be appreciated in light of this disclosure, IC 300b may include additional, fewer, and/or different elements or components from those here described, and the claimed invention is not intended to be limited to any particular IC configurations, but can be used with numerous configurations in numerous applications. As previously noted, and as can be seen from Figures 3A-3B, IC 300a/300b optionally may include a capping layer 170 disposed on the final semiconductor layer 160'. As will be appreciated in light of this disclosure, and in accordance with an embodiment, the optional capping layer 170 may be customized as desired for a given application or end use of IC 300a/300b. For instance, in some cases (e.g., such as in electronic device applications), a capping layer 170 comprising aluminum indium nitride (AlInN) or AlGaN may be provided. In some other cases (e.g., such as for optoelectronics applications), a capping layer 170 comprising indium gallium nitride (InGaN) or AlGaN may be provided. Other suitable materials for a given optional capping layer 170 will depend on a given application and will be apparent in light of this disclosure. In accordance with an embodiment, optional capping layer 170 may be formed (e.g., deposited, grown, etc.) on the final semiconductor layer 160' using any of a wide range of techniques. Some example suitable formation techniques include, but are not limited to, molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE), etc. Also, and in accordance with an embodiment, optional capping layer 170 may be provided with any given thickness, as desired for a given application or end use. In some embodiments, optional capping layer 170 may have a thickness in the range of about 1-50 nm or greater (e.g., about 2-25 nm or greater, or any other sub-range within the range of about 1-50 nm). In some cases, optional capping layer 170 may have a substantially uniform thickness across the topology provided by the underlying final semiconductor layer 160'. 
However, the claimed invention is not so limited, as in some other instances, optional capping layer 170 may be provided with a non-uniform or otherwise varying thickness over such topology. For instance, in some cases a first portion of optional capping layer 170 may have a thickness within a first range while a second portion thereof has a thickness within a second, different range. Other suitable formation techniques and/or thickness ranges for optional capping layer 170 will depend on a given application and will be apparent in light of this disclosure. Example System Figure 4 illustrates a computing system 1000 implemented with integrated circuit structures or devices formed by one or more of the defect density and/or crack density reduction techniques disclosed herein, in accordance with an example embodiment of the present invention. As can be seen, the computing system 1000 houses a motherboard 1002. The motherboard 1002 may include a number of components, including, but not limited to, a processor 1004 and at least one communication chip 1006, each of which can be physically and electrically coupled to the motherboard 1002, or otherwise integrated therein. As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board, whether a main board, a daughterboard mounted on a main board, or the only board of system 1000, etc. Depending on its applications, computing system 1000 may include one or more other components that may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in computing system 1000 may include one or more integrated circuit structures or devices formed by one or more of the defect density and/or crack density reduction techniques disclosed herein in accordance with an example embodiment of the present invention. In some embodiments, multiple functions can be integrated into one or more chips (e.g., for instance, note that the communication chip 1006 can be part of or otherwise integrated into the processor 1004). The communication chip 1006 enables wireless communications for the transfer of data to and from the computing system 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1006 may implement any of a number of wireless standards or protocols, including, but not limited to, Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. 
The computing system 1000 may include a plurality of communication chips 1006. For instance, a first communication chip 1006 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 1006 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 1004 of the computing system 1000 includes an integrated circuit die packaged within the processor 1004. In some embodiments of the present invention, the integrated circuit die of the processor includes onboard memory circuitry that is implemented with one or more integrated circuit structures or devices formed by one or more of the defect density and/or crack density reduction techniques, as variously described herein. The term "processor" may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 1006 also may include an integrated circuit die packaged within the communication chip 1006. In accordance with some such example embodiments, the integrated circuit die of the communication chip includes one or more integrated circuit structures or devices formed by one or more of the defect density and/or crack density reduction techniques as described herein. As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into processor 1004, rather than having separate communication chips). Further note that processor 1004 may be a chip set having such wireless capability. In short, any number of processor 1004 and/or communication chips 1006 can be used. Likewise, any one chip or chip set can have multiple functions integrated therein. In various implementations, the computing device 1000 may be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, a digital video recorder, or any other electronic device that processes data or employs one or more integrated circuit structures or devices formed by one or more of the defect density and/or crack density reduction techniques, as variously described herein. Numerous embodiments will be apparent in light of this disclosure. One example embodiment of the present invention provides an integrated circuit including a crystalline silicon substrate, a nucleation layer on the substrate, and a first semiconductor layer formed on the nucleation layer, the first semiconductor layer including a three-dimensional gallium nitride (GaN) layer on the nucleation layer and having a plurality of three-dimensional semiconductor structures and a two-dimensional GaN layer on the three-dimensional GaN layer. 
In some cases, the nucleation layer includes at least one of aluminum nitride (AlN), aluminum gallium nitride (AlGaN), and/or a combination of any of the aforementioned, and the integrated circuit further includes a patterned insulator layer on the nucleation layer, the patterned insulator layer including at least one of silicon dioxide (SiO₂), silicon nitride (SiNₓ), tungsten dinitride (WN₂), tungsten and titanium nitride, aluminum oxide (Al₂O₃), and/or a combination of any of the aforementioned. In some cases, the integrated circuit further includes a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer includes aluminum gallium nitride (AlGaN) on the two-dimensional GaN layer and a GaN layer on the AlGaN layer. In some such cases, the second semiconductor layer includes multiple alternating layers of AlGaN and GaN. In some other such cases, the second semiconductor layer is within the two-dimensional GaN layer. In some instances, the three-dimensional GaN layer includes at least one of a plurality of island-like semiconductor structures and/or a plurality of nanowires. In some instances, the substrate has a crystal orientation of [100]. In some cases, the integrated circuit further includes a capping layer including at least one of AlGaN, aluminum indium nitride (AlInN), and/or indium gallium nitride (InGaN). In some example instances, the integrated circuit exhibits at least one of a defect density of about 3×10⁹/cm² or less, a surface crack density of about 200 cracks/mm² or fewer, and/or a root mean square (RMS) surface roughness of about 5 nm or less. In some cases, a system-on-chip including the integrated circuit is provided. In some cases, a mobile computing system including the integrated circuit is provided. Another example embodiment of the present invention provides an integrated circuit including a crystalline silicon substrate, a nucleation layer on the substrate, a first semiconductor layer formed on the nucleation layer, the first semiconductor layer comprising a two-dimensional gallium nitride (GaN) layer on the nucleation layer, and a second semiconductor layer formed on or within the first semiconductor layer, wherein the second semiconductor layer includes an aluminum gallium nitride (AlGaN) layer on the two-dimensional GaN layer and a GaN layer on the AlGaN layer. In some cases, the nucleation layer includes at least one of aluminum nitride (AlN), aluminum gallium nitride (AlGaN), and/or a combination of any of the aforementioned. In some cases, the second semiconductor layer includes multiple alternating layers of AlGaN and GaN. In some instances, the second semiconductor layer is within the two-dimensional GaN layer. In some instances, the substrate has a crystal orientation of [100]. In some cases, the integrated circuit further includes a capping layer including at least one of AlGaN, aluminum indium nitride (AlInN), and/or indium gallium nitride (InGaN). In some example instances, the integrated circuit exhibits at least one of a defect density of about 3×10⁹/cm² or less, a surface crack density of about 200 cracks/mm² or fewer, and/or a root mean square (RMS) surface roughness of about 5 nm or less. In some cases, a system-on-chip including the integrated circuit is provided. In some cases, a mobile computing system including the integrated circuit is provided. 
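Since the example embodiments above are characterized by an "at least one of" test over three measurable ranges, that test is easy to state as code. The helper below is purely illustrative shorthand for the recited ranges; the function name and argument names are editorial, not part of the disclosure.

```python
# "At least one of" test over the characteristic ranges recited above:
# defect density <= ~3e9/cm^2, surface crack density <= ~200 cracks/mm^2,
# RMS surface roughness <= ~5 nm. Helper name and arguments are hypothetical.

def exhibits_recited_property(defect_density_per_cm2: float,
                              crack_density_per_mm2: float,
                              rms_roughness_nm: float) -> bool:
    """True if a sample meets at least one of the three disclosed ranges."""
    return (defect_density_per_cm2 <= 3e9
            or crack_density_per_mm2 <= 200
            or rms_roughness_nm <= 5.0)

# Example: the measurements cited for IC 100 satisfy all three prongs.
print(exhibits_recited_property(2.5e9, 150.0, 1.8))  # True
```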
Another example embodiment of the present invention provides a method of forming an integrated circuit, the method including forming a nucleation layer on a crystalline silicon substrate and forming a first semiconductor layer on the nucleation layer, the first semiconductor layer including either a three-dimensional gallium nitride (GaN) layer on the nucleation layer and having a plurality of three-dimensional semiconductor structures and a two-dimensional GaN layer on the three-dimensional GaN layer or a two-dimensional GaN layer on the nucleation layer, wherein in response to the first semiconductor layer including a two-dimensional GaN layer on the nucleation layer, the method further includes forming a second semiconductor layer on or within the first semiconductor layer, wherein the second semiconductor layer includes an aluminum gallium nitride (AlGaN) layer on the two-dimensional GaN layer and a GaN layer on the AlGaN layer. In some cases, the method further includes forming a patterned insulator layer on the nucleation layer prior to forming the first semiconductor layer, wherein the patterned insulator layer includes at least one of silicon dioxide (SiO₂), silicon nitride (SiNₓ), tungsten dinitride (WN₂), tungsten and titanium nitride, aluminum oxide (Al₂O₃), and/or a combination of any of the aforementioned. In some instances, forming the first semiconductor layer includes an in-situ patterning process. In some other instances, forming the first semiconductor layer includes an ex-situ patterning process. In some cases, at least one semiconductor layer is formed using at least one of a molecular beam epitaxy (MBE) process and/or a metalorganic vapor phase epitaxy (MOVPE) process. The foregoing description of the embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. |
Embodiments of a silicon-on-insulator (SOI) wafer having an etch stop layer (130) overlying the buried oxide layer (120), as well as embodiments of a method of making the same, are disclosed. The etch stop layer may comprise silicon nitride, nitrogen-doped silicon dioxide, or silicon oxynitride, as well as some combination of these materials. Other embodiments are described and claimed. |
CLAIMS What is claimed is: 1. A method comprising forming an etch stop layer in a silicon-on-insulator (SOI) wafer, the etch stop layer overlying an insulating layer of the SOI wafer. 2. The method of claim 1, wherein the etch stop layer comprises a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 3. The method of claim 1, wherein the etch stop layer comprises at least two materials selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 4. The method of claim 1, wherein a concentration of nitrogen varies through the etch stop layer. 5. The method of claim 1, wherein forming the etch stop layer comprises: implanting nitrogen into the SOI wafer; and annealing the SOI wafer to form the etch stop layer. 6. The method of claim 1, wherein the SOI wafer comprises a number of bonded layers. 7. The method of claim 1, wherein the insulating layer of the SOI wafer comprises an oxide layer formed by a process including oxygen implantation. 8. A method comprising forming an insulating layer and an etch stop layer in a semiconductor wafer, the etch stop layer disposed between the insulating layer and an upper semiconductor layer. 9. The method of claim 8, wherein the semiconductor wafer comprises silicon and the insulating layer comprises silicon dioxide. 10. The method of claim 8, wherein the etch stop layer comprises a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 11. The method of claim 8, wherein the etch stop layer comprises at least two materials selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 12. The method of claim 8, wherein a concentration of nitrogen varies through the etch stop layer. 13. The method of claim 8, wherein forming the etch stop layer comprises: implanting nitrogen into the semiconductor wafer; and annealing the semiconductor wafer to form the etch stop layer. 14. The method of claim 13, wherein forming the insulating layer comprises: implanting oxygen into the semiconductor wafer; wherein the insulating layer is formed during the annealing of the semiconductor wafer. 15. A method comprising: implanting nitrogen into a wafer, the wafer including a base layer of silicon, a layer of silicon dioxide overlying the base silicon layer, and an upper layer of silicon overlying the silicon dioxide layer; and annealing the wafer to form a layer between the upper silicon layer and the silicon dioxide layer, the layer including a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 16. The method of claim 15, wherein the layer includes at least two materials selected from the group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 17. The method of claim 15, wherein a maximum concentration of the implanted nitrogen is at an interface between the upper silicon layer and the silicon dioxide layer. 18. The method of claim 15, wherein a maximum concentration of the implanted nitrogen is in the silicon dioxide layer. 19. The method of claim 15, further comprising forming integrated circuitry for a number of die in the upper silicon layer. 20. The method of claim 19, wherein the layer formed between the silicon dioxide layer and the upper silicon layer functions as an etch stop. 21. 
A method comprising: implanting oxygen into a wafer, wherein the wafer comprises silicon; implanting nitrogen into the wafer; and annealing the wafer to form a silicon dioxide layer overlying a base layer of silicon and a layer between the silicon dioxide layer and an upper silicon layer, the layer including a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 22. The method of claim 21, wherein the layer includes at least two materials selected from the group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 23. The method of claim 21, wherein a maximum concentration of implanted nitrogen is at a region that is to form an interface between the upper silicon layer and the silicon dioxide layer. 24. The method of claim 21, wherein a maximum concentration of the implanted nitrogen is in a region that is to form the silicon dioxide layer. 25. The method of claim 21, further comprising forming integrated circuitry for a number of die in the upper silicon layer. 26. The method of claim 25, wherein the layer between the silicon dioxide and upper silicon layers functions as an etch stop. 27. A semiconductor wafer comprising: a base layer comprised of a semiconductor material; a layer of an insulating material overlying the base layer; an etch stop layer overlying the insulating layer; and an upper layer comprised of the semiconductor material overlying the etch stop layer. 28. The wafer of claim 27, wherein the semiconductor material comprises silicon and the insulating material comprises silicon dioxide. 29. The wafer of claim 27, wherein the etch stop layer comprises a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 30. The wafer of claim 27, wherein the etch stop layer comprises at least two materials selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 31. A wafer comprising: a base layer of silicon; a layer of silicon dioxide overlying the base silicon layer; an upper layer of silicon disposed over the silicon dioxide layer; and a layer disposed between the silicon dioxide and upper silicon layers, the layer including a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 32. The wafer of claim 31, wherein the layer disposed between the silicon dioxide and upper silicon layers comprises at least two materials selected from the group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 33. The wafer of claim 31, wherein the layer disposed between the silicon dioxide and upper silicon layers functions as an etch stop. 34. A device comprising: a die, the die including a base layer comprised of silicon, an insulating layer overlying the base layer, the insulating layer comprised of silicon dioxide, an etch stop layer overlying the insulating layer, the etch stop layer including a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride, and an upper layer comprised of silicon overlying the etch stop layer; and at least one circuit element formed in the upper silicon layer. 35. The device of claim 34, wherein the etch stop layer includes at least two materials selected from the group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 36. 
The device of claim 34, wherein the at least one circuit element comprises a transistor. 37. The device of claim 34, wherein the at least one circuit element comprises part of a processing system. 38. A system comprising: a processing device, the processing device comprised of a die including a base layer comprised of silicon, an insulating layer overlying the base layer, the insulating layer comprised of silicon dioxide, an etch stop layer overlying the insulating layer, the etch stop layer including a material selected from a group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride, and an upper layer comprised of silicon overlying the etch stop layer; and a memory device coupled with the processing device. 39. The system of claim 38, wherein the etch stop layer includes at least two materials selected from the group consisting of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 40. The system of claim 38, further comprising a network interface coupled with the processing device. |
METHOD FOR MANUFACTURING A SILICON-ON-INSULATOR (SOI) WAFER WITH AN ETCH STOP LAYER FIELD OF THE INVENTION [0001] The invention relates generally to the manufacture of integrated circuit devices and, more particularly, to the manufacture of a silicon-on-insulator (SOI) wafer having an etch stop layer overlying the buried oxide layer. BACKGROUND OF THE INVENTION [0002] A silicon-on-insulator (SOI) wafer may include a base layer of silicon, an insulating layer comprised of silicon dioxide overlying the base layer, and an upper silicon layer overlying the silicon dioxide layer. The silicon dioxide insulating layer is often referred to as the "buried oxide" layer. Integrated circuits including a collection of transistors and other circuit elements can be fabricated in the upper silicon layer. SOI wafers offer the potential for fabricating large-scale integrated circuits (ICs) that, for example, provide high-speed operation and exhibit low power consumption. [0003] Methods for manufacturing SOI wafers include wafer bonding and separation by implanted oxygen (SIMOX). To form an SOI wafer by wafer bonding, a silicon dioxide layer is formed on one surface of a first silicon wafer, and a second silicon wafer is then bonded to this surface (e.g., the surface over which the oxide layer has been formed). The second wafer, which may be thinned, forms an upper silicon layer that overlies a buried oxide layer. To form an SOI wafer by SIMOX, oxygen ions are implanted into a silicon wafer, and the wafer is annealed to form a buried layer of silicon dioxide within the silicon wafer. An example of a SIMOX process can be found in Matsumura et al., Technological Innovation in Low-Dose SIMOX Wafers Fabricated by an Internal Thermal Oxidation (ITOX) Process, MICROELECTRONIC ENGINEERING, vol. 66, pgs. 400-414 (2003). [0004] One problem with SOI wafers is that the buried oxide layer may provide poor etch resistance (during, for example, the formation of isolation trenches). It has been suggested that silicon nitride be used as the insulating layer in an SOI wafer rather than silicon dioxide, as silicon nitride may in some instances provide better etch resistance than silicon dioxide. An example of a technique for creating an SOI wafer having a silicon nitride insulating layer is described in Meekison et al., A Transmission Electron Microscope Investigation of the Dose Dependence of the Microstructure of Silicon-on-Insulator Structures Formed by Nitrogen Implantation of Silicon, JOURNAL OF APPLIED PHYSICS, vol. 69, no. 6 (1991). Silicon nitride is, however, a poor insulator in comparison to silicon dioxide. The band gap of silicon nitride is approximately 40 percent less than that of silicon dioxide, so the electrical isolation afforded by silicon nitride is significantly less than that provided by silicon dioxide. BRIEF DESCRIPTION OF THE DRAWINGS [0005] FIG. 1A is a plan view of one embodiment of an SOI wafer having an etch stop layer overlying a buried oxide layer. [0006] FIG. 1B is a cross-sectional elevation view of the SOI wafer of FIG. 1A, as taken along line B-B of FIG. 1A. [0007] FIG. 2 is a block diagram illustrating an embodiment of a method of creating an etch stop layer in a SOI wafer. [0008] FIGS. 3A-3C are schematic diagrams illustrating embodiments of the method shown in FIG. 2. [0009] FIG. 4 is a schematic diagram showing nitrogen concentration vs. wafer depth for various embodiments of the method illustrated in FIG. 2. [0010] FIG. 
5 is a block diagram illustrating an embodiment of a method of creating an SOI wafer having an etch stop layer overlying the buried oxide. [0011] FIGS. 6A-6D are schematic diagrams illustrating embodiments of the method shown in FIG. 5. [0012] FIG. 7 is a schematic diagram showing both nitrogen concentration and oxygen concentration vs. wafer depth for various embodiments of the method illustrated in FIG. 5. [0013] FIG. 8 is a schematic diagram illustrating an embodiment of a computer system, which may include a semiconductor die formed according to the disclosed embodiments. DETAILED DESCRIPTION OF THE INVENTION [0014] Embodiments of a method for fabricating a silicon-on-insulator (SOI) wafer having an etch stop layer overlying the insulating layer are disclosed. Also disclosed are embodiments of an SOI wafer having an etch stop layer overlying the insulating layer, wherein the insulating layer may comprise silicon dioxide (SiO2). In one embodiment, the etch stop layer comprises silicon nitride (Si3N4). In another embodiment, the etch stop layer comprises nitrogen-doped silicon dioxide. In a further embodiment, the etch stop layer comprises silicon oxynitride (Si(X)O(Y)N(Z)). In yet a further embodiment, the etch stop layer comprises a combination of two or more of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. In another embodiment, the concentration of nitrogen varies through the thickness of the etch stop layer (and, perhaps, within other layers of the SOI wafer). The disclosed SOI wafer may provide both the electrical isolation characteristics of an oxide insulating layer and the etch stop capabilities of the etch stop layer. [0015] Illustrated in FIGS. 1A and 1B is an embodiment of an SOI wafer 100 having an etch stop layer overlying an insulating layer. Referring to these figures, the SOI wafer 100 comprises a base layer of a semiconductor material 110, a layer of an insulating material 120 overlying the base layer 110, an etch stop layer 130 overlying the insulating layer 120, and an upper layer 140 of the semiconductor material overlying the etch stop layer 130. In one embodiment, the semiconductor material (of base layer 110 and upper layer 140) comprises silicon, and the insulating layer 120 comprises silicon dioxide (SiO2). In one embodiment, the etch stop layer 130 comprises silicon nitride (Si3N4). However, the etch stop layer 130 may not comprise a distinct stoichiometric silicon nitride material, and in other embodiments the concentration of nitrogen varies through the thickness of the etch stop layer (and the nitrogen concentration may also vary within other layers of the SOI wafer 100). Thus, for example, in another embodiment the etch stop layer comprises nitrogen-doped silicon dioxide, and in a further embodiment the etch stop layer comprises silicon oxynitride (Si(X)O(Y)N(Z)). In yet a further embodiment, the etch stop layer comprises a combination of two or more of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. [0016] In one embodiment, the insulating layer 120 has a thickness of between approximately 300-2500 Angstroms, the etch stop layer 130 has a thickness of between approximately 3-200 Angstroms, and the upper semiconductor layer 140 has a thickness of between approximately 30-2000 Angstroms. The overall thickness of the SOI wafer 100 is, in one embodiment, approximately 775 µm for a 300 mm wafer. 
In a further embodiment, the etch stop layer 130 overlies substantially all (or a substantial portion) of the insulating layer 120, and in a further embodiment the upper semiconductor layer 140 overlies substantially all (or a substantial portion) of the etch stop layer 130. [0017] In other embodiments, the base semiconductor layer 110, insulating layer 120, etch stop layer 130, and upper semiconductor layer 140 comprise materials other than those described above. For example, in other embodiments, an etch stop layer may be formed by doping or implantation of a substance other than nitrogen. Thus, it should be understood that the disclosed embodiments are not limited to an etch stop layer including nitrogen and, further, that other etch stop materials are within the scope of the disclosed embodiments. Furthermore, it should be understood that the disclosed etch stop layer may perform other functions in addition to (or in lieu of) that of an etch stop. For example, the disclosed nitrogen-containing layer overlying the buried oxide layer may also function as a diffusion barrier (e.g., as a dopant diffusion barrier to facilitate doping of the upper semiconductor layer 140). [0018] In another embodiment, as shown in FIG. 1A, integrated circuitry for a number of die 102 may be formed on the SOI wafer 100. The integrated circuitry of each die 102 may be formed in the upper semiconductor layer 140, and the underlying layer 130 may function as an etch stop during the formation of this circuitry (e.g., as an etch stop during the formation of isolation trenches, etc.). Although not shown in the figures for ease of illustration, a number of layers of metallization (each layer of metallization separated from adjacent layers by a dielectric layer) may be formed over the wafer 100 to create an interconnect structure for each die 102. Ultimately, each of the die 102 may be singulated from the wafer 100, and each die 102 may be packaged in some manner for integration into a next-level assembly (e.g., a circuit board, a computer system, a wireless communication device, etc.). [0019] The disclosed embodiments encompass various methods of forming an etch stop layer that overlies the buried oxide layer (or other insulating layer) of a SOI wafer. Illustrated in FIG. 2 is one embodiment of a method 200 of forming an etch stop layer in a SOI wafer. Embodiments of the method 200 shown in FIG. 2 are further illustrated in the schematic diagrams of FIGS. 3A through 3C, as well as FIG. 4, and reference should be made to these figures as called out in the text. [0020] Referring first to FIG. 3A, an embodiment of a SOI wafer 300 is shown. This wafer 300 includes a base layer of silicon 310, a layer of silicon dioxide 320 overlying the base layer 310, and an upper layer of silicon 340. The SOI wafer 300 of FIG. 3A may be formed by any suitable process, such as, for example, wafer bonding or SIMOX. [0021] As set forth in block 210 of FIG. 2, nitrogen is implanted into a SOI wafer. This is illustrated in FIG. 3B, where nitrogen has been implanted into a region 390 of the SOI wafer 300 of FIG. 3A. It should be understood that region 390 is representative of a region that can be targeted for implantation of nitrogen and that, in practice, nitrogen may be implanted into additional portions of the wafer 300. 
For example, as will be described below, the nitrogen concentration may vary from a small amount near the surface of the upper silicon layer 340 to a maximum concentration lower in the wafer, and again vary to a small amount even deeper into the wafer. The maximum concentration may, for example, occur at the interface between the upper silicon layer 340 and the silicon dioxide layer 320, or the maximum nitrogen concentration may occur at some point within the silicon dioxide layer 320. [0022] Nitrogen can be implanted under any suitable conditions using any suitable implantation equipment. In one embodiment, nitrogen implantation is performed at an elevated temperature to increase the nitrogen concentration while decreasing the potential for damage in the upper silicon layer 340. For example, according to one embodiment, nitrogen implantation is performed at a temperature in a range up to 450 degrees Celsius. [0023] The implanted nitrogen will be used to form an etch stop layer, as will be described below. According to some embodiments, this etch stop layer may comprise silicon nitride, nitrogen-doped silicon dioxide, or silicon oxynitride (or a combination of these materials). Two factors which may impact the characteristics of this etch stop layer include the maximum nitrogen concentration and the region or depth that is targeted to receive the maximum nitrogen dose. This is further illustrated in FIG. 4, which shows nitrogen concentration as a function of wafer depth. According to one embodiment, the maximum concentration of nitrogen is implanted at the interface between the buried oxide layer and the upper silicon layer. This is illustrated by curve 490a, which has a maximum nitrogen concentration at the interface between a buried oxide layer 420 and an upper silicon layer 440. In another embodiment, the maximum concentration of the nitrogen is implanted within the buried oxide layer. This is illustrated by curve 490b, which has a maximum nitrogen concentration at some location within the silicon dioxide layer 420. In one embodiment, the maximum concentration of nitrogen may be within a range up to 10^20 atoms/cm^3. [0024] Targeting the maximum nitrogen concentration at the interface between the buried oxide layer and the upper silicon layer may provide the greatest thickness of silicon nitride above the buried oxide layer, whereas targeting the maximum nitrogen concentration at a region within the buried oxide layer may reduce the concentration of nitrogen in the upper silicon layer. The maximum nitrogen concentration and the region targeted to receive the maximum concentration will be a function of the desired characteristics of the SOI wafer, and these variables - as well as others, such as the implantation conditions - can be tailored as appropriate on a case-by-case basis. [0025] It should be noted that, in FIG. 4, the curves 490a, 490b representing the nitrogen concentration as a function of depth have been idealized for ease of illustration and understanding. For example, the curves 490a, 490b are shown as being generally smooth and continuous; however, in practice, there may be discontinuities in nitrogen concentration at the boundaries between material layers. [0026] Referring next to block 220 in FIG. 2, an annealing process is performed to form an etch stop layer. This is illustrated in FIG. 3C, where an etch stop layer 330 has been formed in the SOI wafer 300, this etch stop layer disposed above the buried oxide layer 320 and below the upper silicon layer 340. 
According to one embodiment, while at elevated temperature during anneal, silicon nitride precipitates begin to form, and these precipitates will gather nitrogen from the surrounding silicon. Thus, as heating continues, diffusion and/or redistribution of nitrogen may occur and a silicon nitride layer may form at the interface between the upper silicon layer 340 and the buried oxide layer 320. [0027] As previously noted, however, the etch stop layer may not comprise a distinct stoichiometric silicon nitride layer, and the formation of silicon nitride precipitates may not occur. Further, in other embodiments, the nitrogen concentration of the formed etch stop layer may vary continuously through the interface region between the upper silicon layer and the buried oxide layer. For example, the maximum concentration of nitrogen in the etch stop layer may occur at the interface region between the upper silicon layer and the buried oxide layer, with the nitrogen concentration decaying away into the buried oxide layer (as well as decaying into the upper silicon layer). Accordingly, in one embodiment, the etch stop layer 330 may comprise nitrogen-doped silicon dioxide, and in a further embodiment the etch stop layer may comprise silicon oxynitride. In another embodiment, the etch stop layer 330 may comprise a combination of two or more of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. [0028] Annealing to form the etch stop layer (from the implanted nitrogen) may be performed under any suitable conditions which can lead to the formation of silicon nitride, nitrogen-doped silicon dioxide, or silicon oxynitride (or some combination of these materials). In one embodiment, anneal is performed at a temperature of approximately 1200 degrees Celsius for approximately 2 hours. According to another embodiment, the SOI wafer is placed in a process chamber in which nitrogen can be introduced, and annealing is performed in a flowing nitrogen environment.[0029] Illustrated in FIG. 5 is another embodiment of a method 500 of forming a SOI wafer including an etch stop layer. Embodiments of the method 500 shown in FIG. 5 are further illustrated in the schematic diagrams of FIGS. 6A through 6D, as well as FIG. 7, and reference should be made to these figures as called out in the text. [0030] Referring first to FIG. 6A, an embodiment of a wafer 600 is shown. In one embodiment, the wafer 600 includes a substrate 605 comprised of silicon. [0031] As set forth in block 510 of FIG. 5, oxygen is implanted into a silicon wafer. This is illustrated in FIG. 6B, where oxygen has been implanted into a region 680 of the wafer 600 of FIG. 6A. The implanted oxygen will be used to form a buried oxide layer. It should be understood that the region 680 is representative of a region that can be targeted for implantation of oxygen and that, in practice, oxygen may be implanted into additional portions of the wafer 600. By way of example, the oxygen concentration may vary from a small amount near the upper surface of the wafer 600 to a maximum concentration lower in the wafer, and again vary to a small amount even deeper into the wafer. [0032] Oxygen can be implanted under any suitable conditions using any suitable implantation equipment. According to one embodiment, oxygen is implanted at an elevated temperature to increase the oxygen concentration while decreasing the potential for damage in the silicon substrate 605. 
By way of example, in one embodiment, oxygen implantation is performed at a temperature in a range up to 450 degrees Celsius. The maximum oxygen concentration is targeted at that region or depth of the wafer where the buried oxide layer is to be formed. This is illustrated in FIG. 7, which shows the oxygen concentration as a function of wafer depth (nitrogen concentration is also shown in this figure and will be described below). The curve 780 (dashed line) represents the oxygen concentration, and this curve suggests that the maximum oxygen concentration falls within that region of the wafer where a buried oxide layer (see item 720) is to be formed. In one embodiment, the maximum oxygen concentration may be within a range up to 10^22 atoms/cm^3. [0033] Referring to block 520 in FIG. 5, nitrogen is implanted into the wafer. This is illustrated in FIG. 6C, where nitrogen has been implanted into a region 690 of the silicon wafer 600. It should be understood that region 690 is representative of a region that can be targeted for implantation of nitrogen and that, in practice, nitrogen may be implanted into additional portions of the wafer 600. For example, as will be described below, the nitrogen concentration may vary from a small amount near the upper surface of the wafer 600 to a maximum concentration lower in the wafer, and again vary to a small amount even deeper into the wafer. The maximum concentration may, for example, occur at that region that is to become the interface between an upper silicon layer and a buried oxide layer, or the maximum nitrogen concentration may occur at some point within that region that is to become the buried oxide layer. [0034] As before, nitrogen can be implanted under any suitable conditions using any suitable implantation equipment. In one embodiment, nitrogen implantation is performed at an elevated temperature to increase the nitrogen concentration while decreasing the potential for damage to the wafer 600 (e.g., that portion of wafer 600 that is to become an upper silicon layer 640). For example, according to one embodiment, nitrogen implantation is performed at a temperature in a range up to 450 degrees Celsius. [0035] The implanted nitrogen will be used to form an etch stop layer that overlies the buried oxide layer (that is to be formed from the implanted oxygen). In some embodiments, this etch stop layer may comprise silicon nitride, nitrogen-doped silicon dioxide, or silicon oxynitride (or a combination of these materials). As noted above, two factors which may impact the characteristics of the silicon nitride layer include the maximum nitrogen concentration and the region or depth that is targeted to receive the maximum nitrogen dose. This is illustrated in FIG. 7, which shows nitrogen concentration (and oxygen concentration) as a function of wafer depth. According to one embodiment, the maximum concentration of nitrogen is implanted at that region that is to become the interface between a buried oxide layer and an upper silicon layer. This is illustrated by curve 790a, which has a maximum nitrogen concentration at that plane that is to become the interface between a silicon dioxide layer 720 and an upper silicon layer 740. In another embodiment, the maximum concentration of nitrogen is implanted within a region that is to become a buried oxide layer. This is illustrated by curve 790b, which has a maximum nitrogen concentration at some location within the region that is to become the buried oxide layer 720. 
In one embodiment, the maximum concentration of nitrogen may be within a range up to 10^20 atoms/cm^3. [0036] Targeting the maximum nitrogen concentration to be at the interface between the buried oxide layer and the upper silicon layer may provide the greatest thickness of silicon nitride above the buried oxide layer, whereas targeting the maximum nitrogen concentration to be at a region within the buried oxide layer may reduce the concentration of nitrogen in the upper silicon layer. As previously suggested, the maximum nitrogen concentration (and maximum oxygen concentration) and the region targeted to receive the maximum concentration will be a function of the desired characteristics of the SOI wafer, and these variables - as well as others, such as the implantation conditions - can be tailored as appropriate on a case-by-case basis. [0037] It should be noted that, in FIG. 7, the curves 790a, 790b representing the nitrogen concentration as a function of depth (as well as curve 780 representing the oxygen concentration) have been idealized for ease of illustration and understanding. For example, the curves 790a, 790b (and 780) are shown as being generally smooth and continuous; however, in practice, there may be discontinuities in nitrogen concentration at the boundaries between material layers. [0038] As set forth in block 530 in FIG. 5, an annealing process is performed to form a silicon dioxide layer as well as an etch stop layer. This is illustrated in FIG. 6D, where a silicon dioxide layer 620 and an etch stop layer 630 have each been formed to create a SOI wafer 600. The silicon dioxide layer 620 is formed in that region of the silicon wafer that was targeted for oxygen implantation, and the etch stop layer 630 is formed in the interface region between the silicon dioxide layer 620 and an upper silicon layer 640. Thus, the etch stop layer is disposed above the buried oxide layer 620 and below the upper silicon layer 640. According to one embodiment, during anneal at elevated temperature, silicon nitride precipitates begin to form, and these precipitates will gather nitrogen from the surrounding silicon. Thus, as heating continues, diffusion and/or redistribution of nitrogen will occur and a silicon nitride layer may form at the interface between the upper silicon layer 640 and the buried oxide layer 620 that is forming. Similar mechanisms may lead to the formation of the silicon dioxide layer. [0039] As previously noted, however, the etch stop layer may not comprise a distinct stoichiometric silicon nitride layer, and the formation of silicon nitride precipitates may not occur. Further, in other embodiments, the nitrogen concentration of the formed etch stop layer may vary continuously through the interface region between the upper silicon layer and the buried oxide layer that is forming. For example, the maximum concentration of nitrogen in the etch stop layer may occur at the interface region between the upper silicon layer and the buried oxide layer, with the nitrogen concentration decaying away into the buried oxide layer (as well as decaying into the upper silicon layer). Accordingly, in one embodiment, the etch stop layer 630 may comprise nitrogen-doped silicon dioxide, and in a further embodiment the etch stop layer may comprise silicon oxynitride. In another embodiment, the etch stop layer 630 may comprise a combination of two or more of silicon nitride, nitrogen-doped silicon dioxide, and silicon oxynitride. 
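The implantation passages above target a maximum nitrogen (or oxygen) concentration at a chosen wafer depth and note that the concentration-vs-depth curves of FIGS. 4 and 7 are idealized. As a rough illustration only, the short C sketch below models such a profile as a Gaussian centered on the projected range, a common first-order approximation; the Gaussian form, the peak value of 10^20 atoms/cm^3, the 100 nm peak depth, and the 15 nm straggle are all assumptions made for this example and are not taken from the disclosure.

/*
 * Idealized implant-profile sketch. The Gaussian model and every
 * parameter value below are hypothetical, chosen only to illustrate
 * "maximum concentration targeted at a chosen depth".
 */
#include <math.h>
#include <stdio.h>

/* Concentration (atoms/cm^3) at a given depth (nm) for a Gaussian implant. */
static double implant_profile(double depth_nm, double peak_atoms_cm3,
                              double range_nm, double straggle_nm)
{
    double d = depth_nm - range_nm;
    return peak_atoms_cm3 * exp(-(d * d) / (2.0 * straggle_nm * straggle_nm));
}

int main(void)
{
    /* Hypothetical target: peak of 1e20 atoms/cm^3 at a 100 nm-deep
       upper-silicon/buried-oxide interface, with 15 nm straggle. */
    for (double x = 0.0; x <= 200.0; x += 25.0)
        printf("depth %6.1f nm : %.3e atoms/cm^3\n",
               x, implant_profile(x, 1.0e20, 100.0, 15.0));
    return 0;
}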
[0040] Annealing to form the silicon dioxide layer (from the implanted oxygen) and the etch stop layer (from the implanted nitrogen) may be performed under any suitable conditions which can lead to the formation of silicon dioxide and silicon nitride, nitrogen-doped silicon dioxide, or silicon oxynitride (or some combination of these materials). In one embodiment, anneal is performed at a temperature of approximately 1350 degrees Celsius for between approximately 5 and 12 hours. According to another embodiment, the wafer is placed in a process chamber in which nitrogen and/or oxygen can be introduced, and annealing is performed in a flowing nitrogen and/or oxygen environment. [0041] Referring to FIG. 8, illustrated is an embodiment of a computer system 800. Computer system 800 includes a bus 805 to which various components are coupled. Bus 805 is intended to represent a collection of one or more buses - e.g., a system bus, a Peripheral Component Interconnect (PCI) bus, a Small Computer System Interface (SCSI) bus, etc. - that interconnect the components of system 800. Representation of these buses as a single bus 805 is provided for ease of understanding, and it should be understood that the system 800 is not so limited. Those of ordinary skill in the art will appreciate that the computer system 800 may have any suitable bus architecture and may include any number and combination of buses. [0042] Coupled with bus 805 is a processing device (or devices) 810. The processing device 810 may comprise any suitable processing device or system, including a microprocessor, a network processor, an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA), or similar device. It should be understood that, although FIG. 8 shows a single processing device 810, the computer system 800 may include two or more processing devices. [0043] Computer system 800 also includes system memory 820 coupled with bus 805, the system memory 820 comprising, for example, any suitable type and number of memories, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), or double data rate DRAM (DDR DRAM). During operation of computer system 800, an operating system and other applications may be resident in the system memory 820. [0044] The computer system 800 may further include a read-only memory (ROM) 830 coupled with the bus 805. During operation, the ROM 830 may store temporary instructions and variables for processing device 810. The system 800 may also include a storage device (or devices) 840 coupled with the bus 805. The storage device 840 comprises any suitable non-volatile memory, such as, for example, a hard disk drive. The operating system and other programs may be stored in the storage device 840. Further, a device 850 for accessing removable storage media (e.g., a floppy disk drive or a CD ROM drive) may be coupled with bus 805. [0045] The computer system 800 may also include one or more I/O (Input/Output) devices 860 coupled with the bus 805. Common input devices include keyboards, pointing devices such as a mouse, as well as other data entry devices, whereas common output devices include video displays, printing devices, and audio output devices. It will be appreciated that these are but a few examples of the types of I/O devices that may be coupled with the computer system 800. [0046] The computer system 800 may further comprise a network interface 870 coupled with bus 805. 
The network interface 870 comprises any suitable hardware, software, or combination of hardware and software that is capable of coupling the system 800 with a network (e.g., a network interface card). The network interface 870 may establish a link with the network (or networks) over any suitable medium - e.g., wireless, copper wire, fiber optic, or a combination thereof - supporting the exchange of information via any suitable protocol - e.g., TCP/IP (Transmission Control Protocol/Internet Protocol), HTTP (Hypertext Transfer Protocol), as well as others. [0047] It should be understood that the computer system 800 illustrated in FIG. 8 is intended to represent an exemplary embodiment of such a system and, further, that this system may include many additional components, which have been omitted for clarity and ease of understanding. By way of example, the system 800 may include a DMA (direct memory access) controller, a chip set associated with the processing device 810, additional memory (e.g., a cache memory), as well as additional signal lines and buses. Also, it should be understood that the computer system 800 may not include all of the components shown in FIG. 8. [0048] In one embodiment, the computer system 800 includes a component having an integrated circuit die that was formed on an SOI wafer having an etch stop layer, such as a silicon nitride layer, as described above. For example, the processing device 810 of system 800 may include such an integrated circuit die. However, it should be understood that other components of system 800 (e.g., network interface 870, etc.) may include a device having an integrated circuit die formed on an SOI wafer including a silicon nitride etch stop (or other etch stop layer). [0049] The foregoing detailed description and accompanying drawings are only illustrative and not restrictive. They have been provided primarily for a clear and comprehensive understanding of the disclosed embodiments and no unnecessary limitations are to be understood therefrom. Numerous additions, deletions, and modifications to the embodiments described herein, as well as alternative arrangements, may be devised by those skilled in the art without departing from the spirit of the disclosed embodiments and the scope of the appended claims. |
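As a rough worked example of the dimensional envelope given in paragraph [0016] above, the C sketch below encodes the example thickness ranges (insulating layer approximately 300-2500 Angstroms, etch stop layer approximately 3-200 Angstroms, upper semiconductor layer approximately 30-2000 Angstroms) and checks a hypothetical candidate stack against them; the struct and function names, and the candidate values, are illustrative assumptions, not part of the disclosure.

/*
 * Layer-stack range check: the numeric ranges come from paragraph
 * [0016] above; everything else (names, candidate values) is assumed
 * for illustration.
 */
#include <stdio.h>

struct soi_stack {            /* thicknesses in Angstroms */
    double insulator, etch_stop, upper_si;
};

static int in_range(double v, double lo, double hi) { return v >= lo && v <= hi; }

static int stack_within_example_ranges(const struct soi_stack *s)
{
    return in_range(s->insulator, 300.0, 2500.0) &&
           in_range(s->etch_stop,   3.0,  200.0) &&
           in_range(s->upper_si,   30.0, 2000.0);
}

int main(void)
{
    struct soi_stack candidate = { 1000.0, 50.0, 700.0 };  /* hypothetical */
    printf("candidate stack %s the example ranges of [0016]\n",
           stack_within_example_ranges(&candidate) ? "fits" : "falls outside");
    return 0;
}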
An apparatus is described. The apparatus includes a memory controller to interface with a multi-level system memory. The multi-level system memory has a near memory level and a far memory level. The near memory level has a sectored cache to cache super-lines having multiple cache lines as a single cacheable item. The memory controller has tracker circuitry to track status information of an old request super-line and a new request super-line that compete for a same slot within the sectored cache, wherein the status information includes an identification of which one of the old and new super-lines is currently cached in the sectored cache. |
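As a minimal, non-authoritative sketch of the status information the abstract describes, the C fragment below models a per-slot tracker record for an old/new super-line pair - an address plus "currently cached" and "modified" flags for each line - together with the check that a request can only be serviced from near memory when the tracker shows the requested super-line as cached. All names and field choices are assumptions made for illustration, not taken from the disclosure.

/*
 * Per-slot tracker record for an old/new super-line pair competing for
 * one sectored-cache slot. Field and function names are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>

struct superline_status {
    uint64_t addr;      /* super-line address                       */
    bool     cached;    /* currently resident in the sectored cache */
    bool     modified;  /* dirty relative to far memory             */
};

struct slot_tracker {
    struct superline_status old_line;  /* super-line being evicted    */
    struct superline_status new_line;  /* super-line filling the slot */
};

/* A request can be serviced from near memory only if the tracker shows
   the requested super-line as the one currently cached in the slot. */
static bool serviceable_from_near_memory(const struct slot_tracker *t,
                                         uint64_t addr)
{
    return (t->old_line.cached && t->old_line.addr == addr) ||
           (t->new_line.cached && t->new_line.addr == addr);
}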
Claims 1. An apparatus, comprising: a memory controller to interface with a multi-level system memory comprising a near memory level and a far memory level, said near memory level comprising a sectored cache to cache super-lines comprising multiple cache lines as a single cacheable item, said memory controller comprising tracker circuitry to track status information of an old request super-line and a new request super-line that compete for a same slot within said sectored cache, wherein said status includes an identification of which one of the old and new super-lines is currently cached in the sectored cache. 2. The apparatus of claim 1 wherein the status further identifies whether a cached super-line is in a modified state. 3. The apparatus of claim 1 wherein the memory controller further comprises fill request handler circuitry, the fill request handler circuitry to receive a request from the tracker circuitry after the tracker circuitry recognizes that the new request super-line competes with the old request super-line for the slot in the sectored cache. 4. The apparatus of claim 3 wherein the fill request handler circuitry causes the old request super-line to be read from the sectored cache and placed into an outbound far memory FIFO. 5. The apparatus of claim 4 wherein the fill request handler places super-lines being evicted having a modified state ahead in the FIFO of super-lines being evicted that do not have a modified state. 6. The apparatus of claim 1 wherein, upon receipt of a read request for the old request super-line, the memory controller will check the tracker circuitry for the status of the old request cache line, and, if the old request cache line is currently cached in the sectored cache, the memory controller will service the read request from the sectored cache. 7. The apparatus of claim 1 wherein, upon receipt of a write request for the old request super-line, the memory controller will check the tracker circuitry for the status of the old request super-line, and, if the old request super-line is currently cached in the sectored cache, the memory controller will service the write request by writing to the old request super-line before it is evicted from the memory controller. 8. The apparatus of claim 1 wherein, upon receipt of a read or write request for the old request super-line, the memory controller will check the tracker circuitry for the status of the old request super-line, and, if the old request super-line is not currently cached in the sectored cache, the memory controller will service the read or write request by entering the read or write request in an outbound far memory FIFO queue. 9. A method, comprising: managing a multi-level system memory comprising a near memory and a far memory where the near memory comprises a sectored cache that caches super-lines, the managing including determining cache hits and cache misses in the near memory; keeping track of status information for an older request super-line and a newer request super-line that compete for a same slot within said sectored cache, said keeping track of status information including identifying which one of said older request super-line and said newer request super-line are currently stored in the slot. 10. The method of claim 9 wherein said status information also identifies whether said older request super-line is modified. 11. 
The method of claim 10 wherein said method includes moving said older request super-line, while said older request super-line is in the process of being evicted to said far memory, ahead of other super-lines being evicted that are not in a modified state. 12. The method of claim 9 wherein said method comprises: receiving a read request for said older request super-line before said older request super-line has been written to said far memory; referring to said status information to understand that said older request super-line is currently within said sectored cache; and, servicing said read request from said sectored cache. 13. The method of claim 9 wherein said method comprises: receiving a write request for said older request super-line before said older request super-line has been written to said far memory; referring to said status information to understand that said older request super-line is currently within said sectored cache; and, servicing said write request by writing to said older request super-line before said older request super-line is written to said far memory. 14. The method of claim 9 wherein said method comprises: receiving a read or write request for said older request super-line after said older request super-line has been written to said far memory; referring to said status information to understand that said older request super-line is no longer within said sectored cache; and, servicing said read or write request by forwarding said read or write request to said far memory. 15. An apparatus, comprising: a multi-level system memory comprising a near memory level and a far memory level, said near memory level comprising a sectored cache to cache super-lines comprising multiple cache lines as a single cacheable item; a memory controller between the one or more processing cores and the networking interface, the memory controller to interface with the multi-level system memory, said memory controller comprising tracker circuitry to track status information of old request and new request super-lines that compete for a same slot within said sectored cache, wherein said status includes an identification of which one of the old and new super-lines is currently cached in the sectored cache. 16. The apparatus of claim 15 wherein the status further identifies whether a cached super-line is in a modified state. 17. The apparatus of claim 15 wherein the memory controller further comprises fill request handler circuitry, the fill request handler circuitry to receive a request from the tracker circuitry after the tracker circuitry recognizes that the new request super-line competes with the old request super-line for the slot in the sectored cache. 18. The apparatus of claim 15 wherein, upon receipt of a read request for the old request super-line, the memory controller will check the tracker circuitry for the status of the old request cache line, and, if the old request cache line is currently cached in the sectored cache, the memory controller will service the read request from the sectored cache. 19. The apparatus of claim 15 wherein, upon receipt of a write request for the old request super-line, the memory controller will check the tracker circuitry for the status of the old request super-line, and, if the old request super-line is currently cached in the sectored cache, the memory controller will service the write request by writing to the old request super-line before it is evicted from the memory controller. 20. 
The apparatus of claim 15 wherein, upon receipt of a read or write request for the old request super-line, the memory controller will check the tracker circuitry for the status of the old request super-line, and, if the old request super-line is not currently cached in the sectored cache, the memory controller will service the read or write request by entering the read or write request in an outbound far memory FIFO queue. 21. The apparatus of claim 15, comprising: at least one processor communicatively coupled to the memory controller; and a network interface communicatively coupled to the at least one processor. 22. The apparatus of claim 21 comprising: a display communicatively coupled to the at least one processor. |
MEMORY CONTROLLER FOR MULTI-LEVEL SYSTEM MEMORY HAVING SECTORED CACHE Field of Invention The field of invention pertains generally to computing systems, and, more specifically, to a memory controller for a multi-level system memory having a sectored cache. Background Computing systems typically include system memory (or main memory) that contains data and program code of the software that the system's processor(s) are currently executing. A pertinent bottleneck in many computer systems is the system memory. Here, as is understood in the art, a computing system operates by executing program code stored in system memory. The program code when executed reads and writes data from/to system memory. As such, system memory is heavily utilized with many program code and data reads as well as many data writes over the course of the computing system's operation. Finding ways to speed up system memory is therefore a motivation of computing system engineers. Figures A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which: Fig. 1 shows a computing system having a multi-level system memory; Figs. 2a through 2f show operation of a memory controller that tracks eviction and fill status of competing super-line pairs; Figs. 3a-c show various scenarios of operation of the memory controller of Figs. 2a through 2f; Fig. 4 shows a methodology that can be performed by the memory controller of Figs. 2a through 2f; Fig. 5 shows an embodiment of a computing system. Detailed Description One of the ways to speed up system memory without significantly increasing power consumption is to have a multi-level system memory. Fig. 1 shows an embodiment of a computing system 100 having a multi-tiered or multi-level system memory 112. According to various embodiments, a faster near memory 113 may be utilized as a memory side cache. In the case where near memory 113 is used as a memory side cache, near memory 113 is used to store data items that are expected to be more frequently called upon by the computing system. The near memory cache 113 has lower access times than the lower tiered far memory 114 region. By storing the more frequently called upon items in near memory 113, the system memory will be observed as faster because the system will often read items that are being stored in faster near memory 113. According to some embodiments, for example, the near memory 113 exhibits reduced access times by having a faster clock speed than the far memory 114. Here, the near memory 113 may be a faster, volatile system memory technology (e.g., high performance dynamic random access memory (DRAM)) or faster non volatile memory. 
By contrast, far memory 114 may be either a volatile memory technology implemented with a slower clock speed (e.g., a DRAM component that receives a slower clock) or, e.g., a non volatile memory technology that is inherently slower than volatile/DRAM memory or whatever technology is used for near memory. For example, far memory 114 may be comprised of an emerging non volatile byte addressable random access memory technology such as, to name a few possibilities, a phase change based memory, a ferro-electric based memory (e.g., FRAM), a magnetic based memory (e.g., MRAM), a spin transfer torque based memory (e.g., STT-RAM), a resistor based memory (e.g., ReRAM), a Memristor based memory, universal memory, Ge2Sb2Te5 memory, programmable metallization cell memory, amorphous cell memory, Ovshinsky memory, etc. Such emerging non volatile random access memory technologies typically have some combination of the following: 1) higher storage densities than DRAM (e.g., by being constructed in three-dimensional (3D) circuit structures (e.g., a crosspoint 3D circuit structure)); 2) lower power consumption densities than DRAM (e.g., because they do not need refreshing); and/or 3) access latency that is slower than DRAM yet still faster than traditional non-volatile memory technologies such as FLASH. The latter characteristic in particular permits an emerging non volatile memory technology to be used in a main system memory role rather than a traditional storage role (which is the traditional architectural location of non volatile storage). Regardless of whether far memory 114 is composed of a volatile or non volatile memory technology, in various embodiments far memory 114 acts as a true system memory in that it supports finer grained data accesses (e.g., cache lines) rather than larger blocked based accesses associated with traditional, non volatile storage (e.g., solid state drive (SSD), hard disk drive (HDD)), and/or, otherwise acts as an (e.g., byte) addressable memory that the program code being executed by processor(s) of the CPU operate out of. Because near memory 113 acts as a cache, near memory 113 may not have its own individual addressing space. Rather, only far memory 114 includes the individually addressable memory space of the computing system's main memory. In various embodiments near memory 113 truly acts as a cache for far memory 114 rather than acting as a last level CPU cache (generally, a CPU level cache is able to keep cache lines across the entirety of system memory addressing space that is made available to the processing cores 117 that are integrated on a same semiconductor chip as the memory controller 116). For example, in various embodiments, system memory is implemented with dual inline memory module (DIMM) cards where a single DIMM card has both DRAM and (e.g., emerging) non volatile memory chips disposed in it. The DRAM chips effectively act as an on board cache for the non volatile memory chips on the DIMM card. Ideally, the more frequently accessed cache lines of any particular DIMM card will be found on that DIMM card's DRAM chips rather than its non volatile memory chips. 
Given that multiple DIMM cards are typically plugged into a working computing system and each DIMM card is only given a section of the system memory addresses made available to the processing cores 117 of the semiconductor chip that the DIMM cards are coupled to, the DRAM chips are acting as a cache for the non volatile memory that they share a DIMM card with rather than a last level CPU cache. In other configurations DIMM cards having only DRAM chips may be plugged into a same system memory channel (e.g., a DDR channel) with DIMM cards having only non volatile system memory chips. Ideally, the more frequently used cache lines of the channel will be found in the DRAM DIMM cards rather than the non volatile memory DIMM cards. Thus, again, because there are typically multiple memory channels coupled to a same semiconductor chip having multiple processing cores, the DRAM chips are acting as a cache for the non volatile memory chips that they share a same channel with rather than as a last level CPU cache. Although the above example referred to packaging solutions that included DIMM cards, it is pertinent to note that this is just one example and other embodiments may use other packaging solutions (e.g., stacked chip technology, one or more DRAM and phase change memories integrated on a same semiconductor die or at least within a same package as the processing core(s), etc.). In yet other embodiments, near memory 113 may act as a CPU level cache. The architecture of the near memory cache 113 may also vary from embodiment to embodiment. According to one approach, the near memory cache 113 is implemented as a direct mapped cache in which multiple system memory addresses map to one cache line slot in near memory 113. Other embodiments may implement other types of cache structures (e.g., set associative, etc.). Regardless of the specific cache architecture, different cache lines may compete for the same cache resources in near memory 113. For example, in the case of a direct mapped cache, when requests for two or more cache lines whose respective addresses map to the same near memory 113 cache line slot are concurrently received by the memory controller 116, the memory controller 116 will keep one of the cache lines in near memory cache 113 and cause the other cache line to be kept in far memory 114. Whenever a request for a cache line is received by the memory controller 116, the memory controller first checks for the cache line in near memory cache 113. If the result is a cache hit, the memory controller 116 services the request from the version of the cache line in near memory 113 (in the case of a read request, the version of the cache line in near memory cache is forwarded to the requestor; in the case of a write, the version of the cache line in near memory cache is written over and kept in the near memory cache). In the case of a cache miss, for both read and write requests, the cache line that is targeted by the request is called up from far memory 114 and stored in near memory cache 113. 
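As a rough, non-authoritative sketch of the hit/miss handling just described, the C fragment below models a direct-mapped near memory cache in which many addresses map to one slot: a hit is serviced from near memory, while a miss fetches the targeted line from far memory into the slot, writing back a dirty occupant first (the eviction that the next paragraph elaborates on). The sizes, names, and the small array standing in for the far memory interface are all illustrative assumptions, not taken from the disclosure.

/*
 * Direct-mapped near-memory cache sketch. Slot count, line size, and
 * the tiny array standing in for far memory are hypothetical.
 */
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64
#define NM_SLOTS   4096                 /* hypothetical slot count      */
#define FAR_BYTES  (1u << 20)           /* tiny stand-in for far memory */

struct nm_slot {
    uint64_t tag;                       /* address bits above the index */
    int      valid, dirty;
    uint8_t  data[LINE_BYTES];
};

static struct nm_slot near_mem[NM_SLOTS];
static uint8_t        far_mem[FAR_BYTES];

static void far_read(uint64_t a, uint8_t *b)        { memcpy(b, &far_mem[a % FAR_BYTES], LINE_BYTES); }
static void far_write(uint64_t a, const uint8_t *b) { memcpy(&far_mem[a % FAR_BYTES], b, LINE_BYTES); }

static void nm_read(uint64_t addr, uint8_t *out)
{
    uint64_t line = addr / LINE_BYTES;
    struct nm_slot *s = &near_mem[line % NM_SLOTS];
    uint64_t tag = line / NM_SLOTS;

    if (!(s->valid && s->tag == tag)) {               /* cache miss     */
        if (s->valid && s->dirty)                     /* evict old line */
            far_write((s->tag * NM_SLOTS + line % NM_SLOTS) * LINE_BYTES,
                      s->data);
        far_read(line * LINE_BYTES, s->data);         /* fill new line  */
        s->tag = tag; s->valid = 1; s->dirty = 0;
    }
    memcpy(out, s->data, LINE_BYTES);                 /* serve from near memory */
}

int main(void)
{
    uint8_t buf[LINE_BYTES];
    nm_read(0x1000, buf);   /* miss: filled from far memory   */
    nm_read(0x1000, buf);   /* hit: served from near memory   */
    return 0;
}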
In order to make room for the new cache line in near memory cache 113, another cache line that competes with the targeted cache line is evicted from near memory cache 113 and sent to far memory 114. Data consistency problems may arise if care is not taken handling cache lines while in the process of evicting an old cache line from near memory 113 to far memory and filling the space created in near memory cache 113 by the eviction of the old cache line with the new cache line whose read or write request just suffered a cache miss. For example, if the evicted cache line is dirty (meaning it contains the most recent, up to date version of the cache line's data) and a write request is received for the evicted cache line before it is actually written to far memory 114, the memory controller 116 needs to take appropriate action to make sure the dirty cache line is updated with the new data. Figs. 2a through 2f describe operation of an improved memory controller 201 that is able to keep track of both the actual eviction process for old cache lines being evicted, and, the actual filling process of new cache lines being inserted into near memory cache 202. Before beginning the discussion of Figs. 2a through 2f, however, it is pertinent to point out that the solution they describe may be particularly useful in the case of a sectored cache that caches super-lines composed of multiple cache lines. As is known in the art, a cache line typically includes multiple individually addressable (e.g., 32 bit or 64 bit) data or instruction items. For example, a typical cache line may be 64 bytes and contain eight 64 bit data units. The size of a cache line (the number of data/instruction items it contains) is typically coextensive with the width of the internal caches of the corresponding CPU core(s). By contrast, a super-line may consist, for example, of sixteen cache lines (16 × 64 bytes = 1024 bytes of information). In the case of a sectored cache that caches super-lines, a single read cache hit results in multiple (e.g., sixteen) cache lines being forwarded to the CPU. The data consistency problems mentioned just above are especially likely to occur in the case of a sectored cache that moves entire super-lines between near memory and far memory (as opposed to more traditional, nominally sized cache lines). For example, with the much larger size of a super-line, there is more data to move from near memory to far memory in the case of an eviction from near memory cache to far memory. This may result in more propagation delay, e.g., physically reading all of the data from near memory and then forwarding this data within the memory controller to a far memory interface. Additionally, again with the expansive size of the super-line, there is a greater chance that an incoming write request will target a cache line within the super-line. Thus an incoming write request targeting a cache line that is in the process of moving between near memory and far memory becomes a more likely event in the case of a super-line and a sectored cache. As such, the discussion below will generally refer to a super-line although the reader should understand the approach of Figs. 2a through 2f can also be applied to smaller data units (e.g., nominally sized cache lines). As observed in Fig. 
As observed in Fig. 2a, the memory controller 201 includes a plurality of eviction/filling state tracker circuits 206_1 through 206_5 for super-line pairs having an eviction/filling relationship (i.e., the filling super-line consumes the space in near memory 202 made available by the super-line being evicted). For simplicity, only five such tracker circuits are depicted. In various embodiments, the number of such tracker circuits may be less than the size of the near memory cache but large enough to concurrently track large numbers of super-lines. For example, there may exist a number of tracker circuits equal to 20% of the size of the near memory cache, and, the tracker circuits are reused five times over to fully process the entire cache.

Each tracker circuit 206 includes register space to hold state information for both the evicted and the filling super-lines. The state information may be kept, e.g., in memory such as a dedicated (non cache) part of near memory 202. In an embodiment, the state information identifies a particular super-line by its address and two items of meta data that indicate whether the particular super-line is still formally residing in near memory cache 202, and, whether the particular super-line is in a modified state (M). Note that a single address may be used to identify a particular super-line (as suggested in Fig. 2a), or, depending on implementation, a single entry in a tracker circuit may individually identify the address of each cache line in the corresponding super-line. Likewise, the meta data may be present only for the entire super-line, or, may be present for each cache line in the super-line. For simplicity, the following discussion assumes one address and meta-data instance per super-line.

As is known in the art, a super-line in the M state is essentially a "dirty" super-line in that it holds the most recent, up to date data for the super-line. As will be described more clearly below, a pertinent feature of the memory controller 201 of Fig. 2a is that a new super-line is not permitted to fill the near memory 202 cache space created by an evicted super-line until the evicted super-line is actually evicted from the memory controller 201 and written into far memory 203. Here, note that movement of data may therefore include moving a copy of the data while the data remains in its original location.

Fig. 2a shows the state of a memory controller 201 for five near memory sectored cache slots 207_1 through 207_5 and the corresponding tracker circuits 206_1 through 206_5 for each slot (i.e., tracker circuit 206_1 tracks the eviction/fill pairs for slot 207_1, tracker circuit 206_2 tracks the eviction/fill pairs for slot 207_2, etc.). For simplicity, Fig. 2a shows a memory controller state where no competing super-line requests have been received for sectored cache slots 207_1 through 207_5. As such, each tracker circuit 206_1 through 206_5 only shows an "old" super-line (no "new" competing super-lines have been received yet). Because each old super-line is presently residing within near memory cache 202 in its respective slot, each of the old super-lines has a corresponding "C" bit set. Here, for any entry in a tracker circuit, the C bit indicates whether or not the corresponding super-line is physically in near memory cache 202. Also observed in Fig. 2a is that some of the old super-lines are dirty (M bit is set) whereas other super-lines are not dirty (M bit is not set).
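The per-entry state just described can be summarized as a small C structure; the field and type names are assumptions for illustration only:

    #include <stdint.h>
    #include <stdbool.h>

    /* One tracked super-line: its address plus the two meta data bits
     * described above (C = still resident in near memory cache, M =
     * modified/dirty, i.e., this copy holds the most recent data). */
    struct superline_state {
        uint64_t addr;
        bool     c;    /* TRUE while a version still sits in near memory */
        bool     m;    /* TRUE if this copy is the up-to-date data       */
    };

    /* One tracker circuit: the old (being evicted) and new (filling)
     * super-lines that compete for a single sectored cache slot. */
    struct tracker_entry {
        struct superline_state old_sl;  /* C stays TRUE until eviction completes */
        struct superline_state new_sl;  /* C stays FALSE until the fill is done  */
    };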
Fig. 2b shows a moment in time after the situation of Fig. 2a in which four of the five super-line slots in the sectored near memory cache have been targeted by a new memory access request and the result was a cache miss, resulting in four pairs of old and new super-lines. Here, slot 207_2, which corresponds to tracker circuit 206_2, has either not had any new memory access requests, or, has had a new memory access request that targeted the old super-line having address ADDR_2 (i.e., a cache hit resulted). Regardless, there is no new super-line to fill slot 207_2 and evict the super-line having address ADDR_2.

In an embodiment, a tag array (not shown) resides within a memory region (not depicted) of the memory controller 201 to indicate whether or not a cache hit has resulted for any particular super-line. A tag array essentially includes an entry for each slot in the sectored near memory cache and keeps the "tag" (e.g., upper) address bits for the particular super-line that is presently occupying the slot in the sectored near memory cache. For each incoming request, hashing or lookup circuitry (also not shown) associated with the tag array respectively performs a hash or lookup operation to map the address of the request to the particular entry in the tag array that the address maps to. If the tag held in the entry of the tag array matches the corresponding tag of the request, the result is a cache hit. Otherwise the result is a cache miss. Note that, in an embodiment, the "old" entries in the state tracker circuits 206 may mimic the address tag information in the tag array. If the number of state tracker circuits is less than the number of slots in the cache, information from the tag array is used to "fill" the "old" entries of the state tracker circuits 206.

Continuing then with the present example, in contrast to slot 207_2, each of the other slots 207_1 and 207_3 through 207_5 has been newly targeted by a memory access request that resulted in a cache miss. As such, each of slots 207_1 and 207_3 through 207_5 has a corresponding old super-line that needs to be evicted and a new super-line that will fill the space created in near memory cache 202 by the evicted super-line. No actual eviction/filling activity has taken place as of Fig. 2b. As such, the old super-lines of Fig. 2b maintain the same state that they had in Fig. 2a (C bit is set to TRUE). Similarly, the new super-lines have not yet been actually written into near memory 202. As such, each of the new super-lines has its C bit set to FALSE. Each new super-line also is not dirty and therefore does not have its M bit set.

As observed in Fig. 2c, logic circuitry associated with each of the request tracker circuits 206_1 and 206_3 through 206_5 having an old and new super-line pair generates a fill request 208_1 through 208_4 to a fill request handler circuit 204. Here, the sending of a fill request by a tracker circuit is triggered by the logic circuitry of a tracker circuit recognizing it has an old and new super-line pair.
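The hit/miss decision and the resulting fill request of Figs. 2b and 2c can be sketched as follows. The structure restates a minimal form of the tracker entry, and enqueue_fill_request() is an assumed stand-in for the fill request handler 204 interface:

    #include <stdint.h>
    #include <stdbool.h>

    #define SUPER_BYTES 1024u   /* sixteen 64-byte cache lines */

    struct entry   { uint64_t addr; bool c, m; };
    struct tracker { struct entry old_sl, new_sl; bool has_pair; };

    /* Stub standing in for handing the pair to fill request handler 204. */
    static void enqueue_fill_request(struct tracker *t) { (void)t; }

    static bool same_superline(uint64_t a, uint64_t b)
    {
        return (a / SUPER_BYTES) == (b / SUPER_BYTES);
    }

    /* Miss path: pair the incoming ("new") super-line with the resident
     * ("old") one; the tracker then raises a fill request, as in Fig. 2c. */
    void on_access(struct tracker *t, uint64_t req_addr)
    {
        if (same_superline(t->old_sl.addr, req_addr))
            return;   /* cache hit: serviced from near memory cache */

        t->new_sl   = (struct entry){ .addr = req_addr, .c = false, .m = false };
        t->has_pair = true;           /* old C=TRUE, new C=FALSE, per Fig. 2b */
        enqueue_fill_request(t);
    }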
As observed in Fig. 2d, the fill request handler circuit 204 responds to the fill requests 208_1 through 208_4 by prioritizing the eviction of super-lines in the M state over super-lines that are not in the M state. That is, as observed from Figs. 2a through 2c, the super-lines having addresses ADDR_3 and ADDR_5 were in the M state while the super-lines having addresses ADDR_1 and ADDR_4 were not in the M state. As observed in Fig. 2d, the super-lines having addresses ADDR_3 and ADDR_5 have been placed ahead of the super-lines having addresses ADDR_1 and ADDR_4 in the far memory write queue 205. As a consequence, the super-lines having addresses ADDR_3 and ADDR_5 will be evicted from the memory controller 201 into far memory 203 before the super-lines having addresses ADDR_1 and ADDR_4. In various embodiments, as part of the placement of super-lines into the queue 205, the super-lines within the M state may themselves be further prioritized according to any additional information (e.g., prioritizing super-lines that are less likely to soon be targeted by a request before super-lines that are more likely to soon be targeted by a request). Super-lines not within the M state may also be further prioritized to determine their order of entry in the queue according to a same or other basis.

Note that because all four super-lines being evicted have not actually been evicted yet (they are still on the host side in the memory controller 201 and have not yet been written to far memory 203) their corresponding tracker entries still show each super-line in the C state. That is, each of these super-lines still has a version of itself resident in its corresponding slot in near memory cache 202.

Note that because super-lines ADDR_3 and ADDR_5 are dirty they should be evicted into far memory 203. Whether or not the super-lines ADDR_1 and ADDR_4 should actually be evicted depends on implementation. Specifically, super-lines that are not dirty (such as the super-lines having addresses ADDR_1 and ADDR_4) need only actually be evicted into far memory 203 if there does not exist a copy of them already in far memory 203. Here, systems may differ as between the exact content of the near memory cache 202 and far memory 203. Some systems may keep a copy in far memory 203 of any super-line in near memory cache 202. For these systems, it is not necessary to write back to far memory 203 an evicted super-line that is not in the M state. Other systems, however, may not keep a copy in far memory 203 of a super-line that is cached in near memory 202. These systems, by contrast, should write back "clean" (non M state) evicted super-lines to far memory 203 as observed in Fig. 2d.

Fig. 2e shows the state of the memory controller 201 after the super-line having address ADDR_3 has been physically evicted from the host side and written to far memory 203. Here, after the eviction of the super-line, the fill request handler 204 permits the new super-line having address ADDR_7 to physically replace the old super-line having ADDR_3 in slot 207_3 of near memory cache 202. After the replacement, the corresponding tracker circuit 206_3 flips the state of the C bit for the two entries. That is, with the new super-line having been physically written into near memory 202, the new super-line has its C bit set to TRUE to indicate the super-line having address ADDR_7 is now in near memory cache. Similarly, the tag array may now be updated to include the tag of ADDR_7 for slot 207_3. Finally, the entry for the old super-line has its C bit set to FALSE to indicate it is no longer in near memory cache 202.

Fig. 2f shows the state of the memory controller 201 after all old super-lines requiring eviction have been evicted and replaced in near memory cache with their corresponding new replacements.
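A minimal sketch of the M-state-first queue placement of Fig. 2d follows. The fixed-size array queue and all names are assumptions, and the further prioritization among dirty (or clean) super-lines mentioned above is omitted:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define QMAX 16

    struct evict_req { uint64_t addr; bool m; };

    /* Far memory write queue 205: dirty (M state) super-lines are kept
     * ahead of clean ones so they reach far memory first. */
    struct write_queue { struct evict_req e[QMAX]; int n; };

    bool wq_push(struct write_queue *q, struct evict_req ev)
    {
        if (q->n >= QMAX)
            return false;             /* queue full */
        int pos = q->n;               /* clean entries append at the tail */
        if (ev.m) {
            pos = 0;                  /* dirty: insert after last dirty entry */
            while (pos < q->n && q->e[pos].m)
                pos++;
            memmove(&q->e[pos + 1], &q->e[pos],
                    (size_t)(q->n - pos) * sizeof q->e[0]);
        }
        q->e[pos] = ev;
        q->n++;
        return true;
    }

Because dirty entries are only ever inserted at the end of the dirty prefix, the queue preserves FIFO order within each class while keeping every M-state super-line ahead of every clean one.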
Here, the new replacement super-lines are given old status in the tracker circuits, which may set up another sequence similar to Figs. 2a through 2e for a next round of memory access requests.

Figs. 3a and 3b show a first set of scenarios that may transpire while a super-line is being evicted. Fig. 3a shows a memory controller state where an old super-line is being evicted but has not actually been evicted yet (it is sitting in the far memory write queue 305 waiting to be written to far memory 303). If in this state a read request is received 1 for the old super-line, the read request can be serviced by reading 2 the old super-line from near memory cache 302. Here, the memory controller can refer to the tracker circuit 306_1, which indicates that the old super-line (having address ADDR_1) is still physically resident in near memory cache because its C bit is still set to TRUE. As such the read request can be directed to near memory 302. The scenario of Fig. 3a applies whether or not the evicted super-line is in the M state.

Fig. 3b shows the same situation as with Fig. 3a except that the newly received request 1 for the super-line being evicted is a write request. As observed in Fig. 3b, the write request may be serviced by writing 2 to the super-line directly in the outbound write queue 305. Alternatively, the write request itself may simply be entered in the outbound queue 305 so that the new data reflected in the write request eventually overwrites the super-line in far memory 303 (the new write request follows the eviction write request in FIFO order). The scenario of Fig. 3b explicitly shows the super-line being evicted as being in the M state. If the super-line were not in the M state and a copy of itself existed in far memory 303, the new write request could simply be added to the write queue 305 even if no evicted version of the super-line were written to far memory.

Fig. 3c shows the situation of Fig. 3b after the evicted super-line is physically written 1 into far memory 303. After the super-line is physically written 1 into far memory 303, the fill request handler 304 writes 2 the new super-line into near memory and flips the C state of the respective super-line entries (C state of old super-line flips from TRUE to FALSE and C state of new super-line flips from FALSE to TRUE). If a subsequent request is received for the old super-line, whether read or write, it is simply entered into the far memory outbound queue 305 because the C state for the old super-line indicates that the old super-line is no longer in near memory.

Fig. 4 shows a methodology performed by a memory controller described herein. As observed in Fig. 4, the method includes managing a multi-level system memory comprising a near memory and a far memory, where the near memory comprises a sectored cache that caches super-lines and where the managing includes determining cache hits and cache misses in the near memory 401. The method also includes keeping track 402 of status information for an older request super-line and a newer request super-line that compete for a same slot within the sectored cache, the keeping track of status information including identifying which one of the older request super-line and the newer request super-line is currently stored in the slot.
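The steering rule of Figs. 3a through 3c reduces to a check of the old super-line's C bit; a minimal sketch under assumed names:

    #include <stdbool.h>

    enum target { NEAR_MEMORY, FAR_MEMORY_QUEUE };

    /* While the old super-line's C bit is TRUE, a read is serviced from
     * near memory (Fig. 3a); a write during eviction goes to the outbound
     * queue, where FIFO order lands it after the eviction write (Fig. 3b).
     * Once the C bit flips to FALSE, every request for the old super-line
     * is entered into the far memory outbound queue (Fig. 3c). */
    enum target route_request(bool old_c_bit, bool is_write)
    {
        if (old_c_bit && !is_write)
            return NEAR_MEMORY;
        return FAR_MEMORY_QUEUE;
    }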
Fig. 5 shows a depiction of an exemplary computing system 500 such as a personal computing system (e.g., desktop or laptop) or a mobile or handheld computing system such as a tablet device or smartphone, or, a larger computing system such as a server computing system. As observed in Fig. 5, the basic computing system may include a central processing unit 501 (which may include, e.g., a plurality of general purpose processing cores and a main memory controller disposed on an applications processor or multi-core processor), system memory 502, a display 503 (e.g., touchscreen, flat-panel), a local wired point-to-point link (e.g., USB) interface 504, various network I/O functions 505 (such as an Ethernet interface and/or cellular modem subsystem), a wireless local area network (e.g., WiFi) interface 506, a wireless point-to-point link (e.g., Bluetooth) interface 507 and a Global Positioning System interface 508, various sensors 509_1 through 509_N (e.g., one or more of a gyroscope, an accelerometer, a magnetometer, a temperature sensor, a pressure sensor, a humidity sensor, etc.), a camera 510, a battery 511, a power management control unit 512, a speaker and microphone 513 and an audio coder/decoder 514.

An applications processor or multi-core processor 550 may include one or more general purpose processing cores 515 within its CPU 501, one or more graphical processing units 516, a memory management function 517 (e.g., a memory controller) and an I/O control function 518. The general purpose processing cores 515 typically execute the operating system and application software of the computing system. The graphics processing units 516 typically execute graphics intensive functions to, e.g., generate graphics information that is presented on the display 503. The memory control function 517 interfaces with the system memory 502. The system memory 502 may be a multi-level system memory such as the multi-level system memory discussed at length above. The memory controller may include tracker circuitry as described at length above. During operation, data and/or instructions are typically transferred between deeper non volatile (e.g., "disk") storage 520 and system memory 502. The power management control unit 512 generally controls the power consumption of the system 500.

Each of the touchscreen display 503, the communication interfaces 504-507, the GPS interface 508, the sensors 509, the camera 510, and the speaker/microphone codec 513, 514 all can be viewed as various forms of I/O (input and/or output) relative to the overall computing system including, where appropriate, an integrated peripheral device as well (e.g., the camera 510). Depending on implementation, various ones of these I/O components may be integrated on the applications processor/multi-core processor 550 or may be located off the die or outside the package of the applications processor/multi-core processor 550.

Embodiments of the invention may include various processes as set forth above. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components.

Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions.
The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other type of media/machine-readable medium suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).

In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
An embodiment may include circuitry to select, at least in part, from a plurality of memories, at least one memory to store data. The memories may be associated with respective processor cores. The circuitry may select, at least in part, the at least one memory based at least in part upon whether the data is included in at least one page that spans multiple memory lines that is to be processed by at least one of the processor cores. If the data is included in the at least one page, the circuitry may select, at least in part, the at least one memory, such that the at least one memory is proximate to the at least one of the processor cores. Many alternatives, variations, and modifications are possible. |
CLAIMS

What is claimed is:

1. An apparatus comprising: circuitry to select, at least in part, from a plurality of memories, at least one memory to store data, the plurality of memories being associated with respective processor cores, the circuitry being to select, at least in part, the at least one memory based at least in part upon whether the data is comprised in at least one page that spans multiple memory lines that is to be processed by at least one of the processor cores, and if the data is comprised in the at least one page, the circuitry being to select, at least in part, the at least one memory, such that the at least one memory is proximate to the at least one of the processor cores.

2. The apparatus of claim 1, wherein: the at least one page is allocated, at least in part, one or more physical memory addresses by at least one process executed, at least in part, by one or more of the processor cores; the one or more physical memory addresses are in a first physical memory region associated, at least in part, with one or more first data portions to be distributed to the memories based at least in part upon a page-by-page allocation; the at least one process is to allocate, at least in part, a second physical memory region associated, at least in part, with one or more second data portions to be distributed to the memories based at least in part upon a memory line-by-memory line allocation; and the circuitry is to select, at least in part, the at least one memory based at least in part upon the one or more physical addresses and in which of the physical memory regions the one or more physical memory addresses are located.

3. The apparatus of claim 2, wherein: the at least one process is to allocate, at least in part, the one or more physical memory addresses in response, at least in part, to and contemporaneous with invocation of a memory allocation function call; and the at least one process comprises at least one operating system kernel process.

4. The apparatus of claim 2, wherein: the circuitry comprises: first circuitry and second circuitry to concurrently generate, at least in part, respective values indicating, at least in part, the at least one memory, based at least in part upon the memory line-by-memory line allocation and the page-by-page allocation, respectively; and selector circuitry to select one of the respective values based at least in part upon the one or more physical addresses and in which of the physical memory regions the one or more physical memory addresses are located.

5. The apparatus of claim 1, wherein: the plurality of processor cores are communicatively coupled to each other via at least one network-on-chip; the at least one page comprises, at least in part, at least one packet received, at least in part, by a network interface controller, the at least one packet including the data; and the plurality of processor cores, the memories, and the network-on-chip are comprised in an integrated circuit chip.

6. The apparatus of claim 1, wherein: the at least one memory is local to the at least one of the processor cores and also is remote from one or more others of the processor cores; the at least one of the processor cores comprises multiple processor cores to execute respective application threads to utilize, at least in part, the at least one page; and the at least one page is allocated, at least in part, by at least one virtual machine monitor process.
7. A method comprising: selecting, at least in part, by circuitry, from a plurality of memories at least one memory to store data, the plurality of memories being associated with respective processor cores, the circuitry being to select, at least in part, the at least one memory based at least in part upon whether the data is comprised in at least one page that spans multiple memory lines that is to be processed by at least one of the processor cores, and if the data is comprised in the at least one page, the circuitry being to select, at least in part, the at least one memory, such that the at least one memory is proximate to the at least one of the processor cores.

8. The method of claim 7, wherein: the at least one page is allocated, at least in part, one or more physical memory addresses by at least one process executed, at least in part, by one or more of the processor cores; the one or more physical memory addresses are in a first physical memory region associated, at least in part, with one or more first data portions to be distributed to the memories based at least in part upon a page-by-page allocation; the at least one process is to allocate, at least in part, a second physical memory region associated, at least in part, with one or more second data portions to be distributed to the memories based at least in part upon a memory line-by-memory line allocation; and the circuitry is to select, at least in part, the at least one memory based at least in part upon the one or more physical addresses and in which of the physical memory regions the one or more physical memory addresses are located.

9. The method of claim 8, wherein: the at least one process is to allocate, at least in part, the one or more physical memory addresses in response, at least in part, to and contemporaneous with invocation of a memory allocation function call; and the at least one process comprises at least one operating system kernel process.

10. The method of claim 8, wherein: the circuitry comprises: first circuitry and second circuitry to concurrently generate, at least in part, respective values indicating, at least in part, the at least one memory, based at least in part upon the memory line-by-memory line allocation and the page-by-page allocation, respectively; and selector circuitry to select one of the respective values based at least in part upon the one or more physical addresses and in which of the physical memory regions the one or more physical memory addresses are located.

11. The method of claim 7, wherein: the plurality of processor cores are communicatively coupled to each other via at least one network-on-chip; the at least one page comprises, at least in part, at least one packet received, at least in part, by a network interface controller, the at least one packet including the data; and the plurality of processor cores, the memories, and the network-on-chip are comprised in an integrated circuit chip.

12. The method of claim 7, wherein: the at least one memory is local to the at least one of the processor cores and also is remote from one or more others of the processor cores; the at least one of the processor cores comprises multiple processor cores to execute respective application threads to utilize, at least in part, the at least one page; and the at least one page is allocated, at least in part, by at least one virtual machine monitor process.
13. Computer-readable memory storing one or more instructions that when executed by a machine result in performance of operations comprising: selecting, at least in part, by circuitry, from a plurality of memories at least one memory to store data, the plurality of memories being associated with respective processor cores, the circuitry being to select, at least in part, the at least one memory based at least in part upon whether the data is comprised in at least one page that spans multiple memory lines that is to be processed by at least one of the processor cores, and if the data is comprised in the at least one page, the circuitry being to select, at least in part, the at least one memory, such that the at least one memory is proximate to the at least one of the processor cores.

14. The computer-readable memory of claim 13, wherein: the at least one page is allocated, at least in part, one or more physical memory addresses by at least one process executed, at least in part, by one or more of the processor cores; the one or more physical memory addresses are in a first physical memory region associated, at least in part, with one or more first data portions to be distributed to the memories based at least in part upon a page-by-page allocation; the at least one process is to allocate, at least in part, a second physical memory region associated, at least in part, with one or more second data portions to be distributed to the memories based at least in part upon a memory line-by-memory line allocation; and the circuitry is to select, at least in part, the at least one memory based at least in part upon the one or more physical addresses and in which of the physical memory regions the one or more physical memory addresses are located.

15. The computer-readable memory of claim 14, wherein: the at least one process is to allocate, at least in part, the one or more physical memory addresses in response, at least in part, to and contemporaneous with invocation of a memory allocation function call; and the at least one process comprises at least one operating system kernel process.

16. The computer-readable memory of claim 14, wherein: the circuitry comprises: first circuitry and second circuitry to concurrently generate, at least in part, respective values indicating, at least in part, the at least one memory, based at least in part upon the memory line-by-memory line allocation and the page-by-page allocation, respectively; and selector circuitry to select one of the respective values based at least in part upon the one or more physical addresses and in which of the physical memory regions the one or more physical memory addresses are located.

17. The computer-readable memory of claim 13, wherein: the plurality of processor cores are communicatively coupled to each other via at least one network-on-chip; the at least one page comprises, at least in part, at least one packet received, at least in part, by a network interface controller, the at least one packet including the data; and the plurality of processor cores, the memories, and the network-on-chip are comprised in an integrated circuit chip.
18. The computer-readable memory of claim 13, wherein: the at least one memory is local to the at least one of the processor cores and also is remote from one or more others of the processor cores; the at least one of the processor cores comprises multiple processor cores to execute respective application threads to utilize, at least in part, the at least one page; and the at least one page is allocated, at least in part, by at least one virtual machine monitor process. |
CIRCUITRY TO SELECT, AT LEAST IN PART, AT LEAST ONE MEMORY

FIELD

This disclosure relates to circuitry to select, at least in part, at least one memory.

BACKGROUND

In one conventional computing arrangement, a host includes a host processor and a network interface controller. The host processor includes multiple processor cores. Each of the processor cores has a respective local cache memory. One of the cores manages a transport protocol connection implemented via the network interface controller. In this conventional arrangement, when an incoming packet that is larger than a single cache line is received by the network interface controller, a conventional direct cache access (DCA) technique is employed to directly transfer the packet to and store the packet in last-level cache in the memories. More specifically, in this conventional technique, data in the packet is distributed across multiple of the cache memories, including one or more such memories that are remote from the processor core that is managing the connection. Therefore, in order to be able to process the packet, the processor core that is managing the connection fetches the data that is stored in the remote memories and stores it in that core's local cache memory. This increases the amount of time involved in accessing and processing the packet's data. It also increases the amount of power consumed by the host processor. Other conventional techniques (e.g., flow-pinning employed by some operating system kernels in connection with receive-side scaling and interrupt request affinity techniques) have been employed in an effort to try to improve processor data locality and load balancing. However, these other conventional techniques may still result in incoming packet data being stored in one or more cache memories that are remote from the processor core that is managing the connection.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

Features and advantages of embodiments will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts, and in which: Figure 1 illustrates a system embodiment. Figure 2 illustrates features in an embodiment. Figure 3 illustrates features in an embodiment. Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. Accordingly, it is intended that the claimed subject matter be viewed broadly.

DETAILED DESCRIPTION

Figure 1 illustrates a system embodiment 100. System 100 may include host computer (HC) 10. In this embodiment, the terms "host computer," "host," "server," "client," "network node," and "node" may be used interchangeably, and may mean, for example, without limitation, one or more end stations, mobile internet devices, smart phones, media devices, input/output (I/O) devices, tablet computers, appliances, intermediate stations, network interfaces, clients, servers, and/or portions thereof. In this embodiment, data and information may be used interchangeably, and may be or comprise one or more commands (for example one or more program instructions), and/or one or more such commands may be or comprise data and/or information. Also in this embodiment, an "instruction" may include data and/or one or more commands. HC 10 may comprise circuitry 118.
Circuitry 118 may comprise, at least in part, one or more multi-core host processors (HP) 12, computer-readable/writable host system memory 21, and/or network interface controller (NIC) 406. Although not shown in the Figures, HC 10 also may comprise one or more chipsets (comprising, e.g., memory, network, and/or input/output controller circuitry). HP 12 may be capable of accessing and/or communicating with one or more other components of circuitry 118, such as, memory 21 and/or NIC 406. In this embodiment, "circuitry" may comprise, for example, singly or in any combination, analog circuitry, digital circuitry, hardwired circuitry, programmable circuitry, co-processor circuitry, state machine circuitry, and/or memory that may comprise program instructions that may be executed by programmable circuitry. Also in this embodiment, a processor, central processing unit (CPU), processor core (PC), core, and controller each may comprise respective circuitry capable of performing, at least in part, one or more arithmetic and/or logical operations, and/or of executing, at least in part, one or more instructions. Although not shown in the Figures, HC 10 may comprise a graphical user interface system that may comprise, e.g., a respective keyboard, pointing device, and display system that may permit a human user to input commands to, and monitor the operation of, HC 10 and/or system 100.

In this embodiment, memory may comprise one or more of the following types of memories: semiconductor firmware memory, programmable memory, non-volatile memory, read only memory, electrically programmable memory, random access memory, flash memory, magnetic disk memory, optical disk memory, and/or other or later-developed computer-readable and/or writable memory.

One or more machine-readable program instructions 191 may be stored, at least in part, in memory 21. In operation of HC 10, these instructions 191 may be accessed and executed by one or more host processors 12 and/or NIC 406. When executed by one or more host processors 12, these one or more instructions 191 may result in one or more operating systems (OS) 32, one or more virtual machine monitors (VMM) 41, and/or one or more application threads 195A . . . 195N being executed at least in part by one or more host processors 12, and becoming resident at least in part in memory 21. Also when instructions 191 are executed by one or more host processors 12 and/or NIC 406, these one or more instructions 191 may result in one or more host processors 12, NIC 406, one or more OS 32, one or more VMM 41, and/or one or more components thereof, such as, one or more kernels 51, one or more OS kernel processes 31, one or more VMM processes 43, performing operations described herein as being performed by these components of system 100. In this embodiment, one or more OS 32, VMM 41, kernels 51, processes 31, and/or processes 43 may be mutually distinct from each other, at least in part. Alternatively or additionally, without departing from this embodiment, one or more respective portions of one or more OS 32, VMM 41, kernels 51, processes 31, and/or processes 43 may not be mutually distinct, at least in part, from each other and/or may be comprised, at least in part, in each other. Likewise, without departing from this embodiment, NIC 406 may be distinct from one or more not shown chipsets and/or HP 12. Alternatively or additionally, NIC 406 and/or the one or more chipsets may be comprised, at least in part, in HP 12 or vice versa.
In this embodiment, HP 12 may comprise an integrated circuit chip 410 that may comprise a plurality of PC 128, 130, 132, and/or 134, a plurality of memories 120, 122, 124, and/or 126, and/or memory controller 161 communicatively coupled together by a network-on-chip 402. Alternatively, memory controller 161 may be distinct from chip 410 and/or may be comprised in the not shown chipset. Also additionally or alternatively, chip 410 may comprise a plurality of integrated circuit chips (not shown).

In this embodiment, a portion or subset of an entity may comprise all or less than all of the entity. Also, in this embodiment, a process, thread, daemon, program, driver, operating system, application, kernel, and/or VMM each may (1) comprise, at least in part, and/or (2) result, at least in part, in and/or from, execution of one or more operations and/or program instructions. Thus, in this embodiment, one or more processes 31 and/or 43 may be executed, at least in part, by one or more of the PC 128, 130, 132, and/or 134. In this embodiment, an integrated circuit chip may be or comprise one or more microelectronic devices, substrates, and/or dies. Also in this embodiment, a network may be or comprise any mechanism, instrumentality, modality, and/or portion thereof that permits, facilitates, and/or allows, at least in part, two or more entities to be communicatively coupled together. In this embodiment, a first entity may be "communicatively coupled" to a second entity if the first entity is capable of transmitting to and/or receiving from the second entity one or more commands and/or data.

Memories 120, 122, 124, and/or 126 may be associated with respective PC 128, 130, 132, and/or 134. In this embodiment, the memories 120, 122, 124, and/or 126 may be or comprise, at least in part, respective cache memories (CM) that may be primarily intended to be accessed and/or otherwise utilized by, at least in part, the respective PC 128, 130, 132, and/or 134 with which the respective memories may be associated, although one or more PC may also be capable of accessing and/or utilizing, at least in part, one or more of the memories 120, 122, 124, and/or 126 with which they may not be associated. For example, one or more CM 120 may be associated with one or more PC 128 as one or more local CM of one or more PC 128, while the other CM 122, 124, and/or 126 may be relatively more remote from one or more PC 128 (e.g., compared to one or more CM 120). Similarly, one or more CM 122 may be associated with one or more PC 130 as one or more local CM of one or more PC 130, while the other CM 120, 124, and/or 126 may be relatively more remote from one or more PC 130 (e.g., compared to one or more CM 122). Additionally, one or more CM 124 may be associated with one or more PC 132 as one or more local CM of one or more PC 132, while the other CM 120, 122, and/or 126 may be relatively more remote from one or more PC 132 (e.g., compared to one or more CM 124). Also, one or more CM 126 may be associated with one or more PC 134 as one or more local CM of one or more PC 134, while the other CM 120, 122, and/or 124 may be relatively more remote from one or more PC 134 (e.g., compared to one or more local CM 126).
Network-on-chip 402 may be or comprise, for example, a ring interconnect having multiple respective stops (e.g., not shown respective communication circuitry of respective slices of chip 410) and circuitry (not shown) to permit data, commands, and/or instructions to be routed to the stops for processing and/or storage by respective PC and/or associated CM that may be coupled to the stops. For example, each respective PC and its respective associated local CM may be coupled to one or more respective stops. Memory controller 161, NIC 406, and/or one or more of the PC 128, 130, 132, and/or 134 may be capable of issuing commands and/or data to the network-on-chip 402 that may result, at least in part, in network-on-chip 402 routing such data to the respective PC and/or its associated local CM (e.g., via the one or more respective stops that they may be coupled to) that may be intended to process and/or store the data. Alternatively or additionally, network-on-chip 402 may comprise one or more other types of networks and/or interconnects (e.g., one or more mesh networks) without departing from this embodiment.

In this embodiment, a cache memory may be or comprise memory that is capable of being more quickly and/or easily accessed by one or more entities (e.g., one or more PC) than another memory (e.g., memory 21). Although, in this embodiment, the memories 120, 122, 124, and/or 126 may comprise respective lower level cache memories, other and/or additional types of memories may be employed without departing from this embodiment. Also in this embodiment, a first memory may be considered to be relatively more local to an entity than a second memory if the first memory may be accessed more quickly and/or easily by the entity than the second memory may be accessed by the entity. Additionally or alternatively, the first memory and the second memory may be considered to be a local memory and a remote memory, respectively, with respect to the entity if the first memory is intended to be accessed and/or utilized primarily by the entity but the second memory is not intended to be primarily accessed and/or utilized by the entity.

One or more processes 31 and/or 43 may generate, allocate, and/or maintain, at least in part, in memory 21 one or more (and in this embodiment, a plurality of) pages 152A . . . 152N. Each of the pages 152A . . . 152N may comprise respective data. For example, in this embodiment, one or more pages 152A may comprise data 150. Data 150 and/or one or more pages 152A may be intended to be processed by one or more of the PC (e.g., PC 128) and may span multiple memory lines (ML) 160A . . . 160N of one or more CM 120 that may be local to and associated with the one or more PC 128. For example, in this embodiment, a memory and/or cache line of a memory may comprise an amount (e.g., the smallest amount) of data that may be discretely addressable when stored in the memory. Data 150 may be comprised in and/or generated based at least in part upon one or more packets 404 that may be received, at least in part, by NIC 406. Alternatively or additionally, data 150 may be generated, at least in part, by and/or as a result, at least in part, of the execution of one or more threads 195N by one or more PC 134. In either case, one or more respective threads 195A may be executed, at least in part, by one or more PC 128. One or more threads 195A and/or one or more PC 128 may be intended to utilize and/or process, at least in part, one or more pages 152A, data 150, and/or one or more packets 404.
The one or more PC 128 may (but are not required to) comprise multiple PC that may execute respective threads comprised in one or more threads 195A. Additionally, data 150 and/or one or more packets 404 may be comprised in one or more pages 152A. In this embodiment, circuitry 118 may comprise circuitry 301 (see Figure 3) to select, at least in part, from the memories 120, 122, 124, and/or 126, one or more memories (e.g., CM 120) to store data 150 and/or one or more pages 152A. Circuitry 301 may select, at least in part, these one or more memories 120 from among the plurality of memories based at least in part upon whether (1) the data 150 and/or one or more pages 152A span multiple memory lines (e.g., cache lines 160A . . . 160N), (2) the data 150 and/or one or more pages 152A are intended to be processed by one or more PC (e.g., PC 128) associated with the one or more memories 120, and/or (3) the data 150 is comprised in the one or more pages 152A. Circuitry 301 may select, at least in part, these one or more memories 120 in such a way that the one or more memories 120, thus selected, may be proximate to the PC 128 that is to process the data 150 and/or one or more pages 152A. In this embodiment, a memory may be considered to be proximate to a PC if the memory is local to the PC and/or is relatively more local to the PC than one or more other memories may be. In this embodiment, circuitry 301 may be comprised, at least in part, in chip 410, controller 161, the not shown chipset, and/or NIC 406. Of course, many modifications, alternatives, and/or variations are possible in this regard without departing from this embodiment, and therefore, circuitry 301 may be comprised elsewhere, at least in part, in circuitry 118.

As shown in Figure 3, circuitry 301 may comprise circuitry 302 and circuitry 304. Circuitry 302 and circuitry 304 may concurrently generate, at least in part, respective output values 308 and 310 indicating, at least in part, one or more of the CM 120, 122, 124, and/or 126 to be selected by circuitry 301. Without departing from this embodiment, however, such generation may not be concurrent, at least in part. Circuitry 302 may generate, at least in part, one or more output values 308 based at least in part upon a (e.g., cache) memory line-by-memory line allocation algorithm. Circuitry 304 may generate, at least in part, one or more output values 310 based at least in part upon a page-by-page allocation algorithm. Both the memory line-by-memory line allocation algorithm and the page-by-page allocation algorithm may respectively generate, at least in part, the respective output values 308 and 310 based upon one or more physical addresses (PHYS ADDR) respectively input to the algorithms. The memory line-by-memory line allocation algorithm may comprise one or more hash functions to determine one or more stops (e.g., corresponding to the one or more of the CM selected) of the network-on-chip 402 to which to route the data 150 (e.g., in accordance with a cache line interleaving/allocation-based scheme that allocates data for storage/processing among the CM 120, 122, 124, 126 and/or PC 128, 130, 132, and/or 134 in HP 12).
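As an illustrative sketch of the line-by-line case, the unspecified hash functions may be stood in for by a simple modulo of the cache line address; the 64 byte line size and the function name are assumptions:

    #include <stdint.h>

    /* Memory line-by-memory line interleaving: consecutive 64-byte cache
     * lines are spread across the N ring stops. A plain modulo stands in
     * for the embodiment's unspecified hash function(s). */
    static inline unsigned line_stop(uint64_t phys_addr, unsigned n_stops)
    {
        return (unsigned)((phys_addr >> 6) % n_stops);   /* 64 B lines */
    }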
The page-by-page allocation algorithm may comprise one or more mapping functions to determine one or more stops (e.g., corresponding to the one or more of the CM selected) of the network-on-chip 402 to which to route the data 150 and/or one or more pages 152A (e.g., in accordance with a page-based interleaving/allocation scheme that allocates data and/or pages for storage/processing among the CM 120, 122, 124, 126 and/or PC 128, 130, 132, and/or 134 in HP 12). The page-based interleaving/allocation scheme may allocate the data 150 and/or one or more pages 152A to the one or more selected CM on a page-by-page basis (e.g., in units of one or more pages), in contradistinction to the cache line interleaving/allocation-based scheme, which latter scheme may allocate the data 150 among one or more selected CM on a cache-line-by-cache-line basis (e.g., in units of individual cache lines). In accordance with this page-based interleaving/allocation scheme, the one or more values 310 may be equal to the remainder (R) that results from the division of respective physical page number(s) (P) of one or more pages 152A by the aggregate number (N) of stops/slices corresponding to CM 120, 122, 124, 126. When put into mathematical terms, this may be expressed as: R = P mod N.

Circuitry 301 may comprise selector circuitry 306. Selector circuitry 306 may select one set of the respective values 308, 310 to output from circuitry 301 as one or more values 350. The one or more values 350 output from circuitry 301 may select and/or correspond, at least in part, to one or more stops of the network-on-chip 402 to which to route the data 150 and/or one or more pages 152A. These one or more stops may correspond, at least in part, to (and therefore select) the one or more CM (e.g., CM 120) that is to store the data 150 and/or one or more pages 152A. For example, in response, at least in part, to the one or more output values 350, controller 161 and/or network-on-chip 402 may route the data 150 and/or one or more pages 152A to these one or more stops, and the one or more CM 120 that correspond to these one or more stops may store the data 150 and/or one or more pages 152A routed thereto. Circuitry 306 may select which of the one or more values 308, 310 to output from circuitry 301 as one or more values 350 based at least in part upon the one or more physical addresses PHYS ADDR and one or more physical memory regions in which these one or more physical addresses PHYS ADDR may be located. This latter criterion may be determined, at least in part, by comparator circuitry 311 in circuitry 301. For example, comparator 311 may receive, as inputs, the one or more physical addresses PHYS ADDR and one or more values 322 stored in one or more registers 320. The one or more values 322 may correspond to a maximum physical address (e.g., ADDR N in Figure 2) of one or more physical memory regions (e.g., MEM REG A in Figure 2). Comparator 311 may compare one or more physical addresses PHYS ADDR to one or more values 322. If the one or more physical addresses PHYS ADDR are less than or equal to one or more values 322 (e.g., if one or more addresses PHYS ADDR corresponds to ADDR A in one or more regions MEM REG A), comparator 311 may output one or more values 340 to selector 306 that may indicate that one or more physical addresses PHYS ADDR are located in one or more memory regions MEM REG A in Figure 2. This may result in selector 306 selecting, as one or more values 350, one or more values 310.
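The R = P mod N mapping and the comparator/selector behavior just described can be sketched together as follows. The 4 KiB page size and all names are assumptions; only the modulo formula itself comes from the embodiment. With N = 4 slices, for example, physical page 7 maps to slice 7 mod 4 = 3, so every cache line of that page is routed to the same stop:

    #include <stdint.h>

    /* Page-by-page mapping from the embodiment: R = P mod N, where P is
     * the physical page number and N the number of stops/slices. */
    static inline unsigned page_stop(uint64_t page_no, unsigned n_stops)
    {
        return (unsigned)(page_no % n_stops);
    }

    /* Selector sketch (circuitry 306/311): addresses at or below the
     * boundary register value 322 lie in MEM REG A and take the
     * page-based value 310; all other addresses take the line-based
     * value 308. Assumes 64 B lines and 4 KiB pages. */
    unsigned select_stop(uint64_t phys_addr, uint64_t reg322, unsigned n_stops)
    {
        unsigned v308 = (unsigned)((phys_addr >> 6) % n_stops);  /* line-based */
        unsigned v310 = page_stop(phys_addr >> 12, n_stops);     /* page-based */
        return (phys_addr <= reg322) ? v310 : v308;
    }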
Conversely, if the one or more physical addresses PHYS ADDR are greater than one or more values 322, comparator 311 may output one or more values 340 to selector 306 that may indicate that one or more physical addresses PHYS ADDR are not located in one or more memory regions MEM REG A, but instead may be located in one or more other memory regions (e.g., in one or more of MEM REG B . . . N, see Figure 2). This may result in selector 306 selecting, as one or more values 350, one or more values 308.

For example, as shown in Figure 2, one or more processes 31 and/or 43 may configure, allocate, establish, and/or maintain, at least in part, in memory 21, at runtime following restart of HC 10, memory regions MEM REG A . . . N. One or more (e.g., MEM REG A) of these regions MEM REG A . . . N may be devoted to storing one or more pages of data that are to be allocated and/or routed to, and/or stored in, one or more selected CM in accordance with the page-based interleaving/allocation scheme. Conversely, one or more other memory regions (e.g., MEM REG B . . . N) may be devoted to storing one or more pages of data that are to be allocated and/or routed to, and/or stored in, one or more selected CM in accordance with the cache line interleaving/allocation-based scheme. Contemporaneously with the establishment of memory regions MEM REG A . . . N, one or more processes 31 and/or 43 may store in one or more registers 320 one or more values 322. As seen previously, one or more physical memory regions MEM REG A may comprise one or more (and in this embodiment, a plurality of) physical memory addresses ADDR A . . . N. One or more memory regions MEM REG A and/or memory addresses ADDR A . . . N may be associated, at least in part, with (and/or store) one or more data portions (DP) 180A . . . 180N that are to be distributed to one or more of the CM based at least in part upon the page-based interleaving/allocation scheme (e.g., on a whole page-by-page allocation basis). Conversely, one or more memory regions MEM REG B may be associated, at least in part, with (and/or store) one or more other DP 204A . . . 204N that are to be distributed to one or more of the CM based at least in part upon the cache line interleaving/allocation-based scheme (e.g., on an individual cache memory line-by-cache memory line allocation basis).

By way of example, in operation, after one or more packets 404 are received, at least in part, by NIC 406, one or more processes 31, one or more processes 43, and/or one or more threads 195A executed by one or more PC 128 may invoke a physical page memory allocation function call 190 (see Figure 2). In this embodiment, although many alternatives are possible, one or more threads 195A may process packet 404 and/or data 150 in accordance with a Transmission Control Protocol (TCP) described in Internet Engineering Task Force (IETF) Request For Comments (RFC) 793 published September 1981. In response to, at least in part, and/or contemporaneous with the invocation of call 190 by one or more threads 195A, one or more processes 31 and/or 43 may allocate, at least in part, physical addresses ADDR A . . . N in one or more regions MEM REG A, and may store DP 180A . . . 180N in one or more memory regions MEM REG A in association with (e.g., at) addresses ADDR A . . . N. In this example, DP 180A . . . 180N may be comprised in one or more pages 152A, and one or more pages 152A may be comprised in one or more memory regions MEM REG A.
DP 180A . . . 180N may comprise respective subsets of data 150 and/or one or more packets 404 that when appropriately aggregated may correspond to data 150 and/or one or more packets 404. One or more processes 31 and/or 43 may select (e.g., via receive side scaling and/or interrupt request affinity mechanisms) which PC (e.g., PC 128) in HP 12 may execute one or more threads 195A intended to process and/or consume data 150 and/or one or more packets 404. One or more processes 31 and/or 43 may select one or more pages 152A and/or addresses ADDR A . . . N in one or more regions MEM REG A to store DP 180A . . . 180N that may map (e.g., in accordance with the page-based interleaving/allocation scheme) to the CM (e.g., CM 120) associated with the PC 128 that executes one or more threads 195A. This may result in circuitry 301 selecting, as one or more values 350, one or more values 310 that may result in one or more pages 152A being routed and stored, in their entirety, to one or more CM 120. As a result, one or more threads 195A executed by one or more PC 128 may access, utilize, and/or process data 150 and/or one or more packets 404 entirely from one or more local CM 120.

Advantageously, in this embodiment, this may permit all of the data 150 and/or the entirety of one or more packets 404 that are intended to be processed by one or more threads 195A to be stored in the particular slice and/or one or more CM 120 that may be local with respect to the one or more PC 128 executing the one or more threads 195A, instead of being distributed in one or more remote slices and/or CM. This may significantly reduce the time involved in accessing and/or processing data 150 and/or one or more packets 404 by one or more threads 195A in this embodiment. Also, in this embodiment, this may permit one or more slices and/or PC other than the particular slice and PC 128 involved in executing one or more threads 195A to be put into and/or remain in relatively low power states (e.g., relative to higher power and/or fully operational states). Advantageously, this may permit power consumption by the HP 12 to be reduced in this embodiment. Furthermore, in this embodiment, if data 150 and/or one or more packets 404 exceed the size of one or more CM 120, one or more other pages in one or more pages 152A may be stored, on a whole page-by-page basis, based upon CM proximity to one or more PC 128. Advantageously, in this embodiment, this may permit these one or more other pages to be stored in one or more other, relatively less remote CM (e.g., CM 122) than one or more of the other available CM (e.g., CM 124). Further advantageously, the foregoing teachings of this embodiment may be applied to improve performance of data consumer/producer scenarios other than and/or in addition to TCP/packet processing. Additionally, in this embodiment, in the case where it may not be desired to impose affinity between data 150 and one or more PC intended to process data 150, data 150 may be stored in one or more memory regions other than one or more regions MEM REG A. This may result in circuitry 301 selecting, as one or more values 350, one or more values 308 that may result in data 150 being routed and stored in one or more CM in accordance with the cache line interleaving/allocation-based scheme. Thus, advantageously, this embodiment may exhibit improved flexibility in terms of the interleaving/allocation scheme that may be employed, depending upon the type of data that is to be routed.
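The page-selection step of the example above can be sketched as a pinning allocator: to place one or more pages 152A in the CM local to the PC executing thread 195A, the allocator picks a free physical page in MEM REG A whose page number maps (P mod N) to that PC's slice. Everything here, including the free-page iterator, is an illustrative assumption; a real OS kernel would walk its free list for the region instead:

    #include <stdint.h>

    #define N_SLICES 4u   /* assumed slice/stop count */

    /* Stand-in for walking the OS free list of MEM REG A; returns page
     * numbers in sequence for illustration only. */
    static uint64_t next_free_page_in_region_a(void)
    {
        static uint64_t next = 0;
        return next++;
    }

    /* Pick a free physical page in MEM REG A that the page-based scheme
     * (R = P mod N) will route to the target slice's local CM. */
    uint64_t alloc_page_for_slice(unsigned target_slice)
    {
        for (;;) {
            uint64_t p = next_free_page_in_region_a();
            if (p % N_SLICES == target_slice)
                return p;
        }
    }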
Further advantageously, in this embodiment, if it is desired, DCA still may be employed. Thus, an embodiment may include circuitry to select, at least in part, from a plurality of memories, at least one memory to store data. The memories may be associated with respective processor cores. The circuitry may select, at least in part, the at least one memory based at least in part upon whether the data is included in at least one page that spans multiple memory lines and that is to be processed by at least one of the processor cores. If the data is included in the at least one page, the circuitry may select, at least in part, the at least one memory, such that the at least one memory is proximate to the at least one of the processor cores. Many modifications are possible. Accordingly, this embodiment should be viewed broadly as encompassing all such alternatives, modifications, and variations. |
Disclosed in some examples are methods, systems, machine-readable mediums, and NAND devices which create logical partitions when requested to create a physical partition. The controller on the NAND mimics the creation of the physical partition to the host device that requested the physical partition. Thus, the host device sees the logical partition as a physical partition. Despite this, the NAND does not incur the memory storage expense of creating a separate partition, and additionally the NAND can borrow cells for overprovisioning from another partition. In these examples, a host device operating system believes that a physical partition has been created, but the NAND manages the memory as a contiguous pool of resources. Thus, a logical partition is created at the NAND memory controller level - as opposed to at the operating system level. |
CLAIMS:
1. A NAND memory device comprising:
a NAND memory array including a first pool of memory;
a controller, the controller executing instructions to cause the controller to perform operations comprising:
receiving a command from a host to create a physical partition of the first pool of memory;
creating a NAND-level logical partition utilizing the first pool of memory instead of creating the physical partition, the first pool of memory shared with a second logical partition;
sending a response to the host indicative that the physical partition has been created;
translating a request from the host identifying the physical partition and a Logical Block Address (LBA) to a physical address of the first pool of memory; and
executing the request on the physical address of the first pool of memory.
2. The NAND memory device of claim 1, wherein the operations of sending the response to the host indicative that the physical partition has been created comprise providing a physical partition identifier and a range of LBAs between zero and a number based upon a size provided by the command from the host to create the physical partition.
3. The NAND memory device of claim 2, wherein the operations of translating the request from the host identifying the physical partition and a Logical Block Address of the request to a physical address of the first pool comprise mapping a received physical partition identifier and an LBA in the range of LBAs to a physical location in the first pool of memory and performing a request from the host on the physical location.
4. The NAND memory device of claim 2, wherein the physical partition is identified by a logical unit identifier number (LUN).
5. The NAND memory device of claim 1, wherein memory cells from the first pool of memory that are used to service the physical partition are configured to meet a requirement specified by the host in the command to create the physical partition.
6. The NAND memory device of claim 5, wherein the requirement is that memory cells be configured as a Single Layer Cell (SLC).
7. The NAND memory device of claim 1, wherein the NAND-level logical partition appears to the host to be a physical partition, and wherein the response indicative that the physical partition has been created is sent to the host without creating the physical partition.
8. A method comprising:
receiving a command from a host to create a physical partition of a first pool of memory on a NAND memory device;
creating a NAND-level logical partition utilizing the first pool of memory instead of creating the physical partition, the first pool of memory shared with a second logical partition;
sending a response to the host indicative that the physical partition has been created;
translating a request from the host identifying the physical partition and a Logical Block Address (LBA) to a physical address of the first pool of memory; and
executing the request on the physical address of the first pool of memory.
9. The method of claim 8, wherein sending the response to the host indicative that the physical partition has been created comprises providing a physical partition identifier and a range of LBAs between zero and a number based upon a size provided by the command from the host to create the physical partition.
10. The method of claim 9, wherein translating the request from the host identifying the physical partition and a Logical Block Address of the request to a physical address of the first pool comprises mapping a received physical partition identifier and an LBA in the range of LBAs to a physical location in the first pool of memory and performing a request from the host on the physical location.
11. The method of claim 9, wherein the physical partition is identified by a logical unit identifier number (LUN).
12. The method of claim 8, wherein memory cells from the first pool of memory that are used to service the physical partition are configured to meet a requirement specified by the host in the command to create the physical partition.
13. The method of claim 12, wherein the requirement is that memory cells be configured as a Single Layer Cell (SLC).
14. The method of claim 8, wherein the NAND-level logical partition appears to the host to be a physical partition, and wherein the response indicative that the physical partition has been created is sent to the host without creating the physical partition.
15. A machine-readable medium comprising instructions, which when executed, cause a machine to perform operations comprising:
receiving a command from a host to create a physical partition of a first pool of memory on a NAND device;
creating a NAND-level logical partition utilizing the first pool of memory instead of creating the physical partition, the first pool of memory shared with a second logical partition;
sending a response to the host indicative that the physical partition has been created;
translating a request from the host identifying the physical partition and a Logical Block Address (LBA) to a physical address of the first pool of memory; and
executing the request on the physical address of the first pool of memory.
16. The machine-readable medium of claim 15, wherein the operations of sending the response to the host indicative that the physical partition has been created comprise providing a physical partition identifier and a range of LBAs between zero and a number based upon a size provided by the command from the host to create the physical partition.
17. The machine-readable medium of claim 16, wherein the operations of translating the request from the host identifying the physical partition and a Logical Block Address of the request to a physical address of the first pool comprise mapping a received physical partition identifier and an LBA in the range of LBAs to a physical location in the first pool of memory and performing a request from the host on the physical location.
18. The machine-readable medium of claim 16, wherein the physical partition is identified by a logical unit identifier number (LUN).
19. The machine-readable medium of claim 15, wherein memory cells from the first pool of memory that are used to service the physical partition are configured to meet a requirement specified by the host in the command to create the physical partition.
20. The machine-readable medium of claim 19, wherein the requirement is that memory cells be configured as a Single Layer Cell (SLC).
21. The machine-readable medium of claim 15, wherein the NAND-level logical partition appears to the host to be a physical partition, and wherein the response indicative that the physical partition has been created is sent to the host without creating the physical partition. |
COMMON POOL MANAGEMENT
PRIORITY APPLICATION
[0001] This application claims the benefit of priority to U.S. Application Serial Number 15/799,508, filed October 31, 2017, which is incorporated herein by reference in its entirety.
BACKGROUND
[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory.
[0003] Volatile memory requires power to maintain its data, and includes random-access memory (RAM), dynamic random-access memory (DRAM), or synchronous dynamic random-access memory (SDRAM), among others.
[0004] Non-volatile memory can retain stored data when not powered, and includes flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), static RAM (SRAM), erasable programmable ROM (EPROM), resistance variable memory, such as phase-change random-access memory (PCRAM), resistive random-access memory (RRAM), magnetoresistive random-access memory (MRAM), or 3D XPoint™ memory, among others.
[0005] Flash memory is utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically include one or more groups of one-transistor, floating gate or charge trap memory cells that allow for high memory densities, high reliability, and low power consumption.
[0006] Two common types of flash memory array architectures include NAND and NOR architectures, named after the logic form in which the basic memory cell configuration of each is arranged. The memory cells of the memory array are typically arranged in a matrix. In an example, the gates of each floating gate memory cell in a row of the array are coupled to an access line (e.g., a word line). In a NOR architecture, the drains of each memory cell in a column of the array are coupled to a data line (e.g., a bit line). In a NAND architecture, the drains of each memory cell in a string of the array are coupled together in series, source to drain, between a source line and a bit line.
[0007] Both NOR and NAND architecture semiconductor memory arrays are accessed through decoders that activate specific memory cells by selecting the word line coupled to their gates. In a NOR architecture semiconductor memory array, once activated, the selected memory cells place their data values on bit lines, causing different currents to flow depending on the state at which a particular cell is programmed. In a NAND architecture semiconductor memory array, a high bias voltage is applied to a drain-side select gate (SGD) line. Word lines coupled to the gates of the unselected memory cells of each group are driven at a specified pass voltage (e.g., Vpass) to operate the unselected memory cells of each group as pass transistors (e.g., to pass current in a manner that is unrestricted by their stored data values). Current then flows from the source line to the bit line through each series coupled group, restricted only by the selected memory cells of each group, placing current encoded data values of selected memory cells on the bit lines.
[0008] Each flash memory cell in a NOR or NAND architecture semiconductor memory array can be programmed individually or collectively to one or a number of programmed states.
For example, a single-level cell (SLC) can represent one of two programmed states (e.g., 1 or 0), representing one bit of data.
[0009] However, flash memory cells can also represent one of more than two programmed states, allowing the manufacture of higher density memories without increasing the number of memory cells, as each cell can represent more than one binary digit (e.g., more than one bit). Such cells can be referred to as multi-state memory cells, multi-digit cells, or multi-level cells (MLCs). In certain examples, MLC can refer to a memory cell that can store two bits of data per cell (e.g., one of four programmed states), a triple-level cell (TLC) can refer to a memory cell that can store three bits of data per cell (e.g., one of eight programmed states), and a quad-level cell (QLC) can store four bits of data per cell. MLC is used herein in its broader context, to refer to any memory cell that can store more than one bit of data per cell (i.e., that can represent more than two programmed states).
[0010] Traditional memory arrays are two-dimensional (2D) structures arranged on a surface of a semiconductor substrate. To increase memory capacity for a given area, and to decrease cost, the size of the individual memory cells has decreased. However, there is a technological limit to the reduction in size of the individual memory cells, and thus, to the memory density of 2D memory arrays. In response, three-dimensional (3D) memory structures, such as 3D NAND architecture semiconductor memory devices, are being developed to further increase memory density and lower memory cost.
[0011] Such 3D NAND devices often include strings of storage cells, coupled in series (e.g., drain to source), between one or more source-side select gates (SGSs) proximate a source, and one or more drain-side select gates (SGDs) proximate a bit line. In an example, the SGSs or the SGDs can include one or more field-effect transistors (FETs) or metal-oxide semiconductor (MOS) structure devices, etc. In some examples, the strings will extend vertically, through multiple vertically spaced tiers containing respective word lines. A semiconductor structure (e.g., a polysilicon structure) may extend adjacent a string of storage cells to form a channel for the storage cells of the string. In the example of a vertical string, the polysilicon structure may be in the form of a vertically extending pillar. In some examples the string may be "folded," and thus arranged relative to a U-shaped pillar. In other examples, multiple vertical structures may be stacked upon one another to form stacked arrays of storage cell strings.
[0012] Memory arrays or devices can be combined together to form a storage volume of a memory system, such as a solid-state drive (SSD), a Universal Flash Storage (UFS™) device, a MultiMediaCard (MMC) solid-state storage device, an embedded MMC device (eMMC™), etc. An SSD can be used as, among other things, the main storage device of a computer, having advantages over traditional hard drives with moving parts with respect to, for example, performance, size, weight, ruggedness, operating temperature range, and power consumption. For example, SSDs can have reduced seek time, latency, or other delay associated with magnetic disk drives (e.g., electromechanical, etc.).
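Referring back to the cell types described in the preceding paragraphs, the relationship between bits per cell and programmed states is simple arithmetic: a cell storing n bits must distinguish 2^n states. A minimal illustration follows; the table values are the standard definitions given above, not device-specific data.

```python
# Bits per cell vs. programmed states for the cell types discussed above.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s) per cell -> {2 ** bits} programmed states")
# SLC: 1 bit(s) per cell -> 2 programmed states
# ...
# QLC: 4 bit(s) per cell -> 16 programmed states
```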
SSDs use non-volatile memory cells, such as flash memory cells, to obviate internal battery supply requirements, thus allowing the drive to be more versatile and compact.
[0013] An SSD can include a number of memory devices, including a number of dies or logical units (e.g., logical unit numbers or LUNs), and can include one or more processors or other controllers performing logic functions required to operate the memory devices or interface with external systems. Such SSDs may include one or more flash memory die, including a number of memory arrays and peripheral circuitry thereon. The flash memory arrays can include a number of blocks of memory cells organized into a number of physical pages. In many examples, the SSDs will also include DRAM or SRAM (or other forms of memory die or other memory structures). The SSD can receive commands from a host in association with memory operations, such as read or write operations to transfer data (e.g., user data and associated integrity data, such as error data and address data, etc.) between the memory devices and the host, or erase operations to erase data from the memory devices.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
[0015] FIG. 1 illustrates an example of an environment including a memory device.
[0016] FIGS. 2-3 illustrate schematic diagrams of an example of a 3D NAND architecture semiconductor memory array.
[0017] FIG. 4 illustrates an example block diagram of a memory module.
[0018] FIG. 5 illustrates a flowchart of a method for creating a logical partition in response to a request to create a physical partition according to some examples of the present disclosure.
[0019] FIG. 6 illustrates a flowchart of a method of a NAND controller processing a host command directed to a physical partition that was created by the NAND as a logical partition according to some examples of the present disclosure.
[0020] FIG. 7 shows a schematic of a memory controller according to some examples of the present disclosure.
[0021] FIG. 8 is a block diagram illustrating an example of a machine upon which one or more embodiments may be implemented.
DETAILED DESCRIPTION
[0022] Electronic devices, such as mobile electronic devices (e.g., smart phones, tablets, etc.), electronic devices for use in automotive applications (e.g., automotive sensors, control units, driver-assistance systems, passenger safety or comfort systems, etc.), and internet-connected appliances or devices (e.g., internet-of-things (IoT) devices, etc.), have varying storage needs depending on, among other things, the type of electronic device, use environment, performance expectations, etc.
[0023] Electronic devices can be broken down into several main components: a processor (e.g., a central processing unit (CPU) or other main processor); memory (e.g., one or more volatile or non-volatile random-access memory (RAM) memory devices, such as dynamic RAM (DRAM), mobile or low-power double-data-rate synchronous DRAM (DDR SDRAM), etc.); and a storage device (e.g., non-volatile memory (NVM) device, such as flash memory, read-only memory (ROM), an SSD, an MMC, or other memory card structure or assembly, etc.).
In certain examples, electronic devices can include a user interface (e.g., a display, touch-screen, keyboard, one or more buttons, etc.), a graphics processing unit (GPU), a power management circuit, a baseband processor or one or more transceiver circuits, etc.
[0024] FIG. 1 illustrates an example of an environment 100 including a host device 105 and a memory device 110 configured to communicate over a communication interface. The host device 105 or the memory device 110 may be included in a variety of products 150, such as Internet of Things (IoT) devices (e.g., a refrigerator or other appliance, sensor, motor or actuator, mobile communication device, automobile, drone, etc.) to support processing, communications, or control of the product 150.
[0025] The memory device 110 includes a memory controller 115 and a memory array 120 including, for example, a number of individual memory die (e.g., a stack of three-dimensional (3D) NAND die). In 3D architecture semiconductor memory technology, vertical structures are stacked, increasing the number of tiers, physical pages, and accordingly, the density of a memory device (e.g., a storage device). In an example, the memory device 110 can be a discrete memory or storage device component of the host device 105. In other examples, the memory device 110 can be a portion of an integrated circuit (e.g., system on a chip (SOC), etc.), stacked or otherwise included with one or more other components of the host device 105.
[0026] One or more communication interfaces can be used to transfer data between the memory device 110 and one or more other components of the host device 105, such as a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, a Universal Flash Storage (UFS) interface, an eMMC™ interface, or one or more other connectors or interfaces. The host device 105 can include a host system, an electronic device, a processor, a memory card reader, or one or more other electronic devices external to the memory device 110. In some examples, the host 105 may be a machine having some portion, or all, of the components discussed in reference to the machine 800 of FIG. 8.
[0027] The memory controller 115 can receive instructions from the host 105, and can communicate with the memory array, such as to transfer data to (e.g., write or erase) or from (e.g., read) one or more of the memory cells, planes, sub-blocks, blocks, or pages of the memory array. The memory controller 115 can include, among other things, circuitry or firmware, including one or more components or integrated circuits. For example, the memory controller 115 can include one or more memory control units, circuits, or components configured to control access across the memory array 120 and to provide a translation layer between the host 105 and the memory device 110. The memory controller 115 can include one or more input/output (I/O) circuits, lines, or interfaces to transfer data to or from the memory array 120. The memory controller 115 can include a memory manager 125 and an array controller 135.
[0028] The memory manager 125 can include, among other things, circuitry or firmware, such as a number of components or integrated circuits associated with various memory management functions. For purposes of the present description, example memory operation and management functions will be described in the context of NAND memory.
Persons skilled in the art will recognize that other forms of non-volatile memory may have analogous memory operations or management functions. Such NAND management functions include wear leveling (e.g., garbage collection or reclamation), error detection or correction, block retirement, or one or more other memory management functions. The memory manager 125 can parse or format host commands (e.g., commands received from a host) into device commands (e.g., commands associated with operation of a memory array, etc.), or generate device commands (e.g., to accomplish various memory management functions) for the array controller 135 or one or more other components of the memory device 110.
[0029] The memory manager 125 can include a set of management tables 130 configured to maintain various information associated with one or more components of the memory device 110 (e.g., various information associated with a memory array or one or more memory cells coupled to the memory controller 115). For example, the management tables 130 can include information regarding block age, block erase count, error history, or one or more error counts (e.g., a write operation error count, a read bit error count, a read operation error count, an erase error count, etc.) for one or more blocks of memory cells coupled to the memory controller 115. In certain examples, if the number of detected errors for one or more of the error counts is above a threshold, the bit error can be referred to as an uncorrectable bit error. The management tables 130 can maintain a count of correctable or uncorrectable bit errors, among other things.
[0030] The array controller 135 can include, among other things, circuitry or components configured to control memory operations associated with writing data to, reading data from, or erasing one or more memory cells of the memory device 110 coupled to the memory controller 115. The memory operations can be based on, for example, host commands received from the host 105, or internally generated by the memory manager 125 (e.g., in association with wear leveling, error detection or correction, etc.).
[0031] The array controller 135 can include an error correction code (ECC) component 140, which can include, among other things, an ECC engine or other circuitry configured to detect or correct errors associated with writing data to or reading data from one or more memory cells of the memory device 110 coupled to the memory controller 115. The memory controller 115 can be configured to actively detect and recover from error occurrences (e.g., bit errors, operation errors, etc.) associated with various operations or storage of data, while maintaining integrity of the data transferred between the host 105 and the memory device 110, or maintaining integrity of stored data (e.g., using redundant RAID storage, etc.), and can remove (e.g., retire) failing memory resources (e.g., memory cells, memory arrays, pages, blocks, etc.) to prevent future errors.
[0032] The memory array 120 can include several memory cells arranged in, for example, a number of devices, planes, sub-blocks, blocks, or pages. As one example, a 48 GB TLC NAND memory device can include 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device.
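To make the geometry arithmetic concrete, the following sketch works through the raw user capacity implied by the figures just given (each 18,592 B page being 16,384 B of user data plus 2,208 B of spare area). The calculation is illustrative only; the gap between the computed raw figure and the nominal 48 GB is consistent with space held back for overprovisioning and bad-block reserve, though the exact accounting is device-specific.

```python
# Raw user capacity implied by the example TLC geometry above (illustrative).
user_bytes_per_page = 16_384
pages_per_block = 1_536
blocks_per_plane = 548
planes_per_device = 4

raw_user_bytes = (user_bytes_per_page * pages_per_block
                  * blocks_per_plane * planes_per_device)
print(raw_user_bytes)        # 55163486208
print(raw_user_bytes / 1e9)  # ~55.2 GB raw vs. the nominal 48 GB
```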
As another example, a 32 GB MLC memory device (storing two bits of data per cell (i.e., 4 programmable states)) can include 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1024 pages per block, 548 blocks per plane, and 4 planes per device, but with half the required write time and twice the program/erase (P/E) cycles as a corresponding TLC memory device. Other examples can include other numbers or arrangements. In some examples, a memory device, or a portion thereof, may be selectively operated in SLC mode, or in a desired MLC mode (such as TLC, QLC, etc.).
[0033] In operation, data is typically written to or read from the NAND memory device 110 in pages, and erased in blocks. However, one or more memory operations (e.g., read, write, erase, etc.) can be performed on larger or smaller groups of memory cells, as desired. The data transfer size of a NAND memory device 110 is typically referred to as a page, whereas the data transfer size of a host is typically referred to as a sector.
[0034] Although a page of data can include a number of bytes of user data (e.g., a data payload including a number of sectors of data) and its corresponding metadata, the size of the page often refers only to the number of bytes used to store the user data. As an example, a page of data having a page size of 4 KB may include 4 KB of user data (e.g., 8 sectors assuming a sector size of 512 B) as well as a number of bytes (e.g., 32 B, 54 B, 224 B, etc.) of metadata corresponding to the user data, such as integrity data (e.g., error detecting or correcting code data), address data (e.g., logical address data, etc.), or other metadata associated with the user data.
[0035] Different types of memory cells or memory arrays 120 can provide for different page sizes, or may require different amounts of metadata associated therewith. For example, different memory device types may have different bit error rates, which can lead to different amounts of metadata necessary to ensure integrity of the page of data (e.g., a memory device with a higher bit error rate may require more bytes of error correction code data than a memory device with a lower bit error rate). As an example, a multi-level cell (MLC) NAND flash device may have a higher bit error rate than a corresponding single-level cell (SLC) NAND flash device. As such, the MLC device may require more metadata bytes for error data than the corresponding SLC device.
[0036] FIG. 2 illustrates an example schematic diagram of a 3D NAND architecture semiconductor memory array 200 including a number of strings of memory cells (e.g., first-third A0 memory strings 205A0-207A0, first-third An memory strings 205An-207An, first-third B0 memory strings 205B0-207B0, first-third Bn memory strings 205Bn-207Bn, etc.), organized in blocks (e.g., block A 201A, block B 201B, etc.) and sub-blocks (e.g., sub-block A0 201A0, sub-block An 201An, sub-block B0 201B0, sub-block Bn 201Bn, etc.). The memory array 200 represents a portion of a greater number of similar structures that would typically be found in a block, device, or other unit of a memory device.
[0037] Each string of memory cells includes a number of tiers of charge storage transistors (e.g., floating gate transistors, charge-trapping structures, etc.) stacked in the Z direction, source to drain, between a source line (SRC) 235 or a source-side select gate (SGS) (e.g., first-third A0 SGS 231A0-233A0, first-third An SGS 231An-233An, first-third B0 SGS 231B0-233B0, first-third Bn SGS 231Bn-233Bn, etc.)
and a drain-side select gate (SGD) (e.g., first-third A0 SGD 226A0-228A0, first-third An SGD 226An-228An, first-third B0 SGD 226B0-228B0, first-third Bn SGD 226Bn-228Bn, etc.). Each string of memory cells in the 3D memory array can be arranged along the X direction as data lines (e.g., bit lines (BL) BL0-BL2 220-222), and along the Y direction as physical pages.
[0038] Within a physical page, each tier represents a row of memory cells, and each string of memory cells represents a column. A sub-block can include one or more physical pages. A block can include a number of sub-blocks (or physical pages) (e.g., 128, 256, 384, etc.). Although illustrated herein as having two blocks, each block having two sub-blocks, each sub-block having a single physical page, each physical page having three strings of memory cells, and each string having 8 tiers of memory cells, in other examples, the memory array 200 can include more or fewer blocks, sub-blocks, physical pages, strings of memory cells, memory cells, or tiers. For example, each string of memory cells can include more or fewer tiers (e.g., 16, 32, 64, 128, etc.), as well as one or more additional tiers of semiconductor material above or below the charge storage transistors (e.g., select gates, data lines, etc.), as desired. As an example, a 48 GB TLC NAND memory device can include 18,592 bytes (B) of data per page (16,384 + 2208 bytes), 1536 pages per block, 548 blocks per plane, and 4 or more planes per device.
[0039] Each memory cell in the memory array 200 includes a control gate (CG) coupled to (e.g., electrically or otherwise operatively connected to) an access line (e.g., word lines (WL) WL00-WL70 210A-217A, WL01-WL71 210B-217B, etc.), which collectively couples the control gates (CGs) across a specific tier, or a portion of a tier, as desired. Specific tiers in the 3D memory array, and accordingly, specific memory cells in a string, can be accessed or controlled using respective access lines. Groups of select gates can be accessed using various select lines. For example, first-third A0 SGD 226A0-228A0 can be accessed using an A0 SGD line SGDA0 225A0, first-third An SGD 226An-228An can be accessed using an An SGD line SGDAn 225An, first-third B0 SGD 226B0-228B0 can be accessed using a B0 SGD line SGDB0 225B0, and first-third Bn SGD 226Bn-228Bn can be accessed using a Bn SGD line SGDBn 225Bn. First-third A0 SGS 231A0-233A0 and first-third An SGS 231An-233An can be accessed using a gate select line SGS0 230A, and first-third B0 SGS 231B0-233B0 and first-third Bn SGS 231Bn-233Bn can be accessed using a gate select line SGS1 230B.
[0040] In an example, the memory array 200 can include a number of levels of semiconductor material (e.g., polysilicon, etc.) configured to couple the control gates (CGs) of each memory cell or select gate (or a portion of the CGs or select gates) of a respective tier of the array. Specific strings of memory cells in the array can be accessed, selected, or controlled using a combination of bit lines (BLs) and select gates, etc., and specific memory cells at one or more tiers in the specific strings can be accessed, selected, or controlled using one or more access lines (e.g., word lines).
[0041] FIG.
3 illustrates an example schematic diagram of a portion of a NAND architecture semiconductor memory array 300 including a plurality of memory cells 302 arranged in a two-dimensional array of strings (e.g., first-third strings 305-307) and tiers (e.g., illustrated as respective word lines (WL) WL0-WL7 310-317, a drain-side select gate (SGD) line 325, a source-side select gate (SGS) line 330, etc.), and sense amplifiers or devices 360. For example, the memory array 300 can illustrate an example schematic diagram of a portion of one physical page of memory cells of a 3D NAND architecture semiconductor memory device, such as illustrated in FIG. 2.
[0042] Each string of memory cells is coupled to a source line (SRC) using a respective source-side select gate (SGS) (e.g., first-third SGS 331-333), and to a respective data line (e.g., first-third bit lines (BL) BL0-BL2 320-322) using a respective drain-side select gate (SGD) (e.g., first-third SGD 326-328). Although illustrated with 8 tiers (e.g., using word lines (WL) WL0-WL7 310-317) and three data lines (BL0-BL2 320-322) in the example of FIG. 3, other examples can include strings of memory cells having more or fewer tiers or data lines, as desired.
[0043] In a NAND architecture semiconductor memory array, such as the example memory array 300, the state of a selected memory cell 302 can be accessed by sensing a current or voltage variation associated with a particular data line containing the selected memory cell. The memory array 300 can be accessed (e.g., by a control circuit, one or more processors, digital logic, etc.) using one or more drivers. In an example, one or more drivers can activate a specific memory cell, or set of memory cells, by driving a particular potential to one or more data lines (e.g., bit lines BL0-BL2), access lines (e.g., word lines WL0-WL7), or select gates, depending on the type of operation desired to be performed on the specific memory cell or set of memory cells.
[0044] To program or write data to a memory cell, a programming voltage (Vpgm) (e.g., one or more programming pulses, etc.) can be applied to selected word lines (e.g., WL4), and thus, to a control gate of each memory cell coupled to the selected word lines (e.g., first-third control gates (CGs) 341-343 of the memory cells coupled to WL4). Programming pulses can begin, for example, at or near 15V, and, in certain examples, can increase in magnitude during each programming pulse application. While the program voltage is applied to the selected word lines, a potential, such as a ground potential (e.g., Vss), can be applied to the data lines (e.g., bit lines) and substrates (and thus the channels, between the sources and drains) of the memory cells targeted for programming, resulting in a charge transfer (e.g., direct injection or Fowler-Nordheim (FN) tunneling, etc.) from the channels to the floating gates of the targeted memory cells.
[0045] In contrast, a pass voltage (Vpass) can be applied to one or more word lines having memory cells that are not targeted for programming, or an inhibit voltage (e.g., Vcc) can be applied to data lines (e.g., bit lines) having memory cells that are not targeted for programming, for example, to inhibit charge from being transferred from the channels to the floating gates of such non-targeted memory cells. The pass voltage can be variable, depending, for example, on the proximity of the applied pass voltages to a word line targeted for programming.
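This distance-dependent choice of pass voltage can be pictured as a small lookup. The sketch below is offered as an illustration only; the 15 V program voltage and the 10 V/8 V/7 V pass voltages mirror the example values given in paragraph [0046] below, and the function name is an assumption.

```python
# Distance-dependent word-line biasing during a program operation (sketch).
VPGM = 15.0  # programming voltage on the selected word line (illustrative)

def word_line_voltage(wl: int, selected_wl: int) -> float:
    """Vpass decreases as the word line gets farther from the selected one."""
    distance = abs(wl - selected_wl)
    if distance == 0:
        return VPGM
    return {1: 10.0, 2: 8.0}.get(distance, 7.0)

# Programming WL4 on an 8-tier string (WL0-WL7):
print([word_line_voltage(wl, 4) for wl in range(8)])
# [7.0, 7.0, 8.0, 10.0, 15.0, 10.0, 8.0, 7.0]
```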
The inhibit voltage can include a supply voltage (Vcc), such as a voltage from an external source or supply (e.g., a battery, an AC-to-DC converter, etc.), relative to a ground potential (e.g., Vss).
[0046] As an example, if a programming voltage (e.g., 15V or more) is applied to a specific word line, such as WL4, a pass voltage of 10V can be applied to one or more other word lines, such as WL3, WL5, etc., to inhibit programming of non-targeted memory cells, or to retain the values stored on such memory cells not targeted for programming. As the distance between an applied program voltage and the non-targeted memory cells increases, the pass voltage required to refrain from programming the non-targeted memory cells can decrease. For example, where a programming voltage of 15V is applied to WL4, a pass voltage of 10V can be applied to WL3 and WL5, a pass voltage of 8V can be applied to WL2 and WL6, a pass voltage of 7V can be applied to WL1 and WL7, etc. In other examples, the pass voltages, or number of word lines, etc., can be higher or lower, or more or less.
[0047] The sense amplifiers 360, coupled to one or more of the data lines (e.g., first, second, or third bit lines (BL0-BL2) 320-322), can detect the state of each memory cell in respective data lines by sensing a voltage or current on a particular data line.
[0048] Between applications of one or more programming pulses (e.g., Vpgm), a verify operation can be performed to determine if a selected memory cell has reached its intended programmed state. If the selected memory cell has reached its intended programmed state, it can be inhibited from further programming. If the selected memory cell has not reached its intended programmed state, additional programming pulses can be applied. If the selected memory cell has not reached its intended programmed state after a particular number of programming pulses (e.g., a maximum number), the selected memory cell, or a string, block, or page associated with such selected memory cell, can be marked as defective.
[0049] To erase a memory cell or a group of memory cells (e.g., erasure is typically performed in blocks or sub-blocks), an erasure voltage (Vers) (e.g., typically Vpgm) can be applied to the substrates (and thus the channels, between the sources and drains) of the memory cells targeted for erasure (e.g., using one or more bit lines, select gates, etc.), while the word lines of the targeted memory cells are kept at a potential, such as a ground potential (e.g., Vss), resulting in a charge transfer (e.g., direct injection or Fowler-Nordheim (FN) tunneling, etc.) from the floating gates of the targeted memory cells to the channels.
[0050] FIG. 4 illustrates an example block diagram of a memory device 400 including a memory array 402 having a plurality of memory cells 404, and one or more circuits or components to provide communication with, or perform one or more memory operations on, the memory array 402. The memory device 400 can include a row decoder 412, a column decoder 414, sense amplifiers 420, a page buffer 422, a selector 424, an input/output (I/O) circuit 426, and a memory control unit 430.
[0051] The memory cells 404 of the memory array 402 can be arranged in blocks, such as first and second blocks 402A, 402B. Each block can include sub-blocks. For example, the first block 402A can include first and second sub-blocks 402A0, 402An, and the second block 402B can include first and second sub-blocks 402B0, 402Bn.
Each sub-block can include a number of physical pages, each page including a number of memory cells 404. Although illustrated herein as having two blocks, each block having two sub-blocks, and each sub-block having a number of memory cells 404, in other examples, the memory array 402 can include more or fewer blocks, sub-blocks, memory cells, etc. In other examples, the memory cells 404 can be arranged in a number of rows, columns, pages, sub-blocks, blocks, etc., and accessed using, for example, access lines 406, first data lines 410, or one or more select gates, source lines, etc.
[0052] The memory control unit 430 can control memory operations of the memory device 400 according to one or more signals or instructions received on control lines 432, including, for example, one or more clock signals or control signals that indicate a desired operation (e.g., write, read, erase, etc.), or address signals (A0-AX) received on one or more address lines 416. One or more devices external to the memory device 400 can control the values of the control signals on the control lines 432, or the address signals on the address line 416. Examples of devices external to the memory device 400 can include, but are not limited to, a host, a memory controller, a processor, or one or more circuits or components not illustrated in FIG. 4.
[0053] The memory device 400 can use access lines 406 and first data lines 410 to transfer data to (e.g., write or erase) or from (e.g., read) one or more of the memory cells 404. The row decoder 412 and the column decoder 414 can receive and decode the address signals (A0-AX) from the address line 416, can determine which of the memory cells 404 are to be accessed, and can provide signals to one or more of the access lines 406 (e.g., one or more of a plurality of word lines (WL0-WLm)) or the first data lines 410 (e.g., one or more of a plurality of bit lines (BL0-BLn)), such as described above.
[0054] The memory device 400 can include sense circuitry, such as the sense amplifiers 420, configured to determine the values of data on (e.g., read), or to determine the values of data to be written to, the memory cells 404 using the first data lines 410. For example, in a selected string of memory cells 404, one or more of the sense amplifiers 420 can read a logic level in the selected memory cell 404 in response to a read current flowing in the memory array 402 through the selected string to the data lines 410.
[0055] One or more devices external to the memory device 400 can communicate with the memory device 400 using the I/O lines (DQ0-DQN) 408, address lines 416 (A0-AX), or control lines 432. The input/output (I/O) circuit 426 can transfer values of data in or out of the memory device 400, such as in or out of the page buffer 422 or the memory array 402, using the I/O lines 408, according to, for example, the control lines 432 and address lines 416. The page buffer 422 can store data received from the one or more devices external to the memory device 400 before the data is programmed into relevant portions of the memory array 402, or can store data read from the memory array 402 before the data is transmitted to the one or more devices external to the memory device 400.
[0056] The column decoder 414 can receive and decode address signals (A0-AX) into one or more column select signals (CSEL1-CSELn).
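As a rough picture of the decode path just described, the sketch below splits a flat address into a row (word line) field and a column field. The field widths are assumptions chosen for illustration and are not taken from the device.

```python
# Hypothetical row/column split of an incoming address (widths assumed).
ROW_BITS = 10  # selects one of word lines WL0-WLm
COL_BITS = 12  # feeds column selection (cf. CSEL1-CSELn)

def decode_address(addr: int) -> tuple[int, int]:
    """Return (row_select, column_select) fields of a flat address."""
    column = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    return row, column

print(decode_address(0x2A5F3))  # (42, 1523)
```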
The selector 424 (e.g., a select circuit) can receive the column select signals (CSEL1-CSELn) and select data in the page buffer 422 representing values of data to be read from or to be programmed into memory cells 404. Selected data can be transferred between the page buffer 422 and the I/O circuit 426 using second data lines 418.
[0057] The memory control unit 430 can receive positive and negative supply signals, such as a supply voltage (Vcc) 434 and a negative supply (Vss) 436 (e.g., a ground potential), from an external source or supply (e.g., an internal or external battery, an AC-to-DC converter, etc.). In certain examples, the memory control unit 430 can include a regulator 428 to internally provide positive or negative supply signals.
[0058] Computing device manufacturers who incorporate NAND memory devices into their devices often request the storage space of the NAND memory device be partitioned into two or more chunks of memory as a "physical partition" where the NAND breaks up the physical space. Each physical partition is separately managed with its own set of logical block addresses that map to separately managed physical address space on the NAND. Those physical memory blocks that correspond to those physical addresses always belong to the physical partition (unless repartitioned, which requires the data on the partition to be erased). Garbage collection and other operations are performed on each physical partition separately. Essentially, the NAND manages it as a separate device.
[0059] One alternative type of partition is a logical partition in which an operating system of a host device (e.g., a device in which the NAND is installed) partitions a pool of storage at the O/S level. The applications above the O/S see the NAND device as two separate storage pools (e.g., each partition), but the entire space occupied by both pools is contiguously managed by the NAND device itself. In these examples, the NAND device may have no knowledge of the existence of the logical partition.
[0060] Certain device manufacturers feel that physical partitions can better guarantee certain performance properties, such as speed, security, longevity, and the like. For example, the device manufacturers may have service level agreements that specify security requirements, the makeup of the memory cells making up the partition (e.g., whether they are SLC, MLC, TLC, or the like), a size, an overprovisioning of the partition, and the like.
[0061] The provisioning of these partitions can be resource intensive on the NAND. For example, various management data needs to be physically stored along with the partition on the NAND memory device. This takes space on the NAND that could otherwise be used to store user data. Once a physical partition is created, there is no way to move memory cells between partitions. Thus, for example, if high usage of a particular physical partition causes memory cells in that partition to wear out at a higher than normal rate, too many memory cells may go bad and, as a result, the partition may become unusable even though space is available in other partitions. This could prevent the device from booting if the boot partition is affected. While the user data in the other partition may be fine (and indeed, there may be available space in the other partition), the device may not boot and user data may be irretrievable.
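The stranded-capacity problem described above can be made concrete with a toy model. In the fragment below, blocks are permanently owned by one partition, so heavy wear can exhaust a boot partition even while a user partition holds plenty of spare blocks; every number and name is illustrative.

```python
# Toy model of fixed physical partitions (illustrative numbers only).
partitions = {
    "boot": {"blocks": set(range(0, 100)), "bad": set()},
    "user": {"blocks": set(range(100, 1000)), "bad": set()},
}

def usable_blocks(name: str) -> int:
    part = partitions[name]
    return len(part["blocks"] - part["bad"])

# Heavy wear retires most of the boot partition's blocks...
partitions["boot"]["bad"] = set(range(0, 95))
print(usable_blocks("boot"))  # 5   -> boot partition may become unusable
print(usable_blocks("user"))  # 900 -> spares that a physical partition
                              #        scheme cannot lend to "boot"
```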
[0062] Disclosed in some examples are methods, systems, machine-readable mediums, and NAND devices which create a NAND-level logical partition instead of a requested physical partition. The NAND device manages it as one contiguous memory pool, but the operating system on the host device sees this NAND-level logical partition as a physical partition. Thus, for example, the operating system on the host device may issue host commands to a physical partition (giving a partition identifier) with a logical block address (LBA) in a range corresponding to the physical partition, but the NAND memory device translates these requests into physical addresses in a common pool shared by other partitions. The NAND controller may ensure that service level guarantees are respected. For example, when allocating physical blocks to the partition, if the physical partition's service level guarantees specify that the memory is to be SLC and a memory block is not SLC, the controller may reconfigure the memory to be SLC (from TLC, MLC, QLC, and the like).
[0063] As a result, the NAND can give the operating system the service level guarantees it asks for without incurring the memory storage expenses and management burdens of creating a separate partition. Thus, a logical partition is created at the NAND memory controller level - as opposed to at the operating system level.
[0064] FIG. 5 illustrates a flowchart of a method 500 for creating a logical partition in response to a request to create a physical partition according to some examples of the present disclosure. At operation 510 the controller may receive a command from a host device over a host interface to create a physical partition. The command may be sent by, for example, an operating system, manufacturing process machinery, or the like. The command may include one or more service-level requirements for the partition, such as security requirements, composition requirements (e.g., whether to use SLC, TLC, MLC), and the like.
[0065] At operation 515 the NAND device may create a logical NAND-level partition instead. For example, the device may initialize one or more data structures, either in volatile memory on the NAND itself that is used by the controller or stored in memory cells of the NAND managed by the controller, to track the logical partition and to ensure that the service level agreements are met. The controller may create a partition identifier and a logical block address (LBA) range for the partition and may provide this to the host device. The controller may also update a logical to physical (L2P) table indicating the partition identifier and logical block address tuples assigned to the partition and the corresponding physical memory locations from the common memory pool. When the host device wishes to read, write, or erase data on this partition, it passes in the partition identifier and the LBA of the block it wishes to read, write, or erase. The NAND may then convert this to physical addresses using the L2P table.
[0066] At operation 520, the NAND may send a response to the host. The response may include the partition identifier (e.g., a namespace, a Logical Unit Number (LUN), and the like), a status (whether the partition was created), and the LBA range. At operation 525 the NAND device translates host requests directed at the physical partition to instead be directed at the logical partition. For example, the host may provide a partition ID and a partition-specific LBA in a host command (e.g., read, write, erase).
The NAND may use these values as a lookup in a table that then provides the assigned physical address, which is then used to service the request.
[0067] In some examples, when the partition is created, the NAND device reserves the space without allocating any actual physical resources. Thus, the NAND device keeps track of how many blocks are allocated to each partition without actually assigning particular physical addresses to particular LBAs in the L2P table. This prevents the host device operating system from creating partitions that in sum exceed the storage capacity of the NAND. Once the host starts writing to the partition, the NAND allocates physical space to the partition through the L2P table.
[0068] For example, if the host operating system requests two physical partitions, a first partition that is 10 GB and a second that is 5 GB, the NAND may create two NAND-level logical partitions and assign the first partition a Logical Unit Number of 1 and the second partition a LUN of 2. Each LUN may have a number of valid LBAs assigned to it. For example, LUN 1 may have 2048 LBAs that start at 0 and end at 2047. LUN 2 may have 1024 LBAs that start at 0 and end at 1023. After creation, none of these LBAs may be mapped to a physical address. However, the NAND memory device recognizes that 15 GB of space is already committed to the two partitions, so if the total capacity of the NAND is 25 GB, the NAND will reject an attempt to create a third partition of 15 GB. Once a write arrives, the NAND device may assign a physical address to an LBA and thus to a particular NAND-level logical partition. An operation to modify a value stored in the NAND may cause the NAND to find a free block in the pool of memory and assign the LBA to that free block. That free block then becomes assigned to that logical NAND-level partition. The old block is then marked as invalid. Once garbage collection happens, the old block (and all other invalid blocks) may return to the pool for later allocation to any of the NAND-level partitions. In some examples, to assign a particular memory cell to a partition, the NAND may reconfigure it from a first configuration (e.g., SLC, MLC, or TLC) to a different configuration (SLC, MLC, or TLC).
[0069] As noted, the memory cells of the NAND may be managed as a shared pool and may be dynamically allocated between the logical partitions. Garbage collection may be done across the entire device (rather than on a single partition), and overprovisioning may also be across the entire pool.
[0070] FIG. 6 illustrates a flowchart of a method 600 of a NAND controller processing a host command directed to a physical partition that was created by the NAND as a logical partition. Method 600 may be an example of operation 525 of FIG. 5. At operation 615 the controller receives the command from the host for an operation. For example, the command may be received over a host interface such as a UFS interface. At operation 620 the controller may translate the partition ID and the LBA into a physical address using the L2P table. If the LBA of the partition was never written before, a page in the common pool may be utilized and may be assigned to that LBA and that partition. The controller may then write the host data to that physical page.
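A minimal sketch of the common-pool behavior of methods 500 and 600 follows, assuming a simple block-granular L2P table: capacity is reserved at creation time without binding physical blocks, and (LUN, LBA) tuples are bound to blocks from the shared pool on first write. The class and method names are hypothetical, not identifiers from this disclosure.

```python
# Hedged sketch of NAND-level logical partitions over one shared pool.
class CommonPoolController:
    def __init__(self, total_blocks: int, block_size: int):
        self.total_blocks = total_blocks
        self.block_size = block_size
        self.free_pool = set(range(total_blocks))
        self.reserved_blocks = 0   # capacity promised to all partitions
        self.l2p = {}              # (lun, lba) -> physical block
        self.partitions = {}       # lun -> number of valid LBAs
        self.next_lun = 1

    def create_physical_partition(self, size_bytes: int):
        """Mimic physical-partition creation (operations 510-520)."""
        n_blocks = size_bytes // self.block_size
        if self.reserved_blocks + n_blocks > self.total_blocks:
            raise MemoryError("partitions would exceed device capacity")
        self.reserved_blocks += n_blocks
        lun, self.next_lun = self.next_lun, self.next_lun + 1
        self.partitions[lun] = n_blocks
        return lun, n_blocks - 1   # partition ID and highest valid LBA

    def write(self, lun: int, lba: int, data: bytes):
        """Lazily bind (LUN, LBA) to a block on write (operations 615-625)."""
        assert lba < self.partitions[lun], "LBA outside partition range"
        old_block = self.l2p.get((lun, lba))
        physical = self.free_pool.pop()    # bind a block from the shared pool
        self.l2p[(lun, lba)] = physical
        if old_block is not None:
            self.free_pool.add(old_block)  # stand-in for invalidate + GC
        return physical
```

Mirroring the 10 GB/5 GB example above, a 25 GB instance of this sketch would accept the two reservations but raise on a further 15 GB request, even before any physical block has been bound.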
As previously noted, if the LBA is already assigned, the system may mark the physical address currently assigned to that (LBA, partition ID) tuple invalid, locate a free block in the common pool, assign that free block to the (LBA, partition ID) tuple in the L2P table, and write the data to that block. If the host command is an erase, the physical block corresponding to that (LBA, partition ID) tuple may be marked as invalid. Once that block is garbage collected, it may be reallocated to any partition. At operation 625 the controller may service the host request using the physical address and may return a status to the host. For example, a read, a write, an erase, or the like.
[0071] FIG. 7 shows a schematic of a memory controller 715 according to some examples of the present disclosure. Memory controller 715 is an example of memory controller 115, memory manager 725 is an example of memory manager 125, and management tables 730 may be an example of management tables 130. Controller 735 and ECC 740 may be an example of controller 135 and ECC 140 of FIG. 1. Controller 735 includes a PM component 760 that may handle the creation and management of the logical partitions in response to requests to create a physical partition. For example, the PM component 760 may implement the methods of FIGS. 5 and 6.
[0072] FIG. 8 illustrates a block diagram of an example machine 800 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. In alternative embodiments, the machine 800 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 800 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 800 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 800 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, an IoT device, automotive system, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
[0073] Examples, as described herein, may include, or may operate by, logic, components, devices, packages, or mechanisms. Circuitry is a collection (e.g., set) of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership may be flexible over time and underlying hardware variability. Circuitries include members that may, alone or in combination, perform specific tasks when operating. In an example, hardware of the circuitry may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.)
to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable participating hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific tasks when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuitry. For example, under operation, execution units may be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry at a different time.
[0074] The machine (e.g., computer system) 800 (e.g., the host device 105, the memory device 110, etc.) may include a hardware processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof, such as the memory controller 115, etc.), a main memory 804 and a static memory 806, some or all of which may communicate with each other via an interlink (e.g., bus) 808. The machine 800 may further include a display unit 810, an alphanumeric input device 812 (e.g., a keyboard), and a user interface (UI) navigation device 814 (e.g., a mouse). In an example, the display unit 810, input device 812 and UI navigation device 814 may be a touch screen display. The machine 800 may additionally include a storage device (e.g., drive unit) 816, a signal generation device 818 (e.g., a speaker), a network interface device 820, and one or more sensors 816, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 800 may include an output controller 828, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
[0075] The storage device 816 may include a machine readable medium 822 on which is stored one or more sets of data structures or instructions 824 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 824 may also reside, completely or at least partially, within the main memory 804, within static memory 806, or within the hardware processor 802 during execution thereof by the machine 800.
In an example, one or any combination of the hardware processor 802, the main memory 804, the static memory 806, or the storage device 816 may constitute the machine readable medium 822.[0076] While the machine readable medium 822 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 824.[0077] The term "machine readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 800 and that cause the machine 800 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.[0078] The instructions 824 (e.g., software, programs, an operating system (OS), etc.) or other data stored on the storage device 821 can be accessed by the memory 804 for use by the processor 802. The memory 804 (e.g., DRAM) is typically fast, but volatile, and thus a different type of storage than the storage device 821 (e.g., an SSD), which is suitable for long-term storage, including while in an "off" condition. The instructions 824 or data in use by a user or the machine 800 are typically loaded in the memory 804 for use by the processor 802. When the memory 804 is full, virtual space from the storage device 821 can be allocated to supplement the memory 804; however, because the storage device 821 is typically slower than the memory 804, and write speeds are typically at least twice as slow as read speeds, use of virtual memory can greatly degrade the user experience due to storage device latency (in contrast to the memory 804, e.g., DRAM). Further, use of the storage device 821 for virtual memory can greatly reduce the usable lifespan of the storage device 821.[0079] In contrast to virtual memory, virtual memory compression (e.g., the Linux® kernel feature "ZRAM") uses part of the memory as compressed block storage to avoid paging to the storage device 821. Paging takes place in the compressed block until it is necessary to write such data to the storage device 821. Virtual memory compression increases the usable size of the memory 804, while reducing wear on the storage device 821.[0080] Storage devices optimized for mobile electronic devices, or mobile storage, traditionally include MMC solid-state storage devices (e.g., micro Secure Digital (microSD™) cards, etc.). MMC devices include a number of parallel interfaces (e.g., an 8-bit parallel interface) with a host device, and are often removable and separate components from the host device.
In contrast, eMMC™ devices are attached to a circuit board and considered a component of the host device, with read speeds that rival serial ATA™ (Serial AT (Advanced Technology) Attachment, or SATA) based SSD devices. However, demand for mobile device performance continues to increase, such as to fully enable virtual or augmented-reality devices, utilize increasing network speeds, etc. In response to this demand, storage devices have shifted from parallel to serial communication interfaces. Universal Flash Storage (UFS) devices, including controllers and firmware, communicate with a host device using a low-voltage differential signaling (LVDS) serial interface with dedicated read/write paths, further advancing greater read/write speeds.[0081] The instructions 824 may further be transmitted or received over a communications network 826 using a transmission medium via the network interface device 820 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®, the IEEE 802.15.4 family of standards, and peer-to-peer (P2P) networks), among others. In an example, the network interface device 820 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 826. In an example, the network interface device 820 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 800, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.[0082] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as "examples". Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.[0083] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more."
In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" may include "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein". Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.[0084] In various examples, the components, controllers, processors, units, engines, or tables described herein can include, among other things, physical circuitry or firmware stored on a physical device. As used herein, "processor" means any type of computational circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor (DSP), or any other type of processor or processing circuit, including a group of processors or multi-core devices.[0085] The term "horizontal" as used in this document is defined as a plane parallel to the conventional plane or surface of a substrate, such as that underlying a wafer or die, regardless of the actual orientation of the substrate at any point in time. The term "vertical" refers to a direction perpendicular to the horizontal as defined above. Prepositions, such as "on," "over," and "under" are defined with respect to the conventional plane or surface being on the top or exposed surface of the substrate, regardless of the orientation of the substrate; and while "on" is intended to suggest a direct contact of one structure relative to another structure which it lies "on" (in the absence of an express indication to the contrary), the terms "over" and "under" are expressly intended to identify a relative placement of structures (or layers, features, etc.), which expressly includes, but is not limited to, direct contact between the identified structures unless specifically identified as such. Similarly, the terms "over" and "under" are not limited to horizontal orientations, as a structure may be "over" a referenced structure if it is, at some point in time, an outermost portion of the construction under discussion, even if such structure extends vertically relative to the referenced structure, rather than in a horizontal orientation.[0086] The terms "wafer" and "substrate" are used herein to refer generally to any structure on which integrated circuits are formed, and also to such structures during various stages of integrated circuit fabrication. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the various embodiments is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.[0087] Various embodiments according to the present disclosure and described herein include memory utilizing a vertical structure of memory cells (e.g., NAND strings of memory cells).
As used herein, directional adjectives will be taken relative to a surface of a substrate upon which the memory cells are formed (i.e., a vertical structure will be taken as extending away from the substrate surface, a bottom end of the vertical structure will be taken as the end nearest the substrate surface, and a top end of the vertical structure will be taken as the end farthest from the substrate surface).[0088] As used herein, directional adjectives, such as horizontal, vertical, normal, parallel, perpendicular, etc., can refer to relative orientations, and are not intended to require strict adherence to specific geometric properties, unless otherwise noted. For example, as used herein, a vertical structure need not be strictly perpendicular to a surface of a substrate, but may instead be generally perpendicular to the surface of the substrate, and may form an acute angle with the surface of the substrate (e.g., between 60 and 120 degrees, etc.).[0089] In some embodiments described herein, different doping configurations may be applied to a source-side select gate (SGS), a control gate (CG), and a drain-side select gate (SGD), each of which, in this example, may be formed of or at least include polysilicon, with the result such that these tiers (e.g., polysilicon, etc.) may have different etch rates when exposed to an etching solution. For example, in a process of forming a monolithic pillar in a 3D semiconductor device, the SGS and the CG may form recesses, while the SGD may remain less recessed or even not recessed. These doping configurations may thus enable selective etching into the distinct tiers (e.g., SGS, CG, and SGD) in the 3D semiconductor device by using an etching solution (e.g., tetramethylammonium hydroxide (TMAH)).[0090] Operating a memory cell, as used herein, includes reading from, writing to, or erasing the memory cell. The operation of placing a memory cell in an intended state is referred to herein as "programming," and can include both writing to or erasing from the memory cell (e.g., the memory cell may be programmed to an erased state).[0091] According to one or more embodiments of the present disclosure, a memory controller (e.g., a processor, controller, firmware, etc.) located internal or external to a memory device, is capable of determining (e.g., selecting, setting, adjusting, computing, changing, clearing, communicating, adapting, deriving, defining, utilizing, modifying, applying, etc.) a quantity of wear cycles, or a wear state (e.g., recording wear cycles, counting operations of the memory device as they occur, tracking the operations of the memory device it initiates, evaluating the memory device characteristics corresponding to a wear state, etc.).[0092] According to one or more embodiments of the present disclosure, a memory access device may be configured to provide wear cycle information to the memory device with each memory operation. The memory device control circuitry (e.g., control logic) may be programmed to compensate for memory device performance changes corresponding to the wear cycle information. The memory device may receive the wear cycle information and determine one or more operating parameters (e.g., a value, characteristic) in response to the wear cycle information.[0093] It will be understood that when an element is referred to as being "on," "connected to" or "coupled with" another element, it can be directly on, connected, or coupled with the other element or intervening elements may be present.
In contrast, when an element is referred to as being "directly on," "directly connected to" or "directly coupled with" another element, there are no intervening elements or layers present. If two elements are shown in the drawings with a line connecting them, the two elements can either be coupled, or directly coupled, unless otherwise indicated.[0094] Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, the code can be tangibly stored on one or more volatile or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact discs and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), solid state drives (SSDs), Universal Flash Storage (UFS) devices, embedded MMC (eMMC) devices, and the like.[0095] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations.
The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.[0096] Other Notes and Examples[0097] Example 1 is a NAND memory device comprising: a NAND memory array including a first pool of memory; a controller, the controller executing instructions to cause the controller to perform operations comprising: receiving a command from a host to create a physical partition of the first pool of memory; creating a NAND-level logical partition utilizing the first pool of memory instead of creating the physical partition, the first pool of memory shared with a second logical partition; sending a response to the host indicative that the physical partition has been created; translating a request from the host identifying the physical partition and a Logical Block Address (LBA) to a physical address of the first pool of memory; and executing the request on the physical address of the first pool of memory.[0098] In Example 2, the subject matter of Example 1 optionally includes wherein the operations of sending the response to the host indicative that the physical partition has been created comprise providing a physical partition identifier and a range of LBAs between zero and a number based upon a size provided by the command from the host to create the physical partition.[0099] In Example 3, the subject matter of Example 2 optionally includes wherein the operations of translating the request from the host identifying the physical partition and a Logical Block Address of the request to a physical address of the first pool comprise mapping a received physical partition identifier and an LBA in the range of LBAs to a physical location in the first pool of memory and performing a request from the host on the physical location.[0100] In Example 4, the subject matter of any one or more of Examples 2-3 optionally includes wherein the physical partition is identified by a logical unit identifier number (LUN).[0101] In Example 5, the subject matter of any one or more of Examples 1-4 optionally includes wherein memory cells from the first pool of memory that are used to service the physical partition are configured to meet a requirement specified by the host in the command to create the physical partition.[0102] In Example 6, the subject matter of Example 5 optionally includes wherein the requirement is that memory cells be configured as a Single Layer Cell (SLC).[0103] In Example 7, the subject matter of any one or more of Examples 1-6 optionally includes wherein the NAND-level logical partition appears to the host to be a physical partition, and wherein the response indicative that the physical partition has been created is sent without creating the physical partition.[0104] Example 8 is a method comprising: receiving a command from a host to create a physical partition of a first pool of memory on a NAND memory device; creating a NAND-level logical partition utilizing the first pool of memory instead of creating the physical partition, the first pool of memory shared with a second logical partition; sending a response to the host indicative that the physical partition has been created; translating a request from the host identifying the physical partition and a Logical Block Address (LBA) to a physical address of the first pool of memory; and executing the request on the physical address of the first pool of memory.[0105] In Example 9, the subject matter of Example 8 optionally includes wherein
sending the response to the host indicative that the physical partition has been created comprises providing a physical partition identifier and a range of LBAs between zero and a number based upon a size provided by the command from the host to create the physical partition.[0106] In Example 10, the subject matter of Example 9 optionally includes wherein translating the request from the host identifying the physical partition and a Logical Block Address of the request to a physical address of the first pool comprises mapping a received physical partition identifier and an LBA in the range of LBAs to a physical location in the first pool of memory and performing a request from the host on the physical location.[0107] In Example 11, the subject matter of any one or more of Examples 9-10 optionally includes wherein the physical partition is identified by a logical unit identifier number (LUN).[0108] In Example 12, the subject matter of any one or more of Examples 8-11 optionally includes wherein memory cells from the first pool of memory that are used to service the physical partition are configured to meet a requirement specified by the host in the command to create the physical partition.[0109] In Example 13, the subject matter of Example 12 optionally includes wherein the requirement is that memory cells be configured as a Single Layer Cell (SLC).[0110] In Example 14, the subject matter of any one or more of Examples 8-13 optionally includes wherein the NAND-level logical partition appears to the host to be a physical partition, and wherein the response indicative that the physical partition has been created is sent without creating the physical partition.[0111] Example 15 is a machine-readable medium comprising instructions which, when executed, cause a machine to perform operations comprising: receiving a command from a host to create a physical partition of a first pool of memory on a NAND device; creating a NAND-level logical partition utilizing the first pool of memory instead of creating the physical partition, the first pool of memory shared with a second logical partition; sending a response to the host indicative that the physical partition has been created; translating a request from the host identifying the physical partition and a Logical Block Address (LBA) to a physical address of the first pool of memory; and executing the request on the physical address of the first pool of memory.[0112] In Example 16, the subject matter of Example 15 optionally includes wherein the operations of sending the response to the host indicative that the physical partition has been created comprise providing a physical partition identifier and a range of LBAs between zero and a number based upon a size provided by the command from the host to create the physical partition.[0113] In Example 17, the subject matter of Example 16 optionally includes wherein the operations of translating the request from the host identifying the physical partition and a Logical Block Address of the request to a physical address of the first pool comprise mapping a received physical partition identifier and an LBA in the range of LBAs to a physical location in the first pool of memory and performing a request from the host on the physical location.
[0114] In Example 18, the subject matter of any one or more of Examples 16-17 optionally includes wherein the physical partition is identified by a logical unit identifier number (LUN).[0115] In Example 19, the subject matter of any one or more of Examples 15-18 optionally includes wherein memory cells from the first pool of memory that are used to service the physical partition are configured to meet a requirement specified by the host in the command to create the physical partition.[0116] In Example 20, the subject matter of Example 19 optionally includes wherein the requirement is that memory cells be configured as a Single Layer Cell (SLC).[0117] In Example 21, the subject matter of any one or more of Examples 15-20 optionally includes wherein the NAND-level logical partition appears to the host to be a physical partition, and wherein the response indicative that the physical partition has been created is sent without creating the physical partition.[0118] Example 22 is a device comprising: means for receiving a command from a host to create a physical partition of a first pool of memory on a NAND memory device; means for creating a NAND-level logical partition utilizing the first pool of memory instead of creating the physical partition, the first pool of memory shared with a second logical partition; means for sending a response to the host indicative that the physical partition has been created; means for translating a request from the host identifying the physical partition and a Logical Block Address (LBA) to a physical address of the first pool of memory; and means for executing the request on the physical address of the first pool of memory.[0119] In Example 23, the subject matter of Example 22 optionally includes wherein the means for sending the response to the host indicative that the physical partition has been created comprise means for providing a physical partition identifier and a range of LBAs between zero and a number based upon a size provided by the command from the host to create the physical partition.[0120] In Example 24, the subject matter of Example 23 optionally includes wherein the means for translating the request from the host identifying the physical partition and a Logical Block Address of the request to a physical address of the first pool comprise means for mapping a received physical partition identifier and an LBA in the range of LBAs to a physical location in the first pool of memory and performing a request from the host on the physical location.[0121] In Example 25, the subject matter of any one or more of Examples 23-24 optionally includes wherein the physical partition is identified by a logical unit identifier number (LUN).[0122] In Example 26, the subject matter of any one or more of Examples 22-25 optionally includes wherein memory cells from the first pool of memory that are used to service the physical partition are configured to meet a requirement specified by the host in the command to create the physical partition.[0123] In Example 27, the subject matter of Example 26 optionally includes wherein the requirement is that memory cells be configured as a Single Layer Cell (SLC).[0124] In Example 28, the subject matter of any one or more of Examples 22-27 optionally includes wherein the NAND-level logical partition appears to the host to be a physical partition, and wherein the response indicative that the physical partition has been created is sent without creating the physical partition. |
Systems, methods, and apparatuses relating to 16-bit floating-point matrix dot product instructions are described. In one embodiment, a processor includes fetch circuitry to fetch a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the source matrices having elements that each comprise a pair of half-precision floating-point values, the opcode to indicate execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix, decode circuitry to decode the fetched instruction, and the execution circuitry to respond to the decoded instruction as specified by the opcode. |
CLAIMS1. An apparatus comprising: fetch circuitry to fetch a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the source matrices having elements that each comprise a pair of half-precision floating-point values, the opcode to indicate execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix; decode circuitry to decode the fetched instruction; and the execution circuitry to respond to the decoded instruction as specified by the opcode.2. The apparatus of claim 1, wherein the half-precision floating-point format is specified by the opcode of the single instruction.3. The apparatus of claim 1, wherein M, N, and K are specified by the single instruction.4. The apparatus of claim 1, wherein the execution circuitry is to cause a matrix operations accelerator to perform at least the multiplications and the accumulation.5. The apparatus of claim 4, wherein M, N, and K are specified by a configuration of the matrix operations accelerator to be programmed by execution of a matrix accelerator configuration instruction before executing the single instruction.6. The apparatus of claim 1, wherein the execution circuitry is further to cause saturation of execution results, as necessary.7. The apparatus of claim 1, wherein the single instruction is further to specify a writemask comprising M x N bits, each bit to control whether to mask a corresponding element of the destination matrix.8. The apparatus of any one of claims 1-7, wherein the execution circuitry is further to generate a fault when a fault condition occurs, the fault condition selectable from: the destination matrix having a fewer number of rows than a number of rows of the first source matrix; and the destination matrix having a fewer number of columns than a number of columns of the second source matrix.9.
A method comprising: fetching, by fetch circuitry of a processor, a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the source matrices having elements that each comprise a pair of half-precision floating-point values, the opcode to indicate execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix; decoding, by decode circuitry of the processor, the fetched instruction into a decoded single instruction; and executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.10. The method of claim 9, wherein the half-precision floating-point format is specified by the opcode of the single instruction.11. The method of claim 9, wherein M, N, and K are specified by the single instruction.12. The method of claim 9, wherein the execution circuitry causes a matrix operations accelerator to perform at least the multiplications and the accumulation.13. The method of claim 12, further comprising executing, by the execution circuitry of the processor before executing the single instruction, a matrix accelerator configuration instruction that programs a configuration of the matrix operations accelerator specifying M, N, and K.14. The method of claim 9, wherein the executing comprises saturating the execution results.15. The method of claim 9, wherein the single instruction further specifies a writemask comprising M x N bits, each bit controlling whether to mask a corresponding element of the destination matrix.16. The method of any one of claims 9-15, wherein the executing generates a fault when a fault condition occurs, the fault condition selectable from: the destination matrix having a fewer number of rows than a number of rows of the first source matrix; and the destination matrix having a fewer number of columns than a number of columns of the second source matrix.17.
A non-transitory machine readable medium that stores program code that when executed by a machine causes the machine to perform a method comprising: fetching, by fetch circuitry of a processor, a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the source matrices having elements that each comprise a pair of half-precision floating-point values, the opcode to indicate execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix; decoding, by decode circuitry of the processor, the fetched instruction into a decoded single instruction; and executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.18. The non-transitory machine readable medium of claim 17, wherein the half-precision floating-point format is specified by the opcode of the single instruction.19. The non-transitory machine readable medium of claim 17, wherein M, N, and K are specified by the single instruction.20. The non-transitory machine readable medium of claim 17, wherein the executing comprises the execution circuitry causing a matrix operations accelerator to perform at least the multiplications and the accumulation.21. The non-transitory machine readable medium of claim 20, wherein the method further comprises executing, by the execution circuitry of the processor before executing the single instruction, a matrix accelerator configuration instruction that programs a configuration of the matrix operations accelerator specifying M, N, and K.22. The non-transitory machine readable medium of claim 17, wherein the executing comprises saturating the execution results.23. The non-transitory machine readable medium of claim 17, wherein the single instruction further specifies a writemask comprising M x N bits, each bit controlling whether to mask a corresponding element of the destination matrix.24. The non-transitory machine readable medium of any one of claims 17-23, wherein the executing generates a fault when a fault condition occurs, the fault condition selectable from: the destination matrix having a fewer number of rows than a number of rows of the first source matrix; and the destination matrix having a fewer number of columns than a number of columns of the second source matrix.
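A plain-C reference model of the dot-product semantics recited in claim 1 may help fix the data flow. This is a sketch only, with illustrative names, a row-major layout assumed, and the half-precision pairs shown after their conversion to single precision (C has no portable 16-bit float type):

    #include <stddef.h>

    /* Each source element is a pair of half-precision values; the fields here
     * stand for those halves after conversion to single precision. */
    struct fp16_pair { float first, second; };

    /* For every (m, n): C[m][n] += sum over k of
     *   A[m][k].first  * B[k][n].first    (the "first result")
     * + A[m][k].second * B[k][n].second   (the "second result") */
    void tiledpfp16ps_ref(size_t M, size_t N, size_t K, float *C,
                          const struct fp16_pair *A, const struct fp16_pair *B)
    {
        for (size_t m = 0; m < M; m++)
            for (size_t n = 0; n < N; n++)
                for (size_t k = 0; k < K; k++) {
                    struct fp16_pair a = A[m * K + k];
                    struct fp16_pair b = B[k * N + n];
                    C[m * N + n] += a.first * b.first;
                    C[m * N + n] += a.second * b.second;
                }
    }

The saturation, writemasking, and fault checks recited in the dependent claims are omitted from this sketch. |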
APPARATUSES, METHODS, AND SYSTEMS FOR INSTRUCTIONS FOR 16-BIT FLOATING-POINT MATRIX DOT PRODUCT INSTRUCTIONSTECHNICAL FIELD[0001] The disclosure relates generally to computer processor architecture, and, more specifically, to systems and methods for performing 16-bit floating-point matrix dot product instructions.BACKGROUND[0002] Matrices are increasingly important in many computing tasks such as machine learning and other bulk data processing. Deep Learning is a class of machine learning algorithms. Deep learning architectures, such as deep neural networks, have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design.[0003] Inference and training, two tools used for deep learning, are tending towards low precision arithmetic. Maximizing throughput of deep learning algorithms and computations may assist in meeting the needs of deep learning processors, for example, those performing deep learning in a data center.[0004] Matrix-matrix multiplication (a.k.a., GEMM or General Matrix Multiplication) is a common compute-heavy operation on modern processors. Special hardware for matrix multiplication (e.g., GEMM) is a good option for improving the peak compute (and energy efficiency) of certain applications, such as deep learning.[0005] Some of these applications, including deep learning, can operate on input data elements with relatively few bits without losing accuracy, as long as the output elements have enough bits (i.e., more than the inputs).BRIEF DESCRIPTION OF THE DRAWINGS[0006] The present disclosure is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which: [0007] Figure 1A illustrates an embodiment of configured tiles;[0008] Figure 1B illustrates an embodiment of configured tiles;[0009] Figure 2 illustrates several examples of matrix storage;[0010] Figure 3 illustrates an embodiment of a system utilizing a matrix (tile) operations accelerator;[0012] Figures 4 and 5 show different embodiments of how memory is shared using a matrix operations accelerator;
[0013] Figure 6 illustrates an embodiment of a matrix multiply accumulate operation using tiles (“TMMA”);[0014] Figure 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;[0015] Figure 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;[0016] Figure 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;[0017] Figure 10 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction;[0018] Figure 11 illustrates power-of-two sized SIMD implementations wherein the accumulators use input sizes that are larger than the inputs to the multipliers according to an embodiment;[0019] Figure 12 illustrates an embodiment of a system utilizing matrix operations circuitry;[0020] Figure 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles;[0021] Figure 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles;[0022] Figure 15 illustrates an example of a matrix expressed in row major format and column major format;[0023] Figure 16 illustrates an example of usage of matrices (tiles);[0024] Figure 17 illustrates an embodiment of a method of usage of matrices (tiles);[0025] Figure 18 illustrates support for configuration of the usage of tiles according to an embodiment;[0026] Figure 19 illustrates an embodiment of a description of the matrices (tiles) to be supported;[0027] Figures 20(A)-(D) illustrate examples of register(s);[0028] Figure 21A is a block diagram illustrating use of a TILEDPFP16PS instruction to accelerate matrix multiplication, according to some embodiments;[0029] Figure 21B is a block diagram illustrating example execution circuitry to execute a TILEDPFP16PS instruction, according to some embodiments;[0030] Figure 22A is pseudocode illustrating execution of a TILEDPFP16PS instruction according to some embodiments;[0031] Figure 22B is pseudocode illustrating helper functions for use by the pseudocode of Figure 22A, according to some embodiments;
[0032] Figure 23 illustrates an embodiment of a processor executing a flow to process a TILEDPFP16PS instruction;[0033] Figure 24 is a block diagram illustrating a format of a TILEDPFP16PS instruction according to some embodiments;[0034] Figures 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments;[0035] Figure 25A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments;[0036] Figure 25B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments;[0037] Figure 26A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments;[0038] Figure 26B is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the full opcode field according to one embodiment;[0039] Figure 26C is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the register index field according to one embodiment;[0040] Figure 26D is a block diagram illustrating the fields of the specific vector friendly instruction format that make up the augmentation operation field according to one embodiment;[0041] Figure 27 is a block diagram of a register architecture according to one embodiment;[0042] Figure 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments;[0043] Figure 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments;[0044] Figures 29A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;[0045] Figure 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and with its local subset of the Level 2 (L2) cache, according to embodiments;[0046] Figure 29B is an expanded view of part of the processor core in Figure 29A according to embodiments;[0047] Figure 30 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments;[0048] Figures 31-34 are block diagrams of exemplary computer architectures;
[0049] Figure 31 shows a block diagram of a system in accordance with one embodiment of the present disclosure;[0050] Figure 32 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present disclosure;[0051] Figure 33 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present disclosure;[0052] Figure 34 is a block diagram of a System-on-a-Chip (SoC) in accordance with an embodiment of the present disclosure; and[0053] Figure 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments.DETAILED DESCRIPTION[0054] In the following description, numerous specific details are set forth. However, it is understood that embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.[0055] References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.[0056] In many mainstream processors, handling matrices is a difficult and/or instruction-intensive task. For example, rows of a matrix could be put into a plurality of packed data (e.g., SIMD or vector) registers and then operated on individually. For example, an add of two 8x2 matrices may require a load or gather into four packed data registers depending upon data sizes. Then a first add of packed data registers corresponding to a first row from each matrix is performed and a second add of packed data registers corresponding to a second row from each matrix is performed. Then the resulting packed data registers are scattered back to memory. While for small matrices this scenario may be acceptable, it is often not acceptable with larger matrices.
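To make the overhead concrete, the following scalar C sketch mimics the per-row load, add, and store pattern described above for two 8x2 matrices; it illustrates the idea only and does not correspond to any particular packed-register instruction set:

    /* Add two 8x2 matrices row by row: load both rows, add element-wise, and
     * store the result back, mirroring the load/add/scatter sequence above
     * (on packed hardware, several rows would share one register). */
    void add_8x2(float dst[8][2], const float a[8][2], const float b[8][2])
    {
        for (int row = 0; row < 8; row++) {
            float a0 = a[row][0], a1 = a[row][1];   /* load or gather */
            float b0 = b[row][0], b1 = b[row][1];
            dst[row][0] = a0 + b0;                  /* packed add */
            dst[row][1] = a1 + b1;                  /* scatter back on store */
        }
    }

Tile support, described next, replaces this per-row choreography with operations on a single 2-D structure.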
DISCUSSION[0057] Described herein are mechanisms to support matrix operations in computer hardware such as central processing units (CPUs), graphic processing units (GPUs), and accelerators. The matrix operations utilize 2-dimensional (2-D) data structures representing one or more packed regions of memory such as registers. Throughout this description, these 2-D data structures are referred to as tiles. Note that a matrix may be smaller than a tile (use less than all of a tile) or utilize a plurality of tiles (the matrix is larger than the size of any one tile). Throughout the description, matrix (tile) language is used to indicate operations performed using tiles that impact a matrix; whether or not that matrix is larger than any one tile is not typically relevant.[0058] Each tile may be acted upon by different operations such as those that are detailed herein and include, but are not limited to: matrix (tile) multiplication, tile add, tile subtract, tile diagonal, tile zero, tile transform, tile dot product, tile broadcast, tile row broadcast, tile column broadcast, tile multiplication, tile multiplication and accumulation, tile move, etc. Additionally, support for operators such as the use of a scale and/or bias may be used with these operations or in support of non-numeric applications in the future, for instance, OpenCL “local memory,” data compression/decompression, etc. Also described herein are instructions for performing matrix (tile) 16-bit tile dot product (TILEDPFP16PS) instructions.[0059] Portions of storage (such as memory (non-volatile and volatile), registers, cache, etc.) are arranged into tiles of different horizontal and vertical dimensions. For example, a tile may have horizontal dimension of 4 (e.g., four rows of a matrix) and a vertical dimension of 8 (e.g., 8 columns of the matrix). Typically, the horizontal dimension is related to element sizes (e.g., 2-, 4-, 8-, 16-, 32-, 64-, 128-bit, etc.). Multiple datatypes (single precision floating-point, double precision floating-point, integer, etc.) may be supported.EXEMPLARY USAGE OF CONFIGURED TILES[0060] In some embodiments, tile parameters can be configured. For example, a given tile may be configured to provide tile options. Exemplary tile options include but are not limited to: a number of rows of the tile, a number of columns of the tile, whether the tile is VALID, and/or whether the tile consists of a PAIR of equal-sized tiles.[0061] Figure 1A illustrates an embodiment of configured tiles. As shown, 4 kB of application memory 102 have stored thereon four 1 kB tiles, tile t0 104, tile t1 106, tile t2 108, and tile t3 110. In this example, the 4 tiles do not consist of pairs, and each have elements arranged in rows and columns. Tile t0 104 and tile t1 106 have K rows and N columns of 4-byte elements (e.g., single precision data), where K equals 8 and N equals 32. Tile t2 108 and tile t3 110 have K rows and N/2 columns of 8-byte elements (e.g., double precision data). As the double precision operands are
twice the width of single precision, this configuration is consistent with a palette, used to provide tile options, supplying at least 4 names with total storage of at least 4 kB. In operation, the tiles can be loaded from and stored to memory using load and store operations. Depending upon the instruction encoding scheme used, the amount of available application memory, as well as the size, number, and configuration of available tiles, varies.[0062] Figure 1B illustrates an embodiment of configured tiles. As shown, 4 kB of application memory 122 have stored thereon 2 pairs of 1 kB tiles, the first pair being tile t4L 124 and tile t4R 126, and the second pair being tile t5L 128 and tile t5R 130. As shown, the pairs of tiles are divided into a left tile and a right tile. In other embodiments, the pair of tiles are divided into an even tile and an odd tile. In this example, the 4 tiles each have elements arranged in rows and columns. Tile t4L 124 and tile t4R 126 have K rows and N columns of 4-byte elements (e.g., single precision floating-point data), where K equals 8 and N equals 32. Tile t5L 128 and tile t5R 130 have K rows and N/2 columns of 8-byte elements (e.g., double precision floating-point data). As the double precision operands are twice the width of single precision, this configuration is consistent with a palette, used to provide tile options, supplying at least 2 names with total storage of at least 4 kB. The four tiles of Figure 1A use 4 names, each naming a 1 kB tile, whereas the 2 pairs of tiles in Figure 1B can use 2 names to specify the paired tiles. In some embodiments, tile instructions accept a name of a paired tile as an operand. In operation, the tiles can be loaded from and stored to memory using load and store operations. Depending upon the instruction encoding scheme used, the amount of available application memory, as well as the size, number, and configuration of available tiles, varies.[0063] In some embodiments, tile parameters are definable. For example, a “palette” is used to provide tile options. Exemplary options include, but are not limited to: the number of tile names, the number of bytes in a row of storage, the number of rows and columns in a tile, etc. For example, a maximum “height” (number of rows) of a tile may be defined as:[0064] Tile Max Rows = Architected Storage / (The Number of Palette Names * The Number of Bytes per row).[0065] As such, an application can be written such that a fixed usage of names will be able to take advantage of different storage sizes across implementations.[0066] Configuration of tiles is done using a matrix (tile) configuration (“TILECONFIG”) instruction, where a particular tile usage is defined in a selected palette. This declaration includes the number of tile names to be used, the requested number of rows and columns per name (tile), and, in some embodiments, the requested datatype of each tile. In some embodiments, consistency checks are performed during the execution of a TILECONFIG instruction to determine that it matches the restrictions of the palette entry.
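The “Tile Max Rows” relation in [0064] can be evaluated directly. A minimal sketch follows, in which the 8 kB of architected storage, 8 palette names, and 64-byte rows are assumed example values rather than values mandated by the architecture:

    #include <stdio.h>

    /* Tile Max Rows = Architected Storage / (Palette Names * Bytes per Row) */
    static unsigned tile_max_rows(unsigned storage_bytes,
                                  unsigned num_palette_names,
                                  unsigned bytes_per_row)
    {
        return storage_bytes / (num_palette_names * bytes_per_row);
    }

    int main(void)
    {
        printf("%u\n", tile_max_rows(8192, 8, 64));   /* prints 16 */
        return 0;
    }

With this relation, a fixed usage of tile names scales automatically to whatever storage a given implementation provides.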
EXEMPLARY TILE STORAGE TYPES[0067] Figure 2 illustrates several examples of matrix storage. In (A), a tile is stored in memory. As shown, each “row” consists of four packed data elements. To get to the next “row,” a stride value is used. Note that rows may be consecutively stored in memory. Strided memory accesses allow for access of one row to the next when the tile storage does not map the underlying memory array row width.[0068] Tile loads from memory and stores to memory are typically strided accesses from the application memory to packed rows of data. Exemplary TILELOAD and TILESTORE instructions, or other instruction references to application memory as a TILE operand in load-op instructions, are, in some embodiments, restartable to handle (up to) 2*rows of page faults, unmasked floating-point exceptions, and/or interrupts per instruction; a strided row copy of this kind is sketched at the end of this subsection.[0069] In (B), a matrix is stored in a tile comprised of a plurality of registers such as packed data registers (single instruction, multiple data (SIMD) or vector registers). In this example, the tile is overlaid on three physical registers. Typically, consecutive registers are used; however, this need not be the case.[0070] In (C), a matrix is stored in a tile in non-register storage accessible to a fused multiply accumulate (FMA) circuit used in tile operations. This storage may be inside of an FMA, or adjacent to it. Additionally, in some embodiments, discussed below, the storage may be for a data element and not an entire row or tile.[0071] The supported parameters for the TMMA architecture are reported via CPUID. In some embodiments, the list of information includes a maximum height and a maximum SIMD dimension. Configuring the TMMA architecture requires specifying the dimensions for each tile, the element size for each tile, and the palette identifier. This configuration is done by executing the TILECONFIG instruction.[0072] Successful execution of a TILECONFIG instruction enables subsequent TILE operators. A TILERELEASEALL instruction clears the tile configuration and disables the TILE operations (until the next TILECONFIG instruction executes). In some embodiments, XSAVE, XSTORE, etc. are used in context switching using tiles. In some embodiments, 2 XCR0 bits are used in XSAVE, one for TILECONFIG metadata and one bit corresponding to actual tile payload data. [0073] TILECONFIG not only configures the tile usage, but also sets a state variable indicating that the program is in a region of code with tiles configured. An implementation may enumerate restrictions on other instructions that can be used with a tile region such as no usage of an existing register set, etc.
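A minimal sketch of such a strided row access, assuming a dense destination tile buffer and a byte stride between consecutive source rows (names are illustrative, not part of any instruction set):

    #include <stdint.h>
    #include <string.h>

    /* Copy 'rows' rows of 'row_bytes' each from application memory into a
     * dense tile buffer, stepping 'stride' bytes to reach each next row. */
    void tile_load_strided(uint8_t *tile, const uint8_t *mem,
                           size_t rows, size_t row_bytes, size_t stride)
    {
        for (size_t r = 0; r < rows; r++)
            memcpy(tile + r * row_bytes, mem + r * stride, row_bytes);
    }

A store reverses the copy direction, and restartability can be pictured as resuming this loop at the faulting row.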
[0074] Exiting a tile region is typically done with the TILERELEASEALL instruction. It takes no parameters and swiftly invalidates all tiles (indicating that the data no longer needs any saving or restoring) and clears the internal state corresponding to being in a tile region.[0075] In some embodiments, tile operations will zero any rows and any columns beyond the dimensions specified by the tile configuration. For example, tile operations will zero the data beyond the configured number of columns (factoring in the size of the elements) as each row is written. For example, with 64-byte rows and a tile configured with 10 rows and 12 columns, an operation writing FP32 elements would write each of the first 10 rows with 12*4 bytes of output/result data and zero the remaining 4*4 bytes in each row. Tile operations also fully zero any rows after the first 10 configured rows. When using a 1K tile with 64-byte rows, there would be 16 rows, so in this example, the last 6 rows would also be zeroed.[0076] In some embodiments, a context restore instruction (e.g., XRSTOR), when loading data, enforces that the data beyond the configured rows for a tile will be maintained as zero. If there is no valid configuration, all rows are zeroed. XRSTOR of tile data can load garbage in the columns beyond those configured. It should not be possible for XRSTOR to clear beyond the number of columns configured because there is not an element width associated with the tile configuration.[0077] Context save (e.g., XSAVE) exposes the entire TILE storage area when writing it to memory. If XRSTOR loaded garbage data into the rightmost part of a tile, that data will be saved by XSAVE. XSAVE will write zeros for rows beyond the number specified for each tile.[0078] In some embodiments, tile instructions are restartable. The operations that access memory allow restart after page faults. The computational instructions that deal with floating-point operations also allow for unmasked floating-point exceptions, with the masking of the exceptions controlled by a control and/or status register.[0079] To support restarting instructions after these events, the instructions store information in the start registers detailed below.MATRIX (TILE) OPERATION SYSTEMSEXEMPLARY HARDWARE SUPPORT[0080] Figure 3 illustrates an embodiment of a system utilizing a matrix (tile) operations accelerator. In this illustration, a host processor/processing system 301 communicates commands 311 (e.g., matrix manipulation operations such as arithmetic operations, or load and store operations) to a matrix operations accelerator 307. However, this arrangement is shown for discussion purposes only. As detailed later, this accelerator 307 may be a part of a processing core. Typically, commands 311 that are tile manipulation operator
instructions will refer to tiles as register-register (“reg-reg”) or register-memory (“reg-mem”) format. Other commands such as TILESTORE, TILELOAD, TILECONFIG, etc., do not perform data operations on a tile. Commands may be decoded instructions (e.g., micro-ops) or macro-instructions for the accelerator 307 to handle.

[0081] In this example, a coherent memory interface 303 is coupled to the host processor/processing system 301 and matrix operations accelerator 307 such that they can share memory. Figures 4 and 5 show different embodiments of how memory is shared using a matrix operations accelerator. As shown in Figure 4, the host processor 401 and matrix operations accelerator circuitry 405 share the same memory 403. Figure 5 illustrates an embodiment where the host processor 501 and matrix operations accelerator 505 do not share memory but can access each other’s memory. For example, processor 501 can access tile memory 507 and utilize its host memory 503 as normal. Similarly, the matrix operations accelerator 505 can access host memory 503, but more typically uses its own memory 507. Note these memories may be of different types.

[0082] In some embodiments, tiles are supported using an overlay over physical registers. For example, a tile may utilize 16 1,024-bit registers, 32 512-bit registers, etc., depending on the implementation. In some embodiments, the matrix operations utilize 2-dimensional (2-D) data structures representing one or more packed regions of memory such as registers. Throughout this description, these 2-D data structures are referred to as tiles or tile registers.

[0083] In some embodiments, the matrix operations accelerator 307 includes a plurality of FMAs 309 coupled to data buffers 305 (in some implementations, one or more of these buffers 305 are stored in the FMAs of the grid as shown). The data buffers 305 buffer tiles loaded from memory and/or tiles to be stored to memory (e.g., using a tileload or tilestore instruction). Data buffers may be, for example, a plurality of registers. Typically, these FMAs are arranged as a grid of chained FMAs 309 which are able to read and write tiles. In this example, the matrix operations accelerator 307 is to perform a matrix multiply operation using tiles T0, T1, and T2. At least one of the tiles is housed in the FMA grid 309. In some embodiments, all tiles in an operation are stored in the FMA grid 309. In other embodiments, only a subset is stored in the FMA grid 309. As shown, T1 is housed and T0 and T2 are not. Note that A, B, and C refer to the matrices of these tiles, which may or may not take up the entire space of the tile.

[0084] Figure 6 illustrates an embodiment of a matrix multiply accumulate operation using tiles (“TMMA”).

[0085] The number of rows in the matrix (TILE A 601) matches the number of serial (chained) FMAs comprising the computation’s latency. An implementation is free to recirculate on a grid of smaller height, but the computation remains the same.
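A scalar model may help fix the dataflow of the tile matrix multiply accumulate described in [0083]-[0085]: each destination element accumulates a chain of K fused multiply-adds, which the hardware implements as a grid of chained FMAs. The C sketch below uses illustrative names and is a functional model, not the accelerator's implementation:

    /* C (M x N) += A (M x K) * B (K x N), all row-major fp32. The inner loop
     * over k corresponds to the chain of K serial FMAs per result element. */
    void tmma_model(float *c, const float *a, const float *b,
                    int m_dim, int k_dim, int n_dim)
    {
        for (int m = 0; m < m_dim; m++) {
            for (int n = 0; n < n_dim; n++) {
                float acc = c[m * n_dim + n];    /* incoming summand from tile C */
                for (int k = 0; k < k_dim; k++)  /* one chained FMA per step */
                    acc += a[m * k_dim + k] * b[k * n_dim + n];
                c[m * n_dim + n] = acc;
            }
        }
    }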
[0086] The source/destination vector comes from a tile of N rows (TILE C 605) and the grid of FMAs 611 performs N vector-matrix operations resulting in a complete instruction performing a matrix multiplication of tiles. Tile B 603 is the other vector source and supplies “broadcast” terms to the FMAs in each stage.

[0087] In operation, in some embodiments, the elements of matrix B (stored in a tile B 603) are spread across the rectangular grid of FMAs. Matrix A (stored in tile A 601) has its elements of a row transposed to match up with the columnar dimension of the rectangular grid of FMAs. At each FMA in the grid, an element of A and B are multiplied and added to the incoming summand (from above in the Figure) and the outgoing sum is passed to the next row of FMAs (or the final output).

[0088] The latency of a single step is proportional to K (row height of matrix B) and dependent TMMAs typically have enough source-destination rows (either in a single tile or across tiles) to hide that latency. An implementation may also split the SIMD (packed data element) dimension M (row height of matrix A) across time steps, but this simply changes the constant that K is multiplied by. When a program specifies a smaller K than the maximum enumerated by the TMACC, an implementation is free to implement this with “masking” or “early outs.”

[0089] The latency of an entire TMMA is proportional to N*K. The repeat rate is proportional to N. The number of MACs per TMMA instruction is N*K*M.

[0090] Figure 7 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply accumulate is operating on signed sources wherein the accumulator is 2x the input data size.

[0091] A first signed source (source 1 701) and a second signed source (source 2 703) each have four packed data elements. Each of these packed data elements stores signed data such as floating-point data. A third signed source (source 3 709) has two packed data elements, each of which stores signed data. The sizes of the first and second signed sources 701 and 703 are half that of the third signed source (initial value or previous result) 709. For example, the first and second signed sources 701 and 703 could have 32-bit packed data elements (e.g., single precision floating-point) while the third signed source 709 could have 64-bit packed data elements (e.g., double precision floating-point).

[0092] In this illustration, only the two most significant packed data element positions of the first and second signed sources 701 and 703 and the most significant packed data element position of the third signed source 709 are shown. Of course, the other packed data element positions would also be processed.
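The dataflow detailed in the following paragraphs (pairwise multiplies, a sum of products, and accumulation into a 2x-wide element) can be modeled in a few lines of C, assuming single-precision source pairs and a double-precision accumulator per the example of [0091]. The function name is illustrative:

    /* One iteration at one destination element position: two source pairs are
     * multiplied (circuits 705 and 707), the products are summed (adder 711),
     * and the sum is added to the wider accumulator (adder 713). */
    double chained_fma_iteration(float src1_hi, float src1_lo,
                                 float src2_hi, float src2_lo,
                                 double accumulator)
    {
        double p0 = (double)src1_hi * (double)src2_hi;
        double p1 = (double)src1_lo * (double)src2_lo;
        return accumulator + (p0 + p1);
    }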
[0093] As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first and second signed sources 701 and 703 are multiplied using a multiplier circuit 705, and the data from the second most significant packed data element positions of the first and second signed sources 701 and 703 are multiplied using a multiplier circuit 707. In some embodiments, these multiplier circuits 705 and 707 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 709. The results of each of the multiplications are added using addition circuitry 711.

[0094] The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the signed source 3 709 (using a different adder 713 or the same adder 711).

[0095] Finally, the result of the second addition is either stored into the signed destination 715 in a packed data element position that corresponds to the packed data element position used from the signed third source 709 or passed on to the next iteration if there is one. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

[0096] Figure 8 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply accumulate is operating on signed sources wherein the accumulator is 2x the input data size.

[0097] A first signed source (source 1 801) and a second signed source (source 2 803) each have four packed data elements. Each of these packed data elements stores signed data such as integer data. A third signed source (source 3 809) has two packed data elements, each of which stores signed data. The sizes of the first and second signed sources 801 and 803 are half that of the third signed source 809. For example, the first and second signed sources 801 and 803 could have 32-bit packed data elements (e.g., single precision floating-point) while the third signed source 809 could have 64-bit packed data elements (e.g., double precision floating-point).

[0098] In this illustration, only the two most significant packed data element positions of the first and second signed sources 801 and 803 and the most significant packed data element position of the third signed source 809 are shown. Of course, the other packed data element positions would also be processed.

[0099] As illustrated, packed data elements are processed in pairs. For example, the data of the most significant packed data element positions of the first and second signed sources 801 and
803 are multiplied using a multiplier circuit 805, and the data from the second most significant packed data element positions of the first and second signed sources 801 and 803 are multiplied using a multiplier circuit 807. In some embodiments, multiplier circuits 805 and 807 perform the multiplications with infinite precision without saturation and use adder/saturation circuitry 813 to saturate the results of the accumulation to plus or minus infinity in case of an overflow and to zero in case of any underflow. In other embodiments, multiplier circuits 805 and 807 perform the saturation themselves. In some embodiments, these multiplier circuits 805 and 807 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source (initial value or previous iteration result) 809. The results of each of the multiplications are added to the signed third source 809 using addition/saturation circuitry 813.

[00100] Addition/saturation (accumulator) circuitry 813 preserves a sign of an operand when the addition results in a value that is too big. In particular, saturation evaluation occurs on the infinite precision result between the multi-way-add and the write to the destination or next iteration. When the accumulator 813 is floating-point and the input terms are integer, the sum of products and the floating-point accumulator input value are turned into infinite precision values (fixed point numbers of hundreds of bits), the addition of the multiplication results and the third input is performed, and a single rounding to the actual accumulator type is performed.

[00101] Unsigned saturation means the output values are limited to a maximum unsigned number for that element width (all 1s). Signed saturation means a value is limited to be in the range between a minimum negative number and a maximum positive number for that element width (for bytes, for example, the range is from -128 (= -2^7) to 127 (= 2^7 - 1)).

[00102] The result of the addition and saturation check is stored into the signed result 815 in a packed data element position that corresponds to the packed data element position used from the signed third source 809 or passed on to the next iteration if there is one. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

[00103] Figure 9 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the chained fused multiply accumulate is operating on a signed source and an unsigned source wherein the accumulator is 4x the input data size.

[00104] A first signed source (source 1 901) and a second unsigned source (source 2 903) each have four packed data elements. Each of these packed data elements has data such as floating-
point or integer data. A third signed source (initial value or result 915) has a packed data element which stores signed data. The sizes of the first and second sources 901 and 903 are a quarter of the third signed source 915. For example, the first and second sources 901 and 903 could have 16-bit packed data elements (e.g., word) and the third signed source 915 could have 64-bit packed data elements (e.g., double precision floating-point or 64-bit integer).

[00105] In this illustration, the four most significant packed data element positions of the first and second sources 901 and 903 and the most significant packed data element position of the third signed source 915 are shown. Of course, other packed data element positions would also be processed if there are any.

[00106] As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 905, data from the second most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 907, data from the third most significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 909, and data from the least significant packed data element positions of the first and second sources 901 and 903 are multiplied using a multiplier circuit 911. In some embodiments, the signed packed data elements of the first source 901 are sign extended and the unsigned packed data elements of the second source 903 are zero extended prior to the multiplications.

[00107] In some embodiments, these multiplier circuits 905-911 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the signed third source 915. The results of each of the multiplications are added using addition circuitry 913.

[00108] The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the signed source 3 915 (using a different adder 917 or the same adder 913).

[00109] Finally, the result 919 of the second addition is either stored into the signed destination in a packed data element position that corresponds to the packed data element position used from the signed third source 915 or passed to the next iteration. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

[00110] Figure 10 illustrates an embodiment of a subset of the execution of an iteration of a chained fused multiply accumulate instruction. In particular, this illustrates execution circuitry of an iteration of one packed data element position of the destination. In this embodiment, the
chained fused multiply accumulate is operating on a signed source and an unsigned source wherein the accumulator is 4x the input data size.

[00111] A first signed source 1001 and a second unsigned source 1003 each have four packed data elements. Each of these packed data elements stores data such as floating-point or integer data. A third signed source 1015 (initial or previous result) has a packed data element which stores signed data. The sizes of the first and second sources are a quarter of the third signed source 1015 (initial or previous result). For example, the first and second sources could have 16-bit packed data elements (e.g., word) and the third signed source 1015 (initial or previous result) could have 64-bit packed data elements (e.g., double precision floating-point or 64-bit integer).

[00112] In this illustration, the four most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 and the most significant packed data element position of the third signed source 1015 are shown. Of course, other packed data element positions would also be processed if there are any.

[00113] As illustrated, packed data elements are processed in quadruplets. For example, the data of the most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1005, data from the second most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1007, data from the third most significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1009, and data from the least significant packed data element positions of the first signed source 1001 and the second unsigned source 1003 are multiplied using a multiplier circuit 1011. In some embodiments, the signed packed data elements of the first signed source 1001 are sign extended and the unsigned packed data elements of the second unsigned source 1003 are zero extended prior to the multiplications.

[00114] In some embodiments, these multiplier circuits 1005-1011 are reused for other packed data element positions. In other embodiments, additional multiplier circuits are used so that the packed data elements are processed in parallel. In some contexts, parallel execution is done using lanes that are the size of the third signed source 1015 (initial or previous result). The result of the addition of the results of the multiplications is added to the data from the most significant packed data element position of the third signed source 1015 (initial or previous result) using adder/saturation 1013 circuitry.

[00115] Addition/saturation (accumulator) circuitry 1013 preserves a sign of an operand when the addition results in a value that is too big or too small for signed saturation. In particular, saturation evaluation occurs on the infinite precision result between the multi-way-add and the write to the destination. When the accumulator 1013 is floating-point and the input terms are
integer, the sum of products and the floating-point accumulator input value are turned into infinite precision values (fixed point numbers of hundreds of bits), the addition of the multiplication results and the third input is performed, and a single rounding to the actual accumulator type is performed.

[00116] The result 1019 of the addition and saturation check is stored into the signed destination in a packed data element position that corresponds to the packed data element position used from the third signed source 1015 (initial or previous result) or passed to the next iteration. In some embodiments, a writemask is applied to this storage such that if a corresponding writemask (bit) is set, the storage happens, and, if not set, the storage does not happen.

[00117] Figure 11 illustrates power-of-two sized SIMD implementations wherein the accumulators use input sizes that are larger than the inputs to the multipliers according to an embodiment. Note the source (to the multipliers) and accumulator values may be signed or unsigned values. For an accumulator having 2X input sizes (in other words, the accumulator input value is twice the size of the packed data element sizes of the sources), table 1101 illustrates different configurations. For byte sized sources, the accumulator uses word or half-precision floating-point (HPFP) values that are 16-bit in size. For word sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32-bit in size. For SPFP or 32-bit integer sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64-bit in size.

[00118] For an accumulator having 4X input sizes (in other words, the accumulator input value is four times the size of the packed data element sizes of the sources), table 1103 illustrates different configurations. For byte sized sources, the accumulator uses 32-bit integer or single-precision floating-point (SPFP) values that are 32-bit in size. For word sized sources, the accumulator uses 64-bit integer or double-precision floating-point (DPFP) values that are 64-bit in size in some embodiments.

[00119] For an accumulator having 8X input sizes (in other words, the accumulator input value is eight times the size of the packed data element sizes of the sources), table 1105 illustrates a configuration. For byte sized sources, the accumulator uses 64-bit integer values.

[00120] As hinted at earlier, matrix operations circuitry may be included in a core, or as an external accelerator. Figure 12 illustrates an embodiment of a system utilizing matrix operations circuitry. In this illustration, multiple entities are coupled with a ring interconnect 1245.

[00121] A plurality of cores, core 0 1201, core 1 1203, core 2 1205, and core N 1207, provide non-tile-based instruction support. In some embodiments, matrix operations circuitry 1251 is provided in a core 1203, and in other embodiments matrix operations circuitry 1211 and 1213 are accessible on the ring interconnect 1245.
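As a concrete instance of the 4X row of table 1103, the quadruplet flow of Figures 9 and 10 for byte-sized sources can be sketched in C as follows; saturation and the floating-point accumulator cases are omitted, and the names are illustrative:

    #include <stdint.h>

    /* Four signed bytes from source 1 and four unsigned bytes from source 2
     * are sign/zero extended, multiplied pairwise, summed, and accumulated
     * into a 32-bit (4x-wide) signed accumulator. */
    int32_t quad_mac_4x(const int8_t s1[4], const uint8_t s2[4],
                        int32_t accumulator)
    {
        int32_t sum = 0;
        for (int i = 0; i < 4; i++)
            sum += (int32_t)s1[i] * (int32_t)s2[i];
        return accumulator + sum;
    }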
[00122] Additionally, one or more memory controllers 1223-1225 are provided to communicate with memory 1233 and 1231 on behalf of the cores and/or matrix operations circuitry.

[00123] Figure 13 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles. Branch prediction and decode circuitry 1303 performs branch predicting of instructions, decoding of instructions, and/or both from instructions stored in instruction storage 1301. For example, instructions detailed herein may be stored in instruction storage. In some implementations, separate circuitry is used for branch prediction, and in some embodiments, at least some instructions are decoded into one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals using microcode 1305. The branch prediction and decode circuitry 1303 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.

[00124] The branch prediction and decode circuitry 1303 is coupled to allocate/rename 1307 circuitry which is coupled, in some embodiments, to scheduler circuitry 1309. In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

[00125] The scheduler circuitry 1309 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler circuitry 1309 is coupled to, or includes, physical register file(s) 1315. Each of the physical register file(s) 1315 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), tiles, etc. In one embodiment, the physical register file(s) 1315 comprises vector registers circuitry, write mask registers circuitry, and scalar registers circuitry. These register circuits may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) 1315 is overlapped by a retirement circuit 1317 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of
registers; etc.). The retirement circuit 1317 and the physical register file(s) 1315 are coupled to the execution circuitry 1311.

[00126] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

[00127] The execution circuitry 1311 is a set of one or more execution circuits, including scalar circuitry 1321, vector/SIMD circuitry 1323, and matrix operations circuitry 1327, as well as memory access circuitry 1325 to access cache 1313. The execution circuits perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scalar circuitry 1321 performs scalar operations, the vector/SIMD circuitry 1323 performs vector/SIMD operations, and matrix operations circuitry 1327 performs matrix (tile) operations detailed herein.

[00128] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs fetch and length decoding stages; 2) the branch and decode circuitry 1303 performs a decode stage; 3) the allocate/rename 1307 circuitry performs an allocation stage and a renaming stage; 4) the scheduler circuitry 1309 performs a schedule stage; 5) physical register file(s) (coupled to, or included in, the scheduler circuitry 1309 and allocate/rename 1307 circuitry) and a memory unit perform a register read/memory read stage; 6) the execution circuitry 1311 performs an execute stage; 7) a memory unit and the physical register file(s) unit(s) perform a write back/memory write stage; 8) various units may be involved in the exception handling stage; and 9) a retirement unit and the physical register file(s) unit(s) perform a commit stage.

[00129] The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1390 includes logic to support a packed data instruction set
extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

[00130] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

[00131] Figure 14 illustrates an embodiment of a processor core pipeline supporting matrix operations using tiles. Branch prediction and decode circuitry 1403 performs branch predicting of instructions, decoding of instructions, and/or both from instructions stored in instruction storage 1401. For example, instructions detailed herein may be stored in instruction storage. In some implementations, separate circuitry is used for branch prediction, and in some embodiments, at least some instructions are decoded into one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals using microcode 1405. The branch prediction and decode circuitry 1403 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc.

[00132] The branch prediction and decode circuitry 1403 is coupled to allocate/rename 1407 circuitry which is coupled, in some embodiments, to scheduler circuitry 1409. In some embodiments, these circuits provide register renaming, register allocation, and/or scheduling functionality by performing one or more of: 1) renaming logical operand values to physical operand values (e.g., a register alias table in some embodiments), 2) allocating status bits and flags to the decoded instruction, and 3) scheduling the decoded instruction for execution on execution circuitry out of an instruction pool (e.g., using a reservation station in some embodiments).

[00133] The scheduler circuitry 1409 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler circuitry 1409 is coupled to, or includes, physical register file(s) 1415. Each of the physical register file(s) 1415 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), tiles, etc. In one embodiment, the physical register file(s) 1415 comprises vector registers circuitry, write mask registers circuitry, and scalar
registers circuitry. These register circuits may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) 1415 is overlapped by a retirement circuit 1417 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement circuit 1417 and the physical register file(s) 1415 are coupled to the execution circuitry 1411.

[00134] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor may also include separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

[00135] The execution circuitry 1411 includes a set of one or more execution circuits 1427 and a set of one or more memory access circuits 1425 to access cache 1413. The execution circuits 1427 perform matrix (tile) operations detailed herein.

[00136] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement a pipeline as follows: 1) an instruction fetch circuit performs fetch and length decoding stages; 2) the branch and decode circuitry 1403 performs a decode stage; 3) the allocate/rename 1407 circuitry performs an allocation stage and a renaming stage; 4) the scheduler circuitry 1409 performs a schedule stage; 5) physical register file(s) (coupled to, or included in, the scheduler circuitry 1409 and allocate/rename 1407 circuitry) and a memory unit perform a register read/memory read stage; 6) the execution circuitry 1411 performs an execute stage; 7) a memory unit and the physical register file(s) unit(s) perform a write back/memory write stage; 8) various units may be involved in the exception handling stage; and 9) a retirement unit and the physical register file(s) unit(s) perform a commit stage.

[00137] The core may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1490 includes logic to support a packed data instruction set
[00138] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).LAYOUT[00139] Throughout this description, data is expressed using row major data layout. Column major users should translate the terms according to their orientation. Figure 15 illustrates an example of a matrix expressed in row major format and column major format. As shown, matrix A is a 2x3 matrix. When this matrix is stored in row major format, the data elements of a row are consecutive. When this matrix is stored in column major format, the data elements of a column are consecutive. It is a well-known property of matrices that d7* BT= (BA)T, where superscript T means transform. Reading column major data as row major data results in the matrix looking like the transform matrix.[00140] In some embodiments, row-major semantics are utilized in hardware, and column major data is to swap the operand order with the result being transforms of matrix, but for subsequent column-major reads from memory it is the correct, non-transformed matrix.[00141] For example, if there are two column-major matrices to multiply: a b g i k ag+bh ai+bj ak+bl c d * h j 1 = cg+dh ci+dj ck+dl e f eg+fh ei+fj ek+fl(3x2) (2x3) (3x3)[00142] The input matrices would be stored in linear memory (column-major) as: a c e b d f and g h i j k l.[00143] Reading those matrices as row-major with dimensions 2x3 and 3x2, they would appear as: a c e and g h b d f i j k l[00144] Swapping the order and matrix multiplying: g h a c e ag+bh cg+dh eg+fh
i j * b d f = ai+bj ci+dj ei+fj k 1 ak+bl ck+dl ek+fl[00145] The transform matrix is out and can then be stored in in row-major order: ag+bh cg+dh eg+fh ai+bj ci+dj ei+fj ak+bl ck+dl ek+fl[00146] and used in subsequent column major computations, it is the correct un-transformed matrix: ag+bh ai+bj ak+bl cg+dh ci+dj ck+dl eg+fh ei+fj ek+flEXEMPLARY USAGE[00147] Figure 16 illustrates an example of usage of matrices (e.g., tiles). In this example, matrix C 1601 includes two tiles, matrix A 1603 includes one tile, and matrix B 1605 includes two tiles. This figure shows an example of the inner loop of an algorithm to compute a matrix multiplication. In this example, two result tiles, tmmO and tmml, from matrix C 1601 are used to accumulate the intermediate results. One tile from the matrix A 1603 (tmm2) is re-used twice as it multiplied by two tiles from matrix B 1605. Pointers to load a new A matrix (tile) and two new B matrices (e.g., tiles) from the directions indicated by the arrows. An outer loop, not shown, adjusts the pointers for the C tiles.[00148] The exemplary code as shown includes the usage of a tile configuration instruction and is executed to configure tile usage, load tiles, a loop to process the tiles, store tiles to memory, and release tile usage.[00149] Figure 17 illustrates an embodiment of usage of matrices (e.g., tiles). At 1701, tile usage is configured. For example, a TILECONFIG instruction is executed to configure tile usage including setting a number of rows and columns per tile. Typically, at least one matrix (tile) is loaded from memory at 1703. At least one matrix (tile) operation is performed at 1705 using the matrices (e.g., tiles). At 1707, at least one matrix (tile) is stored out to memory and a context switch can occur at 1709.EXEMPLARY CONFIGURATIONTILE CONFIGURATION HARDWARE SUPPORT[00150] As discussed above, tile usage typically needs to be configured prior to use. For example, full usage of all rows and columns may not be needed. Not only does not configuring these rows and columns save power in some embodiments, but the configuration may be used to
determine if an operation will generate an error. For example, a matrix multiplication of the form (N x M) * (L x N) will typically not work if M and L are not the same.

[00151] Prior to using matrices using tiles, in some embodiments, tile support is to be configured. For example, how many rows and columns per tile, which tiles are to be used, etc. are configured. A TILECONFIG instruction is an improvement to a computer itself as it provides for support to configure the computer to use a matrix accelerator (either as a part of a processor core, or as an external device). In particular, an execution of the TILECONFIG instruction causes a configuration to be retrieved from memory and applied to matrix (tile) settings within a matrix accelerator.

TILE USAGE CONFIGURATION

[00152] Figure 18 illustrates support for configuration of the usage of tiles according to an embodiment. A memory 1801 contains the tile description 1803 of the matrices (e.g., tiles) to be supported.

[00153] Instruction execution resources 1811 of a processor/core 1805 store aspects of a tile description 1803 into tile configurations 1817. The tile configurations 1817 include a palette table 1813 to detail what tiles for a palette are configured (the number of rows and columns in each tile) and a marking that matrix support is in use. In particular, instruction execution resources 1811 are configured to use tiles as specified by the tile configurations 1817. The instruction execution resources 1811 may also include a machine specific register or configuration register to indicate tile usage. Additional values such as in-use and start values are also set. The tile configurations 1817 utilize register(s) 1819 to store tile usage and configuration information.

[00154] Figure 19 illustrates an embodiment of a description of the matrices (e.g., tiles) to be supported. This is the description that is to be stored upon an execution of a STTILECFG instruction. In this example, each field is a byte. In byte [0], a palette ID 1901 is stored. The palette ID is used to index a palette table 1813 which stores, per palette ID, a number of bytes in a tile, and bytes per row of the tiles that are associated with this ID as defined by the configuration.

[00155] Byte 1 stores a value to be stored in a “startRow” register 1903 and byte 2 stores a value to be stored in a register, startP 1905. To support restarting instructions after break events such as those detailed above, the instructions store information in these registers. The startRow value indicates the row that should be used for restart. The startP value indicates the position within the row for store operations when pairs are used and, in some embodiments,
indicates the lower half of the row (in the lower tile of a pair) or the higher half of the row (in the higher tile of a pair). Generally, the position in the row (the column) is not needed.

[00156] With the exception of TILECONFIG and STTILECFG, successfully executing matrix (tile) instructions will set both startRow and startP to zero.

[00157] Any time an interrupted matrix (tile) instruction is not restarted, it is the responsibility of software to zero the startRow and startP values. For example, unmasked floating-point exception handlers might decide to finish the operation in software and change the program counter value to another instruction, usually the next instruction. In this case the software exception handler must zero the startRow and startP values in the exception presented to it by the operating system before resuming the program. The operating system will subsequently reload those values using a restore instruction.

[00158] Byte 3 stores an indication of pairs (1 bit per tile) of tiles 1907.

[00159] Bytes 16-17 store the number of rows 1913 and columns 1915 for tile 0, bytes 18-19 store the number of rows and columns for tile 1, etc. In other words, each 2-byte group specifies a number of rows and columns for a tile. If a group of 2 bytes is not used to specify tile parameters, they should have the value zero. Specifying tile parameters for more tiles than the implementation limit or the palette limit results in a fault. Unconfigured tiles are set to an initial state with 0 rows, 0 columns.

[00160] Finally, the configuration in memory typically ends with an ending delineation such as all zeros for several consecutive bytes.

EXEMPLARY TILE AND TILE CONFIGURATION STORAGE

[00161] Figures 20(A)-(D) illustrate examples of register(s) 1819. Figure 20(A) illustrates a plurality of registers 1819. As shown, each tile (TMM0 2001 ... TMMN 2003) has a separate register, with each register storing a row and column size for that particular tile. StartP 2011 and StartRow 2013 are stored in separate registers. One or more status registers 2015 are set (e.g., TILES CONFIGURED = 1) to indicate tiles are configured for use.

[00162] Figure 20(B) illustrates a plurality of registers 1819. As shown, each tile has separate registers for its rows and columns. For example, TMM0 rows configuration 2021, TMM0 columns configuration 2023, StartP 2011 and StartRow 2013 are stored in separate registers. One or more status registers 2015 are set (e.g., TILES CONFIGURED = 1) to indicate tiles are configured for use.

[00163] Figure 20(C) illustrates a single register 1819. As shown, this register stores the tile configurations (rows and columns per tile) 2031, StartP 2011, and StartRow 2013 in a
single register as packed data. One or more status registers 2015 are set (e.g., TILES CONFIGURED = 1) to indicate tiles are configured for use.

[00164] Figure 20(D) illustrates a plurality of registers 1819. As shown, a single register stores the tile configuration (rows and columns per tile) 2031. StartP and StartRow are stored in separate registers 2011 and 2013. One or more status registers 2015 are set (e.g., TILES CONFIGURED = 1) to indicate tiles are configured for use.

[00165] Other combinations are contemplated, such as combining the start registers into a single register where they are shown separately, etc.

TILEDPFP16PS

[00166] As mentioned above, special hardware for General Matrix Multiplication (a.k.a. GEMM) is a good option for improving the peak compute performance (and energy efficiency) of certain applications, such as deep learning. Some of these applications, including deep learning, can operate on input data elements with relatively few bits without losing accuracy, as long as the output elements have enough bits (i.e., more than the inputs).

[00167] Accordingly, disclosed methods and systems perform a 16-bit floating-point matrix dot product operation (TILEDPFP16PS) that takes source matrices (e.g., tiles) having 16-bit floating-point elements, performs dot product multiplications, and accumulates the resulting products with a 32-bit single-precision destination.

[00168] In certain embodiments, one 16-bit floating point format is a sixteen bit wide Institute of Electrical and Electronics Engineers (IEEE) (e.g., IEEE 754 standard) half-precision binary floating-point format (IEEE float16) having a sign field (one bit wide), an exponent field (five bits wide), and a mantissa (significand precision) field (eleven bits implicitly stored, i.e., ten bits wide explicitly stored). In certain embodiments, another 16-bit floating point format is a sixteen bit wide brain floating point format (bfloat16) having a sign field (one bit wide), an exponent field (eight bits wide), and a mantissa (significand precision) field (eight bits implicitly stored, i.e., seven bits wide explicitly stored). In certain embodiments, a mantissa (significand precision) field is presumed to have an implicit leading bit with value of one, unless the exponent field is stored with all zeros. Further, a 32-bit floating-point format may include binary32 (according to an IEEE standard), which is sometimes referred to herein as “single-precision” or “fp32”, e.g., having a sign field (one bit wide), an exponent field (eight bits wide), and a mantissa (significand precision) field (twenty-four bits implicitly stored, i.e., twenty-three bits wide explicitly stored).

[00169] In certain embodiments, the disclosed TILEDPFP16PS instruction is to be executed by a processor that includes fetch circuitry to fetch an instruction having fields to specify an opcode
and locations of an M by N destination matrix (tile) having single-precision elements, an M by K first source matrix (tile), and a K by N second source matrix (tile), elements of the specified first and second source matrices including a pair of even (e.g., having a pair index of zero in the (0,1) pairs in each element of the sources in Figure 21A) and odd (e.g., having a pair index of one in the (0,1) pairs in each element of the sources in Figure 21A) 16-bit floating-point values, wherein the opcode is to indicate execution circuitry is to, for each element (e.g., each of MxN number of elements) of the specified destination matrix (e.g., tile), convert K pairs of elements from row M of the specified first source matrix (e.g., tile) and K corresponding pairs of elements from column N of the specified second source matrix (e.g., tile) to single-precision values, multiply the converted even elements from the two specified source matrices (e.g., tiles) and separately multiply the converted odd elements from the specified source matrices (e.g., tiles), and then accumulate those products separately, one sum of the even products and one sum of the odd products, with previous contents of the element (M,N).

[00170] In certain embodiments, the disclosed TILEDPFP16PS instruction is to be executed by a processor that includes fetch circuitry to fetch an instruction having fields to specify an opcode that indicates execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix.

[00171] In certain embodiments, the processor will also include other supporting hardware, such as decode circuitry to decode the fetched instruction, and execution circuitry to respond to the decoded instruction as specified by the opcode, e.g., execution circuitry that causes a matrix operations accelerator (e.g., matrix operations accelerator 307 in Figure 3) to perform one or more (e.g., all) of the actions of the TILEDPFP16PS instruction.

[00172] Figure 21A is a block diagram illustrating use of a TILEDPFP16PS instruction to accelerate matrix multiplication, according to some embodiments. As shown, instruction 2101 includes fields to specify an opcode 2102 (e.g., TILEDPFP16PS) and locations of an M by N destination matrix (e.g., tile) 2104 having single-precision elements, an M by K first source matrix (e.g., tile) 2106, and a K by N second source matrix (e.g., tile) 2108, the specified source matrices having elements that each comprise a pair of 16-bit (e.g., IEEE half-precision) floating-point values. A format of the TILEDPFP16PS instruction, according to some embodiments, is further illustrated and described at least with respect to Figures 24, 25A-B, and 26A-D.
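The even/odd pair semantics described in [00169]-[00170] can be captured by the following scalar C model. It is a sketch under the assumption of row-major storage, with illustrative type and function names; a bit-level sketch of the assumed fp16_to_fp32() conversion helper appears after [00176] below:

    #include <stdint.h>

    /* Each source element packs an even/odd pair of 16-bit floating-point
     * values; C is M x N fp32, A is M x K pairs, B is K x N pairs. */
    typedef struct { uint16_t even, odd; } fp16_pair;

    float fp16_to_fp32(uint16_t h);  /* assumed half-to-single conversion */

    void tiledpfp16ps_model(float *c, const fp16_pair *a, const fp16_pair *b,
                            int M, int K, int N)
    {
        for (int m = 0; m < M; m++) {
            for (int n = 0; n < N; n++) {
                float even_sum = 0.0f, odd_sum = 0.0f;
                for (int k = 0; k < K; k++) {
                    const fp16_pair *pa = &a[m * K + k];
                    const fp16_pair *pb = &b[k * N + n];
                    even_sum += fp16_to_fp32(pa->even) * fp16_to_fp32(pb->even);
                    odd_sum  += fp16_to_fp32(pa->odd)  * fp16_to_fp32(pb->odd);
                }
                /* both sums accumulate with the previous contents of C(m,n) */
                c[m * N + n] += even_sum + odd_sum;
            }
        }
    }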
[00173] Here, the specified first source matrix (e.g., tile) 2112A has dimensions of M=4 by K=3. The specified second source matrix (e.g., tile) 2112B has dimensions of K=3 by N=5. K, M, and N are shown as having different values for illustrative purposes, but in other embodiments, they can be equal.

[00174] In one embodiment of operation, processor 2100 is to respond to opcode 2102 (TILEDPFP16PS) by, for each element (M,N) of the specified destination matrix (e.g., tile) 2122, converting, using convert circuit 2116A, K pairs of elements from row M of the specified first source matrix (e.g., tile) 2112A, and, using convert circuit 2116B, K pairs of elements from column N of the specified second source matrix (e.g., tile) 2112B to single-precision, e.g., binary32 single-precision floating-point as specified by IEEE 754. Processor 2100 is then to multiply, using multiply circuit 2118, the K converted even values together and the K converted odd values together, and accumulate, using accumulate circuit 2120, the K products with previous contents of the element (M,N).

[00175] Performance of the TILEDPFP16PS instruction is illustrated here for setting the destination element at matrix (e.g., tile) location (row, column) = (1, 0), i.e., C1,0. In Figure 21A, the “.0” refers to the first value of a pair and the “.1” refers to the second value of the pair, e.g., such that A1,0.0 is the first value of a pair of values stored in element A1,0. In certain embodiments, processor 2100 is to convert, using convert circuits 2116A and 2116B, K (=3) pairs of elements from row M (=1) of the specified first source matrix (e.g., tile) 2112A and K (=3) pairs of elements from column N (=0) of the specified second source matrix (e.g., tile) 2112B to single-precision. In certain embodiments, processor 2100 (for example, matrix operations circuitry, e.g., as part of a matrix operations accelerator) is then to use multiply circuit 2118 to multiply the converted even elements from the two specified source matrices (e.g., tiles) and separately multiply the converted odd elements from the specified source matrices (e.g., tiles), and then use accumulate circuit 2120 to accumulate those products separately, one sum of the even products and one sum of the odd products, with previous contents of the element (M,N), e.g., shown in the example here as the FP32 value from element C(1,0).

[00176] In one embodiment, each of the convert circuits 2116A and 2116B first converts the half-precision values (e.g., IEEE float16 format) to a float19 format having a sign field (one bit wide), an exponent field (eight bits wide), and a mantissa (significand precision) field (eleven bits implicitly stored, i.e., ten bits wide explicitly stored), and then from float19 format (e.g., as an intermediate format) to full-precision format (e.g., IEEE float32 format) (e.g., by appending zeros in the thirteen least significant bits (LSBs) of the mantissa (significand precision) value of the full-precision value to the mantissa (significand precision) value of the float19 value).
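For reference, a bit-level sketch of the half- to single-precision conversion is given below. It widens the exponent from 5 to 8 bits (rebiasing from 15 to 127) and appends thirteen zero bits to the mantissa, consistent with the widening described in [00176]; subnormal inputs are flushed to zero here for brevity, which is an assumption of this sketch rather than behavior this description requires:

    #include <stdint.h>
    #include <string.h>

    /* IEEE float16 -> float32 for zeros, normals, infinities, and NaNs. */
    float fp16_to_fp32(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
        uint32_t exp  = (h >> 10) & 0x1Fu;
        uint32_t man  = h & 0x3FFu;
        uint32_t bits;

        if (exp == 0x1Fu)
            bits = sign | 0x7F800000u | (man << 13);   /* infinity or NaN */
        else if (exp == 0)
            bits = sign;           /* zero; subnormals flushed (DAZ-like) */
        else
            bits = sign | ((exp + (127u - 15u)) << 23) | (man << 13);

        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }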
[00177] As shown, three arrows travel from each of the specified first and second source matrices (e.g., tiles), to indicate that the conversions and multiplications occur in parallel. In some embodiments, the processor responds to the decoded instruction by generating and storing results into every element of the specified destination matrix (e.g., tile) in parallel. In some embodiments, new values are generated and stored into the destination on a row-at-a-time or a column-at-a-time basis.

[00178] Disclosed embodiments improve upon alternative approaches by allowing software to perform a TILEDPFP16PS instruction with reduced source element sizes, which allows less memory space and less memory bandwidth to be used and improves the peak compute performance (and energy efficiency) of certain applications. Some applications, such as deep learning, can operate on input data elements with relatively few bits without losing accuracy, as long as the output elements have enough bits (e.g., more than the inputs).

[00179] Figure 21B is a block diagram illustrating example execution circuitry 2114 (e.g., matrix operations circuitry) to execute a TILEDPFP16PS (TDPFP16PS) instruction, according to some embodiments. Example execution circuitry 2114 includes a data path of a first width (for example, 16 bits wide, e.g., according to a half-precision format) and a second, wider data path (for example, 32 bits wide, e.g., according to a full-precision format), for example, with the black lines being 16 bits wide (e.g., float16) and the gray lines being 32 bits wide (e.g., float32), for example, with convert (CVT) half-precision (FP16) to full-precision (FP32) circuits and full-precision (FP32) fused multiply and accumulate (FMA) circuits, for example, using the FMA circuits in Figure 6. Systems and methods to execute a TILEDPFP16PS instruction are further illustrated and described at least with respect to Figures 22A-B, 23, and 28A-B.

[00180] In certain embodiments, a TILEDPFP16PS instruction is part of a tile (e.g., AMX) architecture extension to an ISA that includes two-dimensional (2D) registers (for example, with each tile register identified as a single “tile register” (e.g., a single pointer to a single tile register), e.g., in contrast to vector (e.g., ZMM, YMM, or XMM) registers), and the ISA may include separate instructions to load/store 2D blocks from memory (e.g., strided sets of contiguous locations), instructions to perform matrix-matrix multiplication on three registers (e.g., matrix C updated = matrix A x matrix B + matrix C previous), and/or instructions to perform elementwise arithmetic operations on two (or three) source tiles. In one embodiment, a source matrix or source matrices are first loaded (e.g., via a host processor) into a cache (e.g., a first level (L1) data cache) and are then loaded (e.g., via execution of a tile load instruction) into a tile register (e.g., of a matrix operations accelerator) from the cache, e.g., via coherent memory interface 303 in Figure 3.
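One plausible host-side usage flow for such an ISA, mirroring the configure/load/operate/store/release sequence of Figure 17, is sketched below using the AMX-style compiler intrinsics (_tile_loadconfig, _tile_loadd, _tile_dpfp16ps, _tile_stored, _tile_release). Treating these intrinsics as the interface to the instructions described here is an assumption, and cfg is presumed to be a prepared 64-byte tile configuration of the kind described above:

    #include <immintrin.h>  /* built with, e.g., -mamx-tile -mamx-fp16 */

    void tile_gemm_fp16_step(const void *cfg, float *C, long ldc,
                             const void *A, long lda, const void *B, long ldb)
    {
        _tile_loadconfig(cfg);    /* TILECONFIG: rows/columns per tile */
        _tile_loadd(1, A, lda);   /* strided load of the fp16-pair A tile */
        _tile_loadd(2, B, ldb);   /* strided load of the fp16-pair B tile */
        _tile_loadd(0, C, ldc);   /* fp32 accumulator tile */
        _tile_dpfp16ps(0, 1, 2);  /* C += even/odd dot products of A and B */
        _tile_stored(0, C, ldc);  /* write the fp32 results back to memory */
        _tile_release();          /* TILERELEASEALL: leave the tile region */
    }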
EXEMPLARY EXECUTION[00181] Figure 22A is pseudocode illustrating exemplary execution of a TILEDPFP16PS instruction according to some embodiments. As shown, instruction 2201 includes an opcode 2202 (e.g. TILEDPFP16PS) and (e.g., tile) locations of a M by N destination matrix 2204 having single-precision elements, a M by K first source matrix 2206, and a K by N second source matrix 2208, the specified source matrices having elements comprising a pair of 16-bit floating-point values. Opcode 2202 (TILEDPFP16PS) indicates that the processor is to, as shown in pseudocode 2200, for each element (M,N) of the specified destination matrix (e.g., tile), convert K pairs of elements from row M of the specified first source matrix (e.g., tile) and K pairs of elements from column N of the specified second source matrix (e.g., tile) to single-precision, multiply the converted even elements from the two specified source matrices (e.g., tiles) and separately multiply the converted odd elements from the two specified source matrices (e.g., tiles), and then accumulate those products separately, one sum of the even products and one sum of the odd products, with previous contents of the element (M,N). In other embodiments, the multiplications take place before the conversions.[00182] In one embodiment, an architectural machine specific register (MSR) (e.g., as one of registers 1315 in Figure 13) (e.g., MXCSR register storing control and/or status information for an SSE register) is read (e.g., as part of the execution of an instruction), e.g., to determine exception information. DAZ may refer to a “denormals-are-zero” control (e.g., in an MSR)). In certain embodiments, half-precision (e.g., FP16) values are able to be processed having denormal/subnormal values.[00183] In one embodiment, an architectural machine specific register (MSR) MXCSR register (e.g., MXCSR register storing control and/or status information for an SSE register) is not read (e.g., is not inspected and/or is not updated) (e.g., as part of the execution of an instruction). In certain embodiments, exception information for an instruction is implicit in the instruction, for example, DAZ=1 being implied for a bfloatl6 operation (e.g., without consulting MXCSR) and/or DAZ=0 being implied for a FP16 operation (e.g., without consulting MXCSR) (e.g., for a TILEDPFP16PS instruction).[00184] In operation, M, K, and N may be specified in one or more of several ways: as operands to the TILEDPFP16PS instruction (e.g., shown as tile registers “t” here), as suffixes or prefixes to the specified opcode (an asterisk is used herein as a shorthand to refer to those optional suffixes and prefixes), as part of an immediate provided with the instruction (e.g., K, M, and N each to be specified as a difference. g., 8) bits of an (e.g., 32-bit) immediate), as part of control registers programmed by software (e.g., XTILECONFIG is a register loaded by either a matrix
accelerator configuration instruction, such as TILECFG, or an XRSTORE* instruction, and is stored by a matrix save instruction, such as XSAVE*), or even as architectural default values. [00185] Instruction 2201 further specifies destination matrix (e.g., tile) location 2204, first source matrix (e.g., tile) location 2206, and second source matrix (e.g., tile) location 2208. Each specified matrix (e.g., tile) location can point to any of a memory location, a collection of vector registers, and a collection of tile registers.[00186] Figure 22B is pseudocode 2220 for exemplary helper functions for use with a TILEDPFP16PS instruction, according to some embodiments. As shown, pseudocode 2220 defines a make_fp32() function, a write_row_and_zero() function, a zero_upper_rows() function, and a zero_tileconfig_start() function, all of which may be used by the TILEDPFP16PS pseudocode of Figure 22A.[00187] Execution of a TILEDPFP16PS instruction is further illustrated and described with respect to Figures 21, 22A-B, 23, 28A-B, and 29A-B. Example formats of TILEDPFP16PS instructions are further illustrated and described with respect to Figures 24-26D.EXEMPLARY METHOD(S) OF EXECUTION[00188] Figure 23 is a block flow diagram illustrating a processor responding to a TILEDPFP16PS instruction. As shown in flow diagram 2300, at 2301, the processor is to fetch, using fetch circuitry, an instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the specified source matrices having elements comprising a pair of 16-bit half-precision floating-point values.[00189] In embodiments that use a processor's physical register file (for example, or use one or more two-dimensional (2D) (e.g., AMX) tile registers, e.g., tile registers formed from data buffers 305 in Figure 3, which in certain embodiments are separate from any scalar and/or vector (e.g., one-dimensional array) registers) to store matrices (e.g., tiles), since the destination elements are twice as wide as the source elements, having a pair of 16-bit half-precision floating-point format values in the source allows efficient use, e.g., when matrices (e.g., tiles) are a collection of vector registers, of the same type of vector register, be it a 128-bit xmm register, a 256-bit ymm register, or a 512-bit zmm register. Such efficient use can also be achieved when the matrices are stored in (e.g., AMX) tile registers. In other embodiments, a single source vector having 16-bit floating-point elements is converted into 32-bit elements stored in a destination vector having half the width of the source vector.[00190] In certain embodiments, the specified opcode is to indicate that execution circuitry is to, for each element (M,N) of the specified destination matrix, convert K pairs of elements from row
M of the specified first source matrix and K pairs of elements from column N of the specified second source matrix to single-precision, multiply the converted even elements from the two specified source matrices (e.g., tiles) and separately multiply the converted odd elements from the two specified source matrices (e.g., tiles), and then accumulate those products separately, as one sum of the even products and one sum of the odd products, with previous contents of the element (M,N).[00191] In certain embodiments, the specified opcode is to indicate that execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of the converted single-precision values from the first values of the pairs together to generate a first result, a multiplication of the converted single-precision values from the second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix.[00192] At 2303, the processor is to decode, using decode circuitry, the fetched instruction. For example, the fetched TILEDPFP16PS instruction is decoded by decode circuitry such as that detailed herein. In the context of the illustrated system, decode circuitry may be that illustrated and described at least with respect to Figures 13, 14, and 28A-B.[00193] At 2305, execution of the decoded instruction is scheduled (as needed), which is optional (as indicated by its dashed border) insofar as it may occur at a different time, or not at all. At 2307, the processor is to respond, using execution circuitry, to the decoded instruction as specified by the opcode.[00194] In some embodiments, the instruction is committed or retired at 2309, which is optional (as indicated by its dashed border) insofar as it may occur at a different time, or not at all.[00195] Example execution circuitry is further illustrated and described with respect to Figures 3-14. In some embodiments, execution circuitry causes execution by (e.g., offload to) a matrix operations accelerator, such as that illustrated and described as accelerator 307 (Figure 3). In some embodiments, execution circuitry is a matrix operations circuit, such as matrix operations circuitry 405 (Figure 4), 505 (Figure 5), 1213 (Figure 12), or 1327 (Figure 13).EXEMPLARY INSTRUCTION FORMAT(S)[00196] Figure 24 is a block diagram illustrating a format of a TILEDPFP16PS instruction, according to some embodiments. As shown, TILEDPFP16PS instruction 2400 includes fields to specify an opcode 2402 (TILEDPFP16PS*), which indicates that the processor is to, for each element (M,N) of the specified destination matrix, convert K pairs of elements from row M of the specified first source matrix and K pairs of elements from column N of the specified second
source matrix to single-precision, multiply the converted even elements from the two specified source matrices (e.g., tiles) and separately multiply the converted odd elements from the two specified source matrices (e.g., tiles), and then accumulate those products separately, as one sum of the even products and one sum of the odd products, with previous contents of the element (M,N).[00197] Instruction 2400 further includes destination matrix (e.g., tile) location 2404, first source matrix (e.g., tile) location 2406, and second source matrix (e.g., tile) location 2408. Each of the specified source and destination matrix locations can be in any of a memory location, a collection of vector registers, and a collection of (e.g., AMX) tile registers.[00198] TILEDPFP16PS instruction 2400 further includes several optional parameters to control the processor's behavior, including source element format 2410, k (mask control) and/or z (zeroing control) 2412, M 2414, and N 2416. In some embodiments, N and M are each any one of 4, 8, 16, and 32 (e.g., any number for either M or N, which could be 32, 64, or larger). In some embodiments, N and M are each an integer larger than or equal to 4.[00199] Opcode 2402 is shown including an asterisk, which is to convey that additional prefixes and/or suffixes may be added to specify instruction behavior. One or more of instruction modifiers 2410, 2412, 2414, and 2416 may be specified using prefixes or suffixes to opcode 2402, e.g., a prefix and/or suffix that indicates an instruction is to be executed with a matrix operations accelerator (e.g., including an FMA grid).[00200] In some embodiments, one or more of optional instruction modifiers 2410, 2412, 2414, and 2416 are encoded in an immediate field (not shown) optionally included with the instruction 2400. In some embodiments, one or more of optional instruction modifiers 2410, 2412, 2414, and 2416 are specified via a configuration/status register (e.g., XTILECONFIG).[00201] In some embodiments, instruction modifier 2412 includes a mask {k} (e.g., writemask) and/or a zeroing control {z}, e.g., with mask {k} to control which destination elements are to be updated and/or zeroing control {z} to control whether to apply zeroing (or merging) to masked destination elements.[00202] When any one or more of optional modifiers 2410, 2412, 2414, or 2416 are not specified by the instruction, they may use default values or implicit parameters, e.g., that are inherited from other parts of the tile architecture.DETAILED EXEMPLARY SYSTEMS, PROCESSORS, AND EMULATION[00203] Detailed herein are examples of hardware, software, etc. to execute the above described instructions. For example, what is described below details aspects of instruction execution including various pipeline stages such as fetch, decode, schedule, execute, retire, etc.
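As a toy illustration of how an instruction steps through those stages, the following self-contained C sketch models fetch (cf. 2301 in Figure 23), decode (cf. 2303), execute (cf. 2307), and retirement over a made-up four-byte encoding; it is purely pedagogical and bears no relation to any real ISA encoding.

    #include <stdio.h>
    #include <stdint.h>

    /* Toy instruction stream: each "instruction" is one opcode byte and
     * three byte-sized operand fields (dst, src1, src2). Illustrative only. */
    enum { OP_HALT = 0, OP_ADD = 1 };

    typedef struct { uint8_t op, dst, src1, src2; } insn_t;

    int main(void)
    {
        const uint8_t code[] = { OP_ADD, 2, 0, 1, OP_HALT, 0, 0, 0 };
        int regs[4] = { 40, 2, 0, 0 };
        size_t pc = 0;

        for (;;) {
            /* Fetch: read the next encoded instruction. */
            insn_t f = { code[pc], code[pc+1], code[pc+2], code[pc+3] };
            pc += 4;

            /* Decode: the fields of this toy format decode trivially. */
            if (f.op == OP_HALT) break;

            /* Execute: respond as specified by the opcode. */
            if (f.op == OP_ADD)
                regs[f.dst] = regs[f.src1] + regs[f.src2];

            /* Commit/retire is implicit in this in-order toy model. */
        }
        printf("r2 = %d\n", regs[2]);  /* prints r2 = 42 */
        return 0;
    }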
INSTRUCTION SETS[00204] An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or subformats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).EXEMPLARY INSTRUCTION FORMATS[00205] Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.[00206] Generic Vector Friendly Instruction Format[00207] A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.[00208] Figures 25A-25B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments. Figure 25A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates
thereof according to embodiments; while Figure 25B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments. Specifically, a generic vector friendly instruction format 2500 is shown for which are defined class A and class B instruction templates, both of which include no memory access 2505 instruction templates and memory access 2520 instruction templates. The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.[00209] While embodiments will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).[00210] The class A instruction templates in Figure 25A include: 1) within the no memory access 2505 instruction templates there is shown a no memory access, full round control type operation 2510 instruction template and a no memory access, data transform type operation 2515 instruction template; and 2) within the memory access 2520 instruction templates there is shown a memory access, temporal 2525 instruction template and a memory access, non-temporal 2530 instruction template. The class B instruction templates in Figure 25B include: 1) within the no memory access 2505 instruction templates there is shown a no memory access, write mask control, partial round control type operation 2512 instruction template and a no memory access, write mask control, vsize type operation 2517 instruction template; and 2) within the memory access 2520 instruction templates there is shown a memory access, write mask control 2527 instruction template.[00211] The generic vector friendly instruction format 2500 includes the following fields listed below in the order illustrated in Figures 25A-25B.[00212] Format field 2540 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the
sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.[00213] Base operation field 2542 - its content distinguishes different base operations.[00214] Register index field 2544 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file. While one embodiment supports up to three sources and one destination register, alternative embodiments may support more or fewer sources and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).[00215] Modifier field 2546 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 2505 instruction templates and memory access 2520 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.[00216] Augmentation operation field 2550 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment, this field is divided into a class field 2568, an alpha field 2552, and a beta field 2554. The augmentation operation field 2550 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.[00217] Scale field 2560 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).[00218] Displacement Field 2562A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).[00219] Displacement Factor Field 2562B (note that the juxtaposition of displacement field 2562A directly over displacement factor field 2562B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N) - where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the
memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 2574 (described later herein) and the data manipulation field 2554C. The displacement field 2562A and the displacement factor field 2562B are optional in the sense that they are not used for the no memory access 2505 instruction templates and/or different embodiments may implement only one or none of the two.[00220] Data element width field 2564 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.[00221] Write mask field 2570 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 2570 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments are described in which the write mask field's 2570 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 2570 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 2570 content to directly specify the masking to be performed.[00222] Immediate field 2572 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.
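Referring back to the write mask field 2570 of [00221], the merging- versus zeroing-writemasking distinction can be made concrete with a minimal C model that applies an 8-bit mask to an 8-element destination; this is an illustrative model of the semantics, not of the hardware.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Apply a per-element write mask to a vector result.
     * merging: masked-off destination elements keep their old value.
     * zeroing: masked-off destination elements are set to zero. */
    static void apply_writemask(float dst[8], const float result[8],
                                uint8_t mask, bool zeroing)
    {
        for (int i = 0; i < 8; i++) {
            if (mask & (1u << i))
                dst[i] = result[i];      /* element position is updated */
            else if (zeroing)
                dst[i] = 0.0f;           /* zeroing-writemasking */
            /* else: merging-writemasking preserves the old value */
        }
    }

    int main(void)
    {
        float dst[8]    = {1, 1, 1, 1, 1, 1, 1, 1};
        float result[8] = {9, 9, 9, 9, 9, 9, 9, 9};
        apply_writemask(dst, result, 0x0F, /*zeroing=*/true);
        for (int i = 0; i < 8; i++)
            printf("%g ", dst[i]);       /* prints: 9 9 9 9 0 0 0 0 */
        printf("\n");
        return 0;
    }

With zeroing=false, the same call would instead print 9 9 9 9 1 1 1 1, the masked-off elements retaining their prior contents.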
[00223] Class field 2568 - its content distinguishes between different classes of instructions. With reference to Figures 25A-B, the contents of this field select between class A and class B instructions. In Figures 25A-B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 2568A and class B 2568B for the class field 2568 respectively in Figures 25A-B).INSTRUCTION TEMPLATES OF CLASS A[00224] In the case of the non-memory access 2505 instruction templates of class A, the alpha field 2552 is interpreted as an RS field 2552A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2552A.1 and data transform 2552A.2 are respectively specified for the no memory access, round type operation 2510 and the no memory access, data transform type operation 2515 instruction templates), while the beta field 2554 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement scale field 2562B are not present.NO-MEMORY ACCESS INSTRUCTION TEMPLATES - FULL ROUND CONTROL TYPE OPERATION[00225] In the no memory access full round control type operation 2510 instruction template, the beta field 2554 is interpreted as a round control field 2554A, whose content(s) provide static rounding. While in the described embodiments the round control field 2554A includes a suppress all floating-point exceptions (SAE) field 2556 and a round operation control field 2558, alternative embodiments may encode both of these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 2558).[00226] SAE field 2556 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 2556 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.[00227] Round operation control field 2558 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 2558 allows for the changing of the rounding mode on a per instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2558 content overrides that register value.
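The four rounding operations named above have direct counterparts in the C99 floating-point environment, which the following sketch uses to demonstrate per-operation rounding control in software; the 2-bit encoding assumed here (0..3) is an assumption of this sketch, not a statement of the encoding of field 2558.

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    /* Illustrative mapping of a 2-bit round-control value to C99 modes. */
    static int round_mode_for(unsigned rc)
    {
        switch (rc & 3u) {
        case 0:  return FE_TONEAREST;   /* Round-to-nearest */
        case 1:  return FE_DOWNWARD;    /* Round-down */
        case 2:  return FE_UPWARD;      /* Round-up */
        default: return FE_TOWARDZERO;  /* Round-towards-zero */
        }
    }

    int main(void)
    {
        volatile float x = 1.0f, y = 3.0f;  /* volatile: force runtime divide */
        for (unsigned rc = 0; rc < 4; rc++) {
            fesetround(round_mode_for(rc));
            printf("rc=%u -> x/y = %.10f\n", rc, (double)(x / y));
        }
        fesetround(FE_TONEAREST);  /* restore the default mode */
        return 0;
    }

Running this shows the low-order bits of 1/3 differing between the round-down/round-towards-zero modes and the others, which is the per-operation effect that static rounding encodes directly in an instruction.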
NO-MEMORY ACCESS INSTRUCTION TEMPLATES - DATA TRANSFORM TYPE OPERATION[00228] In the no memory access data transform type operation 2515 instruction template, the beta field 2554 is interpreted as a data transform field 2554B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).[00229] In the case of a memory access 2520 instruction template of class A, the alpha field 2552 is interpreted as an eviction hint field 2552B, whose content distinguishes which one of the eviction hints is to be used (in Figure 25A, temporal 2552B.1 and non-temporal 2552B.2 are respectively specified for the memory access, temporal 2525 instruction template and the memory access, non-temporal 2530 instruction template), while the beta field 2554 is interpreted as a data manipulation field 2554C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 2520 instruction templates include the scale field 2560, and optionally the displacement field 2562A or the displacement scale field 2562B.[00230] Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.MEMORY ACCESS INSTRUCTION TEMPLATES - TEMPORAL[00231] Temporal data is data likely to be reused soon enough to benefit from caching. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.MEMORY ACCESS INSTRUCTION TEMPLATES - NON-TEMPORAL[00232] Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.INSTRUCTION TEMPLATES OF CLASS B[00233] In the case of the instruction templates of class B, the alpha field 2552 is interpreted as a write mask control (Z) field 2552C, whose content distinguishes whether the write masking controlled by the write mask field 2570 should be a merging or a zeroing.
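Returning briefly to the non-temporal hint of [00232]: one way software expresses such a hint on existing hardware is a streaming store, sketched below with the AVX intrinsic _mm256_stream_ps; as noted above, a given processor may implement the hint differently or ignore it entirely (compile with AVX enabled, e.g., -mavx).

    #include <immintrin.h>
    #include <stdio.h>

    /* Write 8 floats with a non-temporal (streaming) store hint, suggesting
     * the data will not be reused soon and need not pollute the caches. */
    static void store_non_temporal(float *dst /* 32-byte aligned */,
                                   const float *src)
    {
        __m256 v = _mm256_loadu_ps(src);
        _mm256_stream_ps(dst, v);  /* hint only; may be ignored by hardware */
        _mm_sfence();              /* order streaming stores before later use */
    }

    int main(void)
    {
        /* _mm256_stream_ps requires a 32-byte-aligned destination. */
        _Alignas(32) float dst[8];
        const float src[8] = {0, 1, 2, 3, 4, 5, 6, 7};
        store_non_temporal(dst, src);
        printf("%g %g\n", dst[0], dst[7]);  /* prints: 0 7 */
        return 0;
    }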
[00234] In the case of the non-memory access 2505 instruction templates of class B, part of the beta field 2554 is interpreted as an RL field 2557A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 2557A.1 and vector length (VSIZE) 2557A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 2512 instruction template and the no memory access, write mask control, VSIZE type operation 2517 instruction template), while the rest of the beta field 2554 distinguishes which of the operations of the specified type is to be performed. In the no memory access 2505 instruction templates, the scale field 2560, the displacement field 2562A, and the displacement scale field 2562B are not present.[00235] In the no memory access, write mask control, partial round control type operation 2512 instruction template, the rest of the beta field 2554 is interpreted as a round operation field 2559A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).[00236] Round operation control field 2559A - just as with round operation control field 2558, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 2559A allows for the changing of the rounding mode on a per instruction basis. In one embodiment where a processor includes a control register for specifying rounding modes, the round operation control field's 2559A content overrides that register value.[00237] In the no memory access, write mask control, VSIZE type operation 2517 instruction template, the rest of the beta field 2554 is interpreted as a vector length field 2559B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 bits).[00238] In the case of a memory access 2520 instruction template of class B, part of the beta field 2554 is interpreted as a broadcast field 2557B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 2554 is interpreted as the vector length field 2559B. The memory access 2520 instruction templates include the scale field 2560, and optionally the displacement field 2562A or the displacement scale field 2562B.[00239] With regard to the generic vector friendly instruction format 2500, a full opcode field 2574 is shown including the format field 2540, the base operation field 2542, and the data element width field 2564. While one embodiment is shown where the full opcode field 2574 includes all of these fields, the full opcode field 2574 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 2574 provides the operation code (opcode).
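Pulling together the scale field 2560 and displacement fields 2562A/2562B described in [00217]-[00219], effective-address generation of the form 2^scale * index + base + displacement can be sketched as follows; the struct layout is illustrative, and the disp8*N scaling follows the description of the displacement factor field (detailed further in [00264] below).

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative encoding fields for memory address generation:
     * effective address = 2^scale * index + base + displacement,
     * where an 8-bit displacement factor is pre-scaled by the memory
     * operand size N (disp8*N). */
    typedef struct {
        uint64_t base;    /* base register value */
        uint64_t index;   /* index register value */
        unsigned scale;   /* 0..3, encoding multipliers 1, 2, 4, 8 */
        int8_t   disp8;   /* compressed (sign-extended) displacement factor */
    } mem_operand_t;

    static uint64_t effective_address(mem_operand_t m, unsigned n_bytes)
    {
        int64_t disp = (int64_t)m.disp8 * (int64_t)n_bytes;  /* disp8*N */
        return (m.index << m.scale) + m.base + (uint64_t)disp;
    }

    int main(void)
    {
        mem_operand_t m = { .base = 0x1000, .index = 3,
                            .scale = 2, .disp8 = -2 };
        /* 4*3 + 0x1000 + (-2 * 64) = 0x1000 + 12 - 128 = 0xF8C */
        printf("EA = 0x%llx\n",
               (unsigned long long)effective_address(m, 64));
        return 0;
    }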
[00240] The augmentation operation field 2550, the data element width field 2564, and the write mask field 2570 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.[00241] The combination of the write mask field and the data element width field create typed instructions in that they allow the mask to be applied based on different data element widths.[00242] The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the disclosure). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general-purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general-purpose cores may be high-performance general-purpose cores with out-of-order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments. Programs written in a high level language would be put (e.g., just-in-time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.EXEMPLARY SPECIFIC VECTOR FRIENDLY INSTRUCTION FORMAT[00243] Figure 26A is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments. Figure 26A shows a specific vector friendly instruction format 2600 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 2600 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and
extensions thereof (e.g., AVX). This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from Figure 25 into which the fields from Figure 26A map are illustrated.[00244] It should be understood that, although embodiments are described with reference to the specific vector friendly instruction format 2600 in the context of the generic vector friendly instruction format 2500 for illustrative purposes, the disclosure is not limited to the specific vector friendly instruction format 2600 except where claimed. For example, the generic vector friendly instruction format 2500 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 2600 is shown as having fields of specific sizes. By way of specific example, while the data element width field 2564 is illustrated as a one-bit field in the specific vector friendly instruction format 2600, the disclosure is not so limited (that is, the generic vector friendly instruction format 2500 contemplates other sizes of the data element width field 2564).[00245] The generic vector friendly instruction format 2500 includes the following fields listed below in the order illustrated in Figure 26A.[00246] EVEX Prefix 2602 (Bytes 0-3) - is encoded in a four-byte form.[00247] Format Field 2540 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 2540 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment).[00248] The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.[00249] REX field 2605 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), an EVEX.X bit field (EVEX Byte 1, bit [6] - X), and an EVEX.B bit field (EVEX Byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.[00250] REX' field 2610 - this is the first part of the REX' field 2610 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or the lower 16 of the extended 32 register set. In one embodiment, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments do not store this
and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.[00251] Opcode map field 2615 (EVEX Byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).[00252] Data element width field 2564 (EVEX Byte 2, bit [7] - W) - is represented by the notation EVEX.W. EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).[00253] EVEX.vvvv 2620 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 2620 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.[00254] EVEX.U 2568 Class field (EVEX Byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.[00255] Prefix encoding field 2625 (EVEX Byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.[00256] Alpha field 2552 (EVEX Byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.
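The byte-1 and byte-2 fields just described can be made concrete with a small extraction routine. The sketch below decodes only those fields (format byte 0x62; R/X/B and R'; mmmm; W; vvvv; U; pp), honoring the inverted (1s complement) storage noted above; it is an illustrative model rather than a complete EVEX decoder, and the byte-3 fields (alpha, beta, V', kkk) are omitted because their interpretation is context specific.

    #include <stdint.h>
    #include <stdio.h>

    /* Selected EVEX prefix fields extracted from its four bytes (sketch). */
    typedef struct {
        unsigned r, x, b, r_prime;  /* register-extension bits (inverted) */
        unsigned mmmm;              /* opcode map (implied leading bytes) */
        unsigned w;                 /* data element width */
        unsigned vvvv;              /* first source specifier (inverted) */
        unsigned u;                 /* class: 0 = class A, 1 = class B */
        unsigned pp;                /* compacted legacy SIMD prefix */
    } evex_fields_t;

    static int decode_evex(const uint8_t p[4], evex_fields_t *f)
    {
        if (p[0] != 0x62) return -1;      /* format field must be 0x62 */
        f->r       = (~p[1] >> 7) & 1;    /* EVEX.R  (bit 7, inverted) */
        f->x       = (~p[1] >> 6) & 1;    /* EVEX.X  (bit 6, inverted) */
        f->b       = (~p[1] >> 5) & 1;    /* EVEX.B  (bit 5, inverted) */
        f->r_prime = (~p[1] >> 4) & 1;    /* EVEX.R' (bit 4, inverted) */
        f->mmmm    = p[1] & 0x0Fu;        /* opcode map field */
        f->w       = (p[2] >> 7) & 1;     /* EVEX.W */
        f->vvvv    = (~p[2] >> 3) & 0x0Fu;/* EVEX.vvvv (1s complement) */
        f->u       = (p[2] >> 2) & 1;     /* EVEX.U class field */
        f->pp      = p[2] & 0x03u;        /* prefix encoding field */
        return 0;
    }

    int main(void)
    {
        const uint8_t prefix[4] = { 0x62, 0xF1, 0x7C, 0x48 }; /* example */
        evex_fields_t f;
        if (decode_evex(prefix, &f) == 0)
            printf("mmmm=%u W=%u vvvv=%u U=%u pp=%u\n",
                   f.mmmm, f.w, f.vvvv, f.u, f.pp);
        return 0;
    }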
[00257] Beta field 2554 (EVEX Byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.[00258] REX' field 2610 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or the lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.[00259] Write mask field 2570 (EVEX Byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).[00260] Real Opcode Field 2630 (Byte 4) is also known as the opcode byte. Part of the opcode is specified in this field.[00261] MOD R/M Field 2640 (Byte 5) includes MOD field 2642, Reg field 2644, and R/M field 2646. As previously described, the MOD field's 2642 content distinguishes between memory access and non-memory access operations. The role of Reg field 2644 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 2646 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.[00262] Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the content of SIB 2650 is used for memory address generation. SIB.xxx 2654 and SIB.bbb 2656 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.[00263] Displacement field 2562A (Bytes 7-10) - when MOD field 2642 contains 10, bytes 7-10 are the displacement field 2562A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.[00264] Displacement factor field 2562B (Byte 7) - when MOD field 2642 contains 01, byte 7 is the displacement factor field 2562B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and +127-byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to
disp8 and disp32, the displacement factor field 2562B is a reinterpretation of disp8; when using displacement factor field 2562B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement but with a much greater range). Such compressed displacement assumes that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 2562B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 2562B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). Immediate field 2572 operates as previously described.FULL OPCODE FIELD[00265] Figure 26B is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the full opcode field 2574 according to one embodiment. Specifically, the full opcode field 2574 includes the format field 2540, the base operation field 2542, and the data element width (W) field 2564. The base operation field 2542 includes the prefix encoding field 2625, the opcode map field 2615, and the real opcode field 2630.[00266] Register Index Field[00267] Figure 26C is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the register index field 2544 according to one embodiment. Specifically, the register index field 2544 includes the REX 2605 field, the REX' 2610 field, the MODR/M.reg field 2644, the MODR/M.r/m field 2646, the VVVV field 2620, the xxx field 2654, and the bbb field 2656.AUGMENTATION OPERATION FIELD[00268] Figure 26D is a block diagram illustrating the fields of the specific vector friendly instruction format 2600 that make up the augmentation operation field 2550 according to one embodiment. When the class (U) field 2568 contains 0, it signifies EVEX.U0 (class A 2568A); when it contains 1, it signifies EVEX.U1 (class B 2568B). When U=0 and the MOD field 2642 contains 11 (signifying a no memory access operation), the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the rs field 2552A. When the rs field 2552A contains a 1 (round 2552A.1), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the round control
field 2554A. The round control field 2554A includes a one-bit SAE field 2556 and a two-bit round operation field 2558. When the rs field 2552A contains a 0 (data transform 2552A.2), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data transform field 2554B. When U=0 and the MOD field 2642 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 2552B and the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as a three-bit data manipulation field 2554C.[00269] When U=1, the alpha field 2552 (EVEX Byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 2552C. When U=1 and the MOD field 2642 contains 11 (signifying a no memory access operation), part of the beta field 2554 (EVEX Byte 3, bit [4] - S0) is interpreted as the RL field 2557A; when it contains a 1 (round 2557A.1) the rest of the beta field 2554 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 2559A, while when the RL field 2557A contains a 0 (VSIZE 2557A.2) the rest of the beta field 2554 (EVEX Byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 2559B (EVEX Byte 3, bits [6-5] - L1-0). When U=1 and the MOD field 2642 contains 00, 01, or 10 (signifying a memory access operation), the beta field 2554 (EVEX Byte 3, bits [6:4] - SSS) is interpreted as the vector length field 2559B (EVEX Byte 3, bits [6-5] - L1-0) and the broadcast field 2557B (EVEX Byte 3, bit [4] - B).EXEMPLARY REGISTER ARCHITECTURE[00270] Figure 27 is a block diagram of a register architecture 2700 according to one embodiment. In the embodiment illustrated, there are 32 vector registers 2710 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 2600 operates on these overlaid register files as illustrated in the tables below.[00271] In other words, the vector length field 2559B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 2559B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 2600 operate on packed or scalar single/double-precision floating-point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.[00272] Write mask registers 2715 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 2715 are 16 bits in size.
As previously described, in one embodiment, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.[00273] General-purpose registers 2725 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.[00274] Scalar floating-point stack register file (x87 stack) 2745, on which is aliased the MMX packed integer flat register file 2750 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating-point data using the x87 instruction set extension; while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.[00275] Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.[00276] Exemplary Core Architectures, Processors, and Computer Architectures[00277] Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high-performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core
intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.EXEMPLARY CORE ARCHITECTURES
IN-ORDER AND OUT-OF-ORDER CORE BLOCK DIAGRAM[00278] Figure 28A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments. Figure 28B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments. The solid lined boxes in Figures 28A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.[00279] In Figure 28A, a processor pipeline 2800 includes a fetch stage 2802, a length-decode stage 2804, a decode stage 2806, an allocation stage 2808, a renaming stage 2810, a scheduling (also known as a dispatch or issue) stage 2812, a register read/memory read stage 2814, an execute stage 2816, a write back/memory write stage 2818, an exception handling stage 2822, and a commit stage 2824.[00280] Figure 28B shows processor core 2890 including a front-end unit 2830 coupled to an execution engine unit 2850, and both are coupled to a memory unit 2870. The core 2890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet
another option, the core 2890 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.[00281] The front-end unit 2830 includes a branch prediction unit 2832 coupled to an instruction cache unit 2834, which is coupled to an instruction translation lookaside buffer (TLB) 2836, which is coupled to an instruction fetch unit 2838, which is coupled to a decode unit 2840. The decode unit 2840 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 2840 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 2890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 2840 or otherwise within the front-end unit 2830). The decode unit 2840 is coupled to a rename/allocator unit 2852 in the execution engine unit 2850.[00282] The execution engine unit 2850 includes the rename/allocator unit 2852 coupled to a retirement unit 2854 and a set of one or more scheduler unit(s) 2856. The scheduler unit(s) 2856 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 2856 is coupled to the physical register file(s) unit(s) 2858. Each of the physical register file(s) units 2858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating-point, packed integer, packed floating-point, vector integer, vector floating-point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 2858 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file(s) unit(s) 2858 is overlapped by the retirement unit 2854 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 2854 and the physical register file(s) unit(s) 2858 are coupled to the execution cluster(s) 2860. The execution cluster(s) 2860 includes a set of one or more execution units 2862 and a set of one or more memory access units 2864. The execution units 2862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar
floating-point, packed integer, packed floating-point, vector integer, vector floating-point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 2856, physical register file(s) unit(s) 2858, and execution cluster(s) 2860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating-point/packed integer/packed floating-point/vector integer/vector floating-point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 2864). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.[00283] The set of memory access units 2864 is coupled to the memory unit 2870, which includes a data TLB unit 2872 coupled to a data cache unit 2874 coupled to a level 2 (L2) cache unit 2876. In one exemplary embodiment, the memory access units 2864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 2872 in the memory unit 2870. The instruction cache unit 2834 is further coupled to the level 2 (L2) cache unit 2876 in the memory unit 2870. The L2 cache unit 2876 is coupled to one or more other levels of cache and eventually to a main memory.[00284] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 2800 as follows: 1) the instruction fetch unit 2838 performs the fetch and length decoding stages 2802 and 2804; 2) the decode unit 2840 performs the decode stage 2806; 3) the rename/allocator unit 2852 performs the allocation stage 2808 and renaming stage 2810; 4) the scheduler unit(s) 2856 performs the schedule stage 2812; 5) the physical register file(s) unit(s) 2858 and the memory unit 2870 perform the register read/memory read stage 2814, and the execution cluster 2860 performs the execute stage 2816; 6) the memory unit 2870 and the physical register file(s) unit(s) 2858 perform the write back/memory write stage 2818; 7) various units may be involved in the exception handling stage 2822; and 8) the retirement unit 2854 and the physical register file(s) unit(s) 2858 perform the commit stage 2824.[00285] The core 2890 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 2890 includes logic to support a packed data
instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

[00286] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

[00287] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 2834/2874 and a shared L2 cache unit 2876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

SPECIFIC EXEMPLARY IN-ORDER CORE ARCHITECTURE

[00288] Figures 29A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

[00289] Figure 29A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 2902 and with its local subset of the Level 2 (L2) cache 2904, according to embodiments. In one embodiment, an instruction decoder 2900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 2906 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 2908 and a vector unit 2910 use separate register sets (respectively, scalar registers 2912 and vector registers 2914) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 2906, alternative embodiments may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).
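For illustration only, the following C sketch shows the kind of look-up-table decoding that paragraph [00281] above lists as one possible mechanism for the decode unit 2840. The opcode values and micro-op names are invented for this example and are not part of any real instruction set; a hardware decoder would realize such a table in logic rather than software.

```c
/* Minimal sketch of a look-up-table decoder, one of the mechanisms
 * paragraph [00281] mentions for decode unit 2840. Opcodes and
 * micro-op names below are hypothetical. */
#include <stdio.h>

typedef struct {
    const char *name;      /* mnemonic of the macroinstruction      */
    const char *uops[3];   /* up to three micro-ops it expands into */
} DecodeEntry;

/* Table indexed directly by an 8-bit opcode; unused slots stay zeroed. */
static const DecodeEntry decode_table[256] = {
    [0x01] = { "ADD r,m", { "load_tmp", "add_tmp", NULL } },
    [0x02] = { "INC r",   { "add_imm1", NULL,      NULL } },
    [0x03] = { "PUSH r",  { "sub_sp",   "store_r", NULL } },
};

static void decode(unsigned char opcode)
{
    const DecodeEntry *e = &decode_table[opcode];
    if (!e->name) {
        /* Not in the fast table: a real core might fall back to a
         * microcode ROM, as described for certain macroinstructions. */
        printf("opcode 0x%02x: fall back to microcode ROM\n", opcode);
        return;
    }
    printf("%s ->", e->name);
    for (int i = 0; i < 3 && e->uops[i]; i++)
        printf(" %s", e->uops[i]);
    printf("\n");
}

int main(void)
{
    decode(0x01);
    decode(0x7f);   /* unlisted opcode: handled by microcode */
    return 0;
}
```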
[00290] The local subset of the L2 cache 2904 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 2904. Data read by a processor core is stored in its L2 cache subset 2904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 2904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring datapath is 1012-bits wide per direction.

[00291] Figure 29B is an expanded view of part of the processor core in Figure 29A according to embodiments. Figure 29B includes an L1 data cache 2906A, part of the L1 cache 2906, as well as more detail regarding the vector unit 2910 and the vector registers 2914. Specifically, the vector unit 2910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 2928), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 2920, numeric conversion with numeric convert units 2922A-B, and replication with replication unit 2924 on the memory input. Write mask registers 2926 allow predicating resulting vector writes.

[00292] Figure 30 is a block diagram of a processor 3000 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments. The solid lined boxes in Figure 30 illustrate a processor 3000 with a single core 3002A, a system agent 3010, and a set of one or more bus controller units 3016, while the optional addition of the dashed lined boxes illustrates an alternative processor 3000 with multiple cores 3002A-N, a set of one or more integrated memory controller unit(s) 3014 in the system agent unit 3010, and special purpose logic 3008.

[00293] Thus, different implementations of the processor 3000 may include: 1) a CPU with the special purpose logic 3008 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 3002A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 3002A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 3002A-N being a large number of general purpose in-order cores. Thus, the processor 3000 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor
may be implemented on one or more chips. The processor 3000 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

[00294] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 3006, and external memory (not shown) coupled to the set of integrated memory controller units 3014. The set of shared cache units 3006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 3012 interconnects the special purpose logic 3008 (integrated graphics logic is an example of and is also referred to herein as special purpose logic), the set of shared cache units 3006, and the system agent unit 3010/integrated memory controller unit(s) 3014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 3006 and cores 3002A-N.

[00295] In some embodiments, one or more of the cores 3002A-N are capable of multithreading. The system agent 3010 includes those components coordinating and operating cores 3002A-N. The system agent unit 3010 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 3002A-N and the special purpose logic 3008. The display unit is for driving one or more externally connected displays.

[00296] The cores 3002A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 3002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

EXEMPLARY COMPUTER ARCHITECTURES

[00297] Figures 31-34 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.
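Before turning to the exemplary systems of Figures 31-34, the following C sketch returns to the per-core L2 subsets of paragraph [00290] and illustrates one way a physical address could be assigned to a core's local L2 slice. The modulo hash over cache-line indices is an assumption made purely for illustration; real slice-selection hashes are implementation specific and are not disclosed here.

```c
/* Minimal sketch of mapping a physical address to one of the per-core
 * L2 subsets described in paragraph [00290]. The hash is hypothetical. */
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES  8
#define LINE_BYTES 64

/* Return the core whose local L2 subset would hold this address. */
static unsigned l2_slice_for(uint64_t phys_addr)
{
    uint64_t line = phys_addr / LINE_BYTES; /* cache-line index       */
    return (unsigned)(line % NUM_CORES);    /* simple modulo "hash"   */
}

int main(void)
{
    uint64_t addrs[] = { 0x1000, 0x1040, 0x1080, 0x10C0 };
    for (unsigned i = 0; i < 4; i++)
        printf("addr 0x%llx -> L2 subset of core %u\n",
               (unsigned long long)addrs[i], l2_slice_for(addrs[i]));
    return 0;
}
```

Consecutive cache lines land in different subsets under this toy hash, which is one way parallel accesses by different cores can proceed without contending for the same slice.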
[00298] Referring now to Figure 31, shown is a block diagram of a system 3100 in accordance with one embodiment of the present disclosure. The system 3100 may include one or more processors 3110, 3115, which are coupled to a controller hub 3120. In one embodiment the controller hub 3120 includes a graphics memory controller hub (GMCH) 3190 and an Input/Output Hub (IOH) 3150 (which may be on separate chips); the GMCH 3190 includes memory and graphics controllers to which are coupled memory 3140 and a coprocessor 3145; the IOH 3150 couples input/output (I/O) devices 3160 to the GMCH 3190. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 3140 and the coprocessor 3145 are coupled directly to the processor 3110, and the controller hub 3120 is in a single chip with the IOH 3150. Memory 3140 may include matrix acceleration code 3140A, for example, that stores code that when executed causes a processor to perform any method of this disclosure.

[00299] The optional nature of additional processors 3115 is denoted in Figure 31 with broken lines. Each processor 3110, 3115 may include one or more of the processing cores described herein and may be some version of the processor 3000.

[00300] The memory 3140 may be, for example, dynamic random-access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 3120 communicates with the processor(s) 3110, 3115 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 3195.

[00301] In one embodiment, the coprocessor 3145 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 3120 may include an integrated graphics accelerator.

[00302] There can be a variety of differences between the physical resources 3110, 3115 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

[00303] In one embodiment, the processor 3110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 3110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 3145. Accordingly, the processor 3110 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 3145. Coprocessor(s) 3145 accept and execute the received coprocessor instructions.
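For illustration only, the following C sketch captures the dispatch decision of paragraph [00303]: instructions of a general type execute on the host processor, while instructions recognized as coprocessor instructions are forwarded to coprocessor 3145. The instruction class tag and mnemonics are hypothetical and are not taken from the disclosure.

```c
/* Minimal sketch of the coprocessor-dispatch behavior described in
 * paragraph [00303]. The type tag and mnemonics are hypothetical. */
#include <stdio.h>

typedef enum { INSN_GENERAL, INSN_COPROCESSOR } InsnClass;

typedef struct {
    InsnClass cls;
    const char *text;
} Insn;

static void execute_locally(const Insn *i) { printf("cpu:    %s\n", i->text); }
static void issue_to_coproc(const Insn *i) { printf("coproc: %s\n", i->text); }

static void dispatch(const Insn *i)
{
    if (i->cls == INSN_COPROCESSOR)
        issue_to_coproc(i);   /* sent over a coprocessor bus or interconnect */
    else
        execute_locally(i);
}

int main(void)
{
    Insn prog[] = {
        { INSN_GENERAL,     "add r1, r2"     },
        { INSN_COPROCESSOR, "tilematmul t0"  },
        { INSN_GENERAL,     "store r1, [sp]" },
    };
    for (unsigned k = 0; k < 3; k++)
        dispatch(&prog[k]);
    return 0;
}
```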
[00304] Referring now to Figure 32, shown is a block diagram of a first more specific exemplary system 3200 in accordance with an embodiment of the present disclosure. As shown in Figure 32, multiprocessor system 3200 is a point-to-point interconnect system, and includes a first processor 3270 and a second processor 3280 coupled via a point-to-point interconnect 3250. Each of processors 3270 and 3280 may be some version of the processor 3000. In one embodiment, processors 3270 and 3280 are respectively processors 3110 and 3115, while coprocessor 3238 is coprocessor 3145. In another embodiment, processors 3270 and 3280 are respectively processor 3110 and coprocessor 3145.

[00305] Processors 3270 and 3280 are shown including integrated memory controller (IMC) units 3272 and 3282, respectively. Processor 3270 also includes as part of its bus controller units point-to-point (P-P) interfaces 3276 and 3278; similarly, second processor 3280 includes P-P interfaces 3286 and 3288. Processors 3270, 3280 may exchange information via a point-to-point (P-P) interface 3250 using P-P interface circuits 3278, 3288. As shown in Figure 32, IMCs 3272 and 3282 couple the processors to respective memories, namely a memory 3232 and a memory 3234, which may be portions of main memory locally attached to the respective processors.

[00306] Processors 3270, 3280 may each exchange information with a chipset 3290 via individual P-P interfaces 3252, 3254 using point to point interface circuits 3276, 3294, 3286, 3298. Chipset 3290 may optionally exchange information with the coprocessor 3238 via a high-performance interface 3292. In one embodiment, the coprocessor 3238 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

[00307] A shared cache (not shown) may be included in either processor or outside of both processors yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

[00308] Chipset 3290 may be coupled to a first bus 3216 via an interface 3296. In one embodiment, first bus 3216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

[00309] As shown in Figure 32, various I/O devices 3214 may be coupled to first bus 3216, along with a bus bridge 3218 which couples first bus 3216 to a second bus 3220. In one embodiment, one or more additional processor(s) 3215, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to
first bus 3216. In one embodiment, second bus 3220 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 3220 including, for example, a keyboard and/or mouse 3222, communication devices 3227 and a storage unit 3228 such as a disk drive or other mass storage device which may include instructions/code and data 3230, in one embodiment. Further, an audio I/O 3224 may be coupled to the second bus 3220. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 32, a system may implement a multi-drop bus or other such architecture.

[00310] Referring now to Figure 33, shown is a block diagram of a second more specific exemplary system 3300 in accordance with an embodiment of the present disclosure. Like elements in Figures 32 and 33 bear like reference numerals, and certain aspects of Figure 32 have been omitted from Figure 33 in order to avoid obscuring other aspects of Figure 33.

[00311] Figure 33 illustrates that the processors 3270, 3280 may include integrated memory and I/O control logic ("CL") 3372 and 3382, respectively. Thus, the CL 3372, 3382 include integrated memory controller units and include I/O control logic. Figure 33 illustrates that not only are the memories 3232, 3234 coupled to the CL 3372, 3382, but also that I/O devices 3314 are also coupled to the control logic 3372, 3382. Legacy I/O devices 3315 are coupled to the chipset 3290.

[00312] Referring now to Figure 34, shown is a block diagram of a SoC 3400 in accordance with an embodiment of the present disclosure. Similar elements in Figure 30 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 34, an interconnect unit(s) 3402 is coupled to: an application processor 3410 which includes a set of one or more cores 3002A-N, which include cache units 3004A-N, and shared cache unit(s) 3006; a system agent unit 3010; a bus controller unit(s) 3016; an integrated memory controller unit(s) 3014; a set of one or more coprocessors 3420 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 3430; a direct memory access (DMA) unit 3432; and a display unit 3440 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 3420 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

[00313] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
[00314] Program code, such as code 3230 illustrated in Figure 32, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

[00315] The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

[00316] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

[00317] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

[00318] Accordingly, embodiments also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

EMULATION (INCLUDING BINARY TRANSLATION, CODE MORPHING, ETC.)

[00319] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may
translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

[00320] Figure 35 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 35 shows that a program in a high-level language 3502 may be compiled using an x86 compiler 3504 to generate x86 binary code 3506 that may be natively executed by a processor with at least one x86 instruction set core 3516. The processor with at least one x86 instruction set core 3516 represents any processor that can perform substantially the same functions as an Intel® processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel® x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel® processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel® processor with at least one x86 instruction set core. The x86 compiler 3504 represents a compiler that is operable to generate x86 binary code 3506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 3516. Similarly, Figure 35 shows that the program in the high-level language 3502 may be compiled using an alternative instruction set compiler 3508 to generate alternative instruction set binary code 3510 that may be natively executed by a processor without at least one x86 instruction set core 3514 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 3512 is used to convert the x86 binary code 3506 into code that may be natively executed by the processor without an x86 instruction set core 3514. This converted code is not likely to be the same as the alternative instruction set binary code 3510 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 3512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 3506.
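For illustration only, the following C sketch shows the essence of the instruction-converter idea of paragraphs [00319]-[00320]: each source-set instruction is mapped to one or more target-set instructions, with a fallback for instructions that cannot be translated directly. The two toy instruction sets are invented for this example; a real converter must also handle architectural state mapping, branch targets, and self-modifying code.

```c
/* Minimal sketch of a table-driven binary translator in the spirit of
 * paragraphs [00319]-[00320]. Both instruction sets are hypothetical. */
#include <stdio.h>
#include <string.h>

/* Map one source-set mnemonic to a target-set sequence. */
static const char *translate(const char *src)
{
    if (strcmp(src, "src.add") == 0)  return "tgt.add";
    if (strcmp(src, "src.push") == 0) return "tgt.subsp; tgt.store";
    return "tgt.trap_to_emulator";    /* untranslatable: emulate instead */
}

int main(void)
{
    const char *program[] = { "src.add", "src.push", "src.weird" };
    for (unsigned i = 0; i < 3; i++)
        printf("%-9s -> %s\n", program[i], translate(program[i]));
    return 0;
}
```

As the fallback line suggests, translated code need not match what a native compiler would emit, echoing the observation above that the converted code is unlikely to equal the alternative instruction set binary code 3510.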
FURTHER EXAMPLES

[00321] At least some embodiments of the disclosed technologies can be described in view of the following examples:

Example 1. An apparatus comprising: fetch circuitry to fetch a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the source matrices having elements that each comprise a pair of half-precision floating-point values, the opcode to indicate execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix; decode circuitry to decode the fetched instruction; and the execution circuitry to respond to the decoded instruction as specified by the opcode.

Example 2. The apparatus of example 1, wherein the half-precision floating-point format is specified by the opcode of the single instruction.

Example 3. The apparatus of example 1, wherein M, N, and K are specified by the single instruction.

Example 4. The apparatus of example 1, wherein the execution circuitry is to cause a matrix operations accelerator to perform at least the multiplications and the accumulation.

Example 5. The apparatus of example 4, wherein M, N, and K are specified by a configuration of the matrix operations accelerator to be programmed by execution of a matrix accelerator configuration instruction before executing the single instruction.

Example 6. The apparatus of example 1, wherein the execution circuitry is further to cause saturation of execution results, as necessary.
Example 7. The apparatus of example 1, wherein the single instruction is further to specify a writemask comprising M x N bits, each bit to control whether to mask a corresponding element of the destination matrix.

Example 8. The apparatus of example 1, wherein the execution circuitry is further to generate a fault when a fault condition occurs, the fault condition selectable from: the destination matrix having a fewer number of rows than a number of rows of the first source matrix; and the destination matrix having a fewer number of columns than a number of columns of the second source matrix.

Example 9. A method comprising: fetching, by fetch circuitry of a processor, a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the source matrices having elements that each comprise a pair of half-precision floating-point values, the opcode to indicate execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the second result with previous contents of a corresponding element of the destination matrix; decoding, by decode circuitry of the processor, the fetched instruction into a decoded single instruction; and executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.

Example 10. The method of example 9, wherein the half-precision floating-point format is specified by the opcode of the single instruction.

Example 11. The method of example 9, wherein M, N, and K are specified by the single instruction.
Example 12. The method of example 9, wherein the execution circuitry causes a matrix operations accelerator to perform at least the multiplications and the accumulation.

Example 13. The method of example 12, further comprising executing, by the execution circuitry of the processor before executing the single instruction, a matrix accelerator configuration instruction that programs a configuration of the matrix operations accelerator specifying M, N, and K.

Example 14. The method of example 9, wherein the executing comprises saturating the execution results.

Example 15. The method of example 9, wherein the single instruction further specifies a writemask comprising M x N bits, each bit controlling whether to mask a corresponding element of the destination matrix.

Example 16. The method of example 9, wherein the executing generates a fault when a fault condition occurs, the fault condition selectable from: the destination matrix having a fewer number of rows than a number of rows of the first source matrix; and the destination matrix having a fewer number of columns than a number of columns of the second source matrix.

Example 17. A non-transitory machine readable medium that stores program code that when executed by a machine causes the machine to perform a method comprising: fetching, by fetch circuitry of a processor, a single instruction having fields to specify an opcode and locations of an M by N destination matrix having single-precision elements, an M by K first source matrix, and a K by N second source matrix, the source matrices having elements that each comprise a pair of half-precision floating-point values, the opcode to indicate execution circuitry is to cause, for each element of the first source matrix and corresponding element of the second source matrix, a conversion of the half-precision floating-point values to single-precision values, a multiplication of converted single-precision values from first values of the pairs together to generate a first result, a multiplication of converted single-precision values from second values of the pairs together to generate a second result, and an accumulation of the first result and the
second result with previous contents of a corresponding element of the destination matrix; decoding, by decode circuitry of the processor, the fetched instruction into a decoded single instruction; and executing, by the execution circuitry of the processor, the decoded single instruction according to the opcode.

Example 18. The non-transitory machine readable medium of example 17, wherein the half-precision floating-point format is specified by the opcode of the single instruction.

Example 19. The non-transitory machine readable medium of example 17, wherein M, N, and K are specified by the single instruction.

Example 20. The non-transitory machine readable medium of example 17, wherein the executing comprises the execution circuitry causing a matrix operations accelerator to perform at least the multiplications and the accumulation.

Example 21. The non-transitory machine readable medium of example 20, wherein the method further comprises executing, by the execution circuitry of the processor before executing the single instruction, a matrix accelerator configuration instruction that programs a configuration of the matrix operations accelerator specifying M, N, and K.

Example 22. The non-transitory machine readable medium of example 17, wherein the executing comprises saturating the execution results.

Example 23. The non-transitory machine readable medium of example 17, wherein the single instruction further specifies a writemask comprising M x N bits, each bit controlling whether to mask a corresponding element of the destination matrix.

Example 24. The non-transitory machine readable medium of example 17, wherein the executing generates a fault when a fault condition occurs, the fault condition selectable from: the destination matrix having a fewer number of rows than a number of rows of the first source matrix; and the destination matrix having a fewer number of columns than a number of columns of the second source matrix.
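For illustration only, the following C sketch is a software reference model of the arithmetic specified by Examples 1, 9, and 17: an M x N single-precision destination accumulates, for each element pair of the M x K and K x N sources, the products of the first values and of the second values of the half-precision pairs. Half-precision storage and rounding are not modeled; the pairs are held as floats, standing in for the "convert to single precision" step. Saturation (Example 6) and the writemask (Example 7) are likewise omitted.

```c
/* Minimal reference model of the matrix instruction of Examples 1/9/17.
 * FP16 encoding, saturation, writemasking, and fault checks are omitted. */
#include <stdio.h>

#define M 2
#define N 2
#define K 2

typedef struct { float v0, v1; } HalfPair;  /* a pair of FP16 values, pre-converted */

static void tile_dp_fp16_pairs(float dst[M][N],
                               const HalfPair a[M][K],
                               const HalfPair b[K][N])
{
    for (int m = 0; m < M; m++)
        for (int n = 0; n < N; n++)
            for (int k = 0; k < K; k++) {
                float r0 = a[m][k].v0 * b[k][n].v0; /* first values of the pairs  */
                float r1 = a[m][k].v1 * b[k][n].v1; /* second values of the pairs */
                dst[m][n] += r0 + r1;  /* accumulate with previous destination contents */
            }
}

int main(void)
{
    float dst[M][N] = { {0, 0}, {0, 0} };
    HalfPair a[M][K] = { {{1, 2}, {3, 4}}, {{5, 6}, {7, 8}} };
    HalfPair b[K][N] = { {{1, 1}, {2, 2}}, {{3, 3}, {4, 4}} };
    tile_dp_fp16_pairs(dst, a, b);
    for (int m = 0; m < M; m++)
        printf("%6.1f %6.1f\n", dst[m][0], dst[m][1]);
    return 0;
}
```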
In certain aspects, a semiconductor die includes a power rail, a first gate, and a second gate. The semiconductor die also includes a first gate contact electrically coupled to the first gate, wherein the first gate contact is formed from a first middle of line (MOL) metal layer, and a second gate contact electrically coupled to the second gate, wherein the second gate contact is formed from the first MOL metal layer. The semiconductor die further includes an interconnect formed from a second MOL metal layer, wherein the interconnect is electrically coupled to the first and second gate contacts, and at least a portion of the interconnect is underneath the power rail. |
1. A semiconductor die (110), comprising:
a power rail (412) formed from a first metal layer (M1);
a first gate (420b, 420b-1);
a second gate (420d, 420d-1);
a first gate contact (820, 1020) electrically coupled to the first gate, wherein the first gate contact is formed from a first middle of line, MOL, (CB) metal layer;
a second gate contact (825, 1025) electrically coupled to the second gate, wherein the second gate contact is formed from the first MOL metal layer (CB); and
an interconnect (810, 1010) formed from a second MOL metal layer (CA), wherein the interconnect is electrically coupled to the first and second gate contacts, and acts as an electrical bridge between the first and the second gate contacts, and at least a portion of the interconnect is underneath the power rail,
wherein further the first and second MOL metal layers (CB, CA) are formed at lower levels than the first metal layer (M1).

2. The semiconductor die of claim 1, wherein the interconnect makes contact with sidewalls of the first and second gate contacts.

3. The semiconductor die of claim 1, further comprising:
a source, wherein the interconnect is electrically coupled to the source; and
a via electrically coupled between the interconnect and the power rail.

4. The semiconductor die of claim 3, further comprising a source contact disposed between the source and the interconnect.

5. The semiconductor die of claim 4, wherein the source contact comprises trench silicide.

6. The semiconductor die of claim 3, wherein the source and the first gate are part of a p-type field effect transistor, PFET, and the power rail has a supply voltage of Vdd.

7. The semiconductor die of claim 3, wherein the source and the first gate are part of an n-type field effect transistor, NFET, and the power rail is grounded.

8. The semiconductor die of claim 1, further comprising:
a first source, wherein the first source and the first gate are part of a first transistor;
a second source, wherein the second source and the second gate are part of a second transistor, and the interconnect is electrically coupled to the first source and the second source; and
a via electrically coupled between the interconnect and the power rail.

9. The semiconductor die of claim 1, further comprising:
a source/drain;
a source/drain contact electrically coupled to the source/drain, wherein the source/drain contact is formed from the second MOL metal layer; and
a via electrically coupled between the source/drain contact and the power rail.

10. A semiconductor die (110), comprising:
a power rail (412) formed from a first metal layer (M1);
a gate;
a source;
a gate contact electrically coupled to the gate, wherein the gate contact is formed from a first middle of line, MOL, metal layer (CB); and
an interconnect formed from a second MOL metal layer (CA), wherein the interconnect is electrically coupled to the gate contact and the source, and acts as an electrical bridge between the gate and the source contacts, and at least a portion of the interconnect is underneath the power rail,
wherein further the first and second MOL metal layers (CB, CA) are formed at lower levels than the first metal layer (M1).

11. The semiconductor die of claim 10, wherein the source contact comprises trench silicide.

12. The semiconductor die of claim 10, wherein the interconnect makes contact with a sidewall of the gate contact.

13. The semiconductor die of claim 10, further comprising a via electrically coupled between the interconnect and the power rail.

14. The semiconductor die of one of the preceding claims, wherein the semiconductor die forms a semiconductor chip (110).
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to and the benefit of Non-Provisional Application No. 14/936,459 filed in the U.S. Patent and Trademark Office on November 9, 2015, the entire content of which is incorporated herein by reference.

BACKGROUND

Field

Aspects of the present disclosure relate generally to routing on a die, and more particularly, to middle of line (MOL) routing on a die.

Background

A semiconductor die may include many semiconductor devices (e.g., transistors). The semiconductor devices may be interconnected by one or more metal layers to form integrated circuits. As the dimensions of devices scale down, routing congestion on a die increases, making it more difficult to interconnect devices on the die.

SUMMARY

The following presents a simplified summary of one or more embodiments in order to provide a basic understanding of such embodiments. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor delineate the scope of any or all embodiments. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later.

According to an aspect, a semiconductor die is provided. The semiconductor die includes a power rail, a first gate, and a second gate. The semiconductor die also includes a first gate contact electrically coupled to the first gate, wherein the first gate contact is formed from a first middle of line (MOL) metal layer, and a second gate contact electrically coupled to the second gate, wherein the second gate contact is formed from the first MOL metal layer. The semiconductor die further includes an interconnect formed from a second MOL metal layer, wherein the interconnect is electrically coupled to the first and second gate contacts, and at least a portion of the interconnect is underneath the power rail.

A second aspect relates to a semiconductor die. The semiconductor die includes a power rail, a gate, and a source. The semiconductor die also includes a gate contact electrically coupled to the gate, wherein the gate contact is formed from a first middle of line (MOL) metal layer. The semiconductor die also includes an interconnect formed from a second MOL metal layer, wherein the interconnect is electrically coupled to the gate contact and the source, and at least a portion of the interconnect is underneath the power rail.

A third aspect relates to a semiconductor die. The semiconductor die includes a power rail, a first cell including a first plurality of gates and a first plurality of sources/drains, and a second cell including a second plurality of gates and a second plurality of sources/drains. The semiconductor die also includes a first gate contact electrically coupled to one of the first plurality of gates, wherein the first gate contact is formed from a first middle of line (MOL) metal layer, and a second gate contact electrically coupled to one of the second plurality of gates, wherein the second gate contact is formed from the first MOL metal layer. The semiconductor die also includes an interconnect formed from a second MOL metal layer, wherein the interconnect is electrically coupled to the first and second gate contacts, and is routed underneath the power rail.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a side view of an exemplary semiconductor die according to certain aspects of the present disclosure.

FIG. 2 shows a side view of an exemplary structure for coupling a gate to a metal layer according to certain aspects of the present disclosure.

FIG. 3 shows a side view of an exemplary structure for coupling a source/drain to a metal layer according to certain aspects of the present disclosure.

FIG. 4 shows a top view of an exemplary semiconductor die according to certain aspects of the present disclosure.

FIG. 5 shows exemplary local interconnects formed from metal layer M1 according to certain aspects of the present disclosure.

FIGS. 6A and 6B show side views of the exemplary local interconnects in FIG. 5 according to certain aspects of the present disclosure.

FIG. 7 shows a top view of an exemplary interconnect formed from metal layer M1 and used for routing between cells according to certain aspects of the present disclosure.

FIGS. 8A and 8B show top views of an exemplary interconnect providing routing under a metal M1 power rail according to certain aspects of the present disclosure.

FIG. 9 shows a side view of the interconnect in FIGS. 8A and 8B according to certain aspects of the present disclosure.

FIGS. 10A and 10B show top views of another exemplary interconnect providing routing under a metal M1 power rail according to certain aspects of the present disclosure.

FIG. 11 shows a side view of the interconnect in FIGS. 10A and 10B according to certain aspects of the present disclosure.

FIG. 12 shows a circuit diagram of multiple transistors, in which the sources and gates of the transistors are tied off to provide an electrical barrier according to certain aspects of the present disclosure.

FIGS. 13A and 13B show top views of an interconnect under a metal M1 power rail for implementing the circuit in FIG. 12 according to certain aspects of the present disclosure.

FIG. 14 shows a side view of the interconnect in FIGS. 13A and 13B according to certain aspects of the present disclosure.

DETAILED DESCRIPTION

The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.
Typically, metal layer M1 is formed over semiconductor devices (e.g., transistors) on the die, and is coupled to the semiconductor devices using contact structures, as discussed further below. Metal layer M1 may be used to provide power rails for semiconductor devices on the die, to couple semiconductor devices on the die to one another, and/or to couple semiconductor devices on the die to higher metal layers. In this regard, FIG. 1 shows an example of a transistor 120 (e.g., field effect transistor) under metal layer M1. The transistor 120 is formed on a substrate 150 of the die before the metals M1-M9 are deposited over the transistor 120. The transistor 120 includes a gate 130, a first source/drain 140a, and a second source/drain 140b. The gate 130 may include polysilicon and/or other material. As used herein, the term "source/drain" indicates that the corresponding structure can act as a source or a drain. Although one transistor 120 is shown in FIG. 1 for ease of illustration, it is to be appreciated that the semiconductor die 110 includes many transistors. It is also to be appreciated that FIG. 1 is not drawn to scale, and that the thicknesses of the metal layers M1-M9 and the spaces between adjacent metal layers may vary on the die.

The transistor 120 may be formed on the die using a planar process or a FinFET process. FIG. 1 shows an example in which the transistor 120 is formed using a planar process. In this example, the transistor 120 includes a gate dielectric 145 (e.g., gate oxide) between the gate and channel of the transistor. For a transistor formed using a FinFET process (not shown), the transistor may include one or more fins between the first source/drain and second source/drain to form the channel of the transistor, and a gate formed over the fins. In either case, a transistor includes a gate, a first source/drain, and a second source/drain. Accordingly, it is to be appreciated that aspects of the present disclosure are applicable to transistors formed using planar processes and FinFET processes.

FIG. 2 shows an exemplary contact structure 250 for coupling a gate 230 of a transistor (e.g., transistor 120) to metal layer M1. In this example, the contact structure 250 is disposed between the gate 230 and metal layer M1, and electrically couples the gate 230 to metal layer M1. The contact structure 250 includes a gate contact 252 formed from a first middle of line (MOL) metal layer (denoted "CB" in FIG. 2). As used herein, the term "MOL" refers to a layer under metal layer M1. The first MOL metal layer (CB) may include tungsten, copper and/or other conductive material. The gate contact 252 may make contact with the top of the gate 230, as shown in FIG. 2. The contact structure 250 also includes a via 255 disposed between the gate contact 252 and metal layer M1. The via 255 electrically couples the gate contact 252 to metal layer M1.

FIG. 3 shows an exemplary contact structure 350 for coupling a source/drain 340 of a transistor (e.g., transistor 120) to metal layer M1. In this example, the contact structure 350 is disposed between the source/drain 340 and metal layer M1, and electrically couples the source/drain to metal layer M1. The contact structure 350 includes a first source/drain contact 352 formed on top of the source/drain 340. The first source/drain contact 352 may be formed from a trench silicide (TS) layer. In this example, the first source/drain contact 352 may be formed by etching a trench in a dielectric material and filling the trench with a silicide material.
The contact structure 350 also includes a second source/drain contact 355 formed from a second MOL metal layer (denoted "CA" in FIG. 3). The second MOL metal layer (CA) may include tungsten, copper and/or other conductive material. The second source/drain contact 355 is stacked on top of the first source/drain contact 352 in the example in FIG. 3. The contact structure 350 further includes a via 357 disposed between the second source/drain contact 355 and metal layer M1. The via 357 electrically couples the second source/drain contact 355 to metal layer M1. The vias 255 and 357 in FIGS. 2 and 3 may be formed from the same conductive layer.

Thus, the first and second MOL metal layers (CB and CA) are conventionally used to couple the gates and sources/drains, respectively, to metal layer M1 and/or other upper metal layers. The first and second MOL metal layers (CB and CA) are not conventionally used to form a global feature. As used herein, a "global feature" may refer to a structure that is shared by multiple cells. An example of a global feature is a power rail formed from metal layer M1, as discussed further below.

FIG. 4 shows an exemplary top view of the die 110 with metal layers M2-M9 removed for ease of illustration. In this example, the die 110 includes a first power rail 410 formed from metal layer M1, a second power rail 412 formed from metal layer M1, and a third power rail 414 formed from metal layer M1. The power rails 410, 412 and 414 may be formed from metal layer M1 using any metal etching process known in the art. The first and third power rails 410 and 414 may have a supply voltage of Vdd and the second power rail 412 may have a supply voltage of Vss, in which Vdd may be a positive voltage and Vss may be approximately zero volts (e.g., ground) or another voltage lower than Vdd. Alternatively, the first and third power rails 410 and 414 may have a supply voltage of Vss and the second power rail 412 may have a supply voltage of Vdd. It is to be appreciated that each of the power rails 410, 412 and 414 may be longer than shown in FIG. 4. In FIG. 4, structures under the power rails 410, 412 and 414 are shown in dashed lines.

The die includes a first cell 450 and a second cell 455. In the example in FIG. 4, a boundary between the first and second cells 450 and 455 lies under the second power rail 412, although it is to be appreciated that this need not be the case. It is to be appreciated that the die may include many more cells than shown in FIG. 4.

The first cell 450 includes multiple gates 420a-420f (e.g., polysilicon gates) and multiple sources/drains 430a-430j, which form multiple transistors within the first cell 450. In this regard, FIG. 4 shows an example of a transistor 460 formed from gate 420b, source/drain 430a and source/drain 430b. In the example in FIG. 4, the gates 420a-420f are elongated and arranged in parallel with one or more sources/drains between adjacent gates. For the example in which the gates 420a-420f include polysilicon, the gates may be referred to as poly conductors (PCs).

The transistors within the first cell 450 may be interconnected by local interconnects (not shown in FIG. 4) to form an integrated circuit that performs one or more functions. For example, the transistors may be interconnected to form a circuit that performs a logic function (AND gate, OR gate, XOR gate, etc.), a circuit that performs a storage function (flip-flop, latch, etc.), a circuit that performs a multiplexing function (e.g., multiplexer), etc.
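For illustration only, the following C sketch models the two contact stacks described above for FIGS. 2 and 3 as simple layer sequences, making the relative levels of the TS, CA, CB, via, and M1 layers explicit. The enum labels are illustrative only and are not process-kit terminology.

```c
/* Minimal data model of the contact stacks of FIGS. 2 and 3:
 * structure 250: gate -> CB gate contact -> via -> M1;
 * structure 350: source/drain -> TS -> CA -> via -> M1. */
#include <stdio.h>

typedef enum { LAYER_TS, LAYER_CA, LAYER_CB, LAYER_VIA, LAYER_M1 } Layer;

static const char *layer_name[] = { "TS", "CA", "CB", "via", "M1" };

static void print_stack(const char *what, const Layer *stack, int n)
{
    printf("%s:", what);
    for (int i = 0; i < n; i++)
        printf(" %s", layer_name[stack[i]]);
    printf("\n");
}

int main(void)
{
    Layer gate_stack[] = { LAYER_CB, LAYER_VIA, LAYER_M1 };
    Layer sd_stack[]   = { LAYER_TS, LAYER_CA, LAYER_VIA, LAYER_M1 };

    print_stack("gate contact (250)",         gate_stack, 3);
    print_stack("source/drain contact (350)", sd_stack,   4);
    return 0;
}
```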
The transistors within the first cell 450 may be powered by the first and second power rails 410 and 412, which provide voltages Vdd and Vss, as discussed above. In this regard, one or more of the transistors within the first cell 450 may be coupled to the power rails by one or more contact structures (e.g., contact structures 250 and 350 shown in FIGS. 2 and 3).

The second cell 455 includes multiple gates 420b-420d, 420g, 420h and 420i (e.g., polysilicon gates) and multiple sources/drains 430k-430t, which form multiple transistors within the second cell 455. In the example shown in FIG. 4, gates 420b-420d in the first cell 450 extend underneath the second power rail 412 into the second cell 455. In the example in FIG. 4, the gates 420b-420d, 420g, 420h and 420i are elongated and arranged in parallel with one or more sources/drains between adjacent gates.

The transistors within the second cell 455 may be interconnected by local interconnects (not shown in FIG. 4) to form an integrated circuit that performs one or more functions (e.g., any of the functions discussed above). The transistors within the second cell 455 may be powered by the second and third power rails 412 and 414, which provide voltages Vdd and Vss, as discussed above. In this regard, one or more of the transistors within the second cell 455 may be coupled to the power rails by one or more contact structures (e.g., contact structures 250 and 350 shown in FIGS. 2 and 3).

As discussed above, transistors in a cell may be interconnected to form a circuit. In this regard, FIG. 5 shows an example of a first local interconnect 510 and a second local interconnect 550 in the first cell 450, where each local interconnect is formed from metal layer M1. For ease of illustration, the reference numbers for the sources/drains 430a-430t shown in FIG. 4 are omitted from FIG. 5.

In this example, the first local interconnect 510 couples gates 420d and 420e, in which the first local interconnect 510 is coupled to gate 420d by a first contact structure 520 and is coupled to gate 420e by a second contact structure 525. In this regard, FIG. 6A shows a side view of the first local interconnect 510, in which the first contact structure 520 includes a first gate contact 652a and a first via 655a, and the second contact structure 525 includes a second gate contact 652b and a second via 655b.

Returning to FIG. 5, the second local interconnect 550 couples gate 420b and source/drain 430b, in which the second local interconnect 550 is coupled to gate 420b by a third contact structure 555 and is coupled to source/drain 430b by a fourth contact structure 560. In this regard, FIG. 6B shows a side view of the second local interconnect 550, in which the third contact structure 555 includes a gate contact 662 and a first via 665, and the fourth contact structure 560 includes a first source/drain contact 672 (e.g., TS), a second source/drain contact 675, and a second via 678.

In general, metal layer M1 may be used to form local interconnects that interconnect gates, interconnect one or more gates to one or more sources/drains, interconnect sources/drains, etc. Thus, metal layer M1 may be patterned to form local interconnects that interconnect the transistors in a cell to form a circuit. As discussed further below, metal layer M1 can also be used to form global interconnects between two or more cells.

FIG. 7 shows an example in which metal layer M1 is used to form an interconnect 710 that couples transistors in the first and second cells 450 and 455.
In this example, the interconnect 710 is coupled to gate 420b in the first cell 450 by a first contact structure 720, and to gate 420d in the second cell 455 by a second contact structure 725. Each of the first and second contact structures 720 and 725 may include the contact structure 250 shown in FIG. 2. As shown in FIG. 7, the interconnect 710 provides routing between the first and second cells 450 and 455. Thus, metal layer M1 can be used for both local routing and global routing.

However, a drawback with using metal layer M1 for the interconnect 710 is that it requires a break in the second power rail 412 for the interconnect 710 to pass through, as shown in FIG. 7. This is because the interconnect 710 and the second power rail 412 are formed from the same metal layer (i.e., metal layer M1). In the example in FIG. 7, the second power rail 412 is broken into a first portion 412a and a second portion 412b to accommodate the interconnect 710. The break in the power rail causes power integrity degradation, which may cause time-sensitive circuits to malfunction. The power integrity degradation may include increased IR voltage drops in the power network and/or electromagnetic emissions. Another drawback is that the interconnect 710 occupies a significant area, causing metal routing congestion.

In general, metal layer M1 routing is limited when it conflicts with a structure formed from metal layer M1 such as a power rail (e.g., the second power rail 412). In this case, either the metal layer M1 routing has to be routed around the structure (which may not be possible in many cases) or the structure has to be broken to allow the metal layer M1 routing to pass through (which may cause one or more of the problems discussed above).

Aspects of the present disclosure provide routing using the first and second MOL metal layers (CB and CA) that avoids one or more of the above drawbacks, as discussed further below.

FIGS. 8A and 8B show top views of an exemplary interconnect 810 according to certain aspects of the present disclosure. FIG. 8A shows the interconnect 810 with the second power rail 412, and FIG. 8B shows the interconnect 810 with the second power rail 412 removed to provide an unobstructed view of the interconnect 810. FIG. 9 shows a side view of the interconnect 810.

The interconnect 810 provides routing between gate 420b and gate 420d underneath the second power rail 412. The interconnect 810 is formed from the second MOL metal layer (CA), which is the same metal layer used to form source/drain contacts (e.g., source/drain contacts 355), as discussed above. As shown in FIG. 9, the interconnect 810 extends between and makes contact with gate contacts 820 and 825 on gates 420b and 420d, respectively. In this respect, the interconnect 810 acts as an electrical bridge between the gate contacts 820 and 825. In the example in FIG. 9, the interconnect 810 makes contact with sidewalls of the gate contacts 820 and 825.

Thus, the interconnect 810 (which is formed from the second MOL metal layer (CA)) and gate contacts 820 and 825 (which are formed from the first MOL metal layer (CB)) electrically couple gates 420b and 420d. Since the first MOL metal layer (CB) and the second MOL metal layer (CA) are formed at lower levels than metal layer M1, the interconnect 810 and gate contacts 820 and 825 are able to provide routing between gates 420b and 420d under the second power rail 412. Consequently, there is no need to break the second power rail 412. As shown in FIGS.
8A and 9, the second power rail 412 is continuous with no breaks. Further, since the interconnect 810 and gate contacts 820 and 825 are underneath metal layer M1, they do not interfere with metal layer M1, and therefore reduce routing congestion. As a result, area efficiency is improved.
Thus, aspects of the present disclosure provide MOL routing (e.g., between transistors) using the first and second MOL metal layers (CB and CA). Since the first and second MOL metal layers (CB and CA) are formed at lower levels than metal layer M1, MOL routing is not subject to the same restrictions as metal layer M1 routing. For instance, MOL routing can cross underneath a structure (e.g., power rail) formed from metal layer M1 without requiring a break in the structure. In contrast, when metal layer M1 routing conflicts with a metal layer M1 structure (e.g., power rail), the metal layer M1 routing either has to be routed around the structure (which may not be possible in many cases) or the structure has to be broken to allow the metal layer M1 routing to pass through. Thus, MOL routing according to aspects of the present disclosure can advantageously be used in cases where routing using metal layer M1 would be highly restricted.
Although not explicitly shown in FIGS. 8A and 8B, it is to be appreciated that the die may also include local interconnects (e.g., local interconnects 510 and 550 as shown in FIG. 5) for interconnecting transistors in the first cell 450 and second cell 455 to form integrated circuits.
FIGS. 10A and 10B show top views of another exemplary interconnect 1010 according to certain aspects of the present disclosure. FIG. 10A shows the interconnect 1010 with the second power rail 412, and FIG. 10B shows the interconnect 1010 with the second power rail 412 removed to provide an unobstructed view of the interconnect 1010. FIG. 11 shows a side view of the interconnect 1010.
In this example, gate 420b in FIGS. 8A and 8B is broken into gate 420b-1 in the first cell 450 and gate 420b-2 in the second cell 455, gate 420c is broken into gate 420c-1 in the first cell 450 and gate 420c-2 in the second cell 455, and gate 420d is broken into gate 420d-1 in the first cell 450 and gate 420d-2 in the second cell 455.
In this example, the interconnect 1010 provides routing between gate 420b-1 and gate 420d-2 underneath the second power rail 412. The interconnect 1010 is formed from the second MOL metal layer (CA), which is the same metal layer used to form source/drain contacts (e.g., source/drain contacts 355), as discussed above. As shown in FIG. 11, the interconnect 1010 extends between and makes contact with gate contacts 1020 and 1025 on gates 420b-1 and 420d-2, respectively. In this respect, the interconnect 1010 acts as an electrical bridge between the gate contacts 1020 and 1025.
Thus, the interconnect 1010 (which is formed from the second MOL metal layer (CA)) and gate contacts 1020 and 1025 (which are formed from the first MOL metal layer (CB)) provide routing between gate 420b-1 in the first cell 450 and gate 420d-2 in the second cell 455. In the example in FIG. 11, the interconnect 1010 makes contact with sidewalls of the gate contacts 1020 and 1025.
Since the first MOL metal layer (CB) and the second MOL metal layer (CA) are formed at lower levels than metal layer M1, the interconnect 1010 and gate contacts 1020 and 1025 are able to provide routing under the second power rail 412.
It is to be appreciated that aspects of the present disclosure are not limited to the particular examples discussed. In general, aspects of the present disclosure provide routing between cells under a metal M1 power rail (e.g., power rail 412) using an interconnect formed from the second MOL metal layer (CA). For example, the interconnect may couple a first gate in one cell to a second gate in another cell by extending between and making contact with gate contacts on the first and second gates, in which the gate contacts may be formed from the first MOL metal layer (CB). In this respect, the interconnect acts as an electrical bridge between the gate contacts. Since the second MOL metal layer (CA) is formed at a lower level than metal layer M1, the interconnect is able to pass underneath a metal M1 power rail, and therefore does not require a break in the metal M1 power rail.
In certain aspects, one or more transistors may be tied off to provide electrical isolation between semiconductor devices (transistors). In these aspects, a transistor is tied off by coupling the source and gate of the transistor together. This permanently turns off the transistor, allowing the transistor to act as an electrical barrier between semiconductor devices on opposite sides of the transistor.
In this regard, FIG. 12 is a circuit diagram illustrating an example in which transistors M1-M4 are tied off to provide electrical isolation between nodes 1205, 1210, 1215 and 1220. In this example, the gates G1-G4 and sources S1-S4 of the transistors M1-M4 are all tied to node 1230, which is coupled to supply voltage Vdd. This effectively turns off transistors M1-M4, providing an electrical barrier between nodes 1205, 1210, 1215 and 1220, which may correspond to the drains D1-D4 of the transistors M1-M4. As a result, a device (e.g., transistor) coupled to one of the nodes 1205, 1210, 1215 and 1220 is electrically isolated from the other nodes 1205, 1210, 1215 and 1220. In the example in FIG. 12, the transistors M1-M4 are p-type field effect transistors (PFETs). However, it is to be appreciated that the transistors M1-M4 may be replaced with n-type field effect transistors (NFETs) with the gates and sources of the NFETs tied to ground.
FIGS. 13A and 13B show top views of an exemplary interconnect 1310 that provides routing underneath the second power rail 412 for implementing the exemplary circuit shown in FIG. 12. FIG. 13A shows the interconnect 1310 with the second power rail 412, and FIG. 13B shows the interconnect 1310 with the second power rail 412 removed to provide an unobstructed view of the interconnect 1310. FIG. 14 shows a side view of the interconnect 1310. The interconnect 1310 may be formed from the second MOL metal layer (CA), as discussed further below.
In this example, gate 420c and source/drain 430h may provide the gate G1 and source S1 of transistor M1, gate 420c and source/drain 430m may provide the gate G2 and source S2 of transistor M2, gate 420d and source/drain 430h may provide the gate G3 and source S3 of transistor M3, and gate 420d and source/drain 430m may provide the gate G4 and source S4 of transistor M4.
Thus, gate 420c may provide the gates G1 and G2 for both transistors M1 and M2, and gate 420d may provide the gates G3 and G4 for both transistors M3 and M4. Similarly, source/drain 430h may provide the sources S1 and S3 for both transistors M1 and M3, and source/drain 430m may provide the sources S2 and S4 for both transistors M2 and M4. Sources/drains 430g, 430l, 430i and 430n may provide the drains D1-D4, respectively, of the transistors M1-M4.
In this example, the interconnect 1310 extends between and makes contact with gate contacts 1320 and 1325 on gates 420c and 420d, respectively. As a result, the interconnect 1310 and gate contacts 1320 and 1325 couple the gates G1-G4 of the transistors M1-M4 together. In the example shown in FIG. 14, the interconnect 1310 makes contact with sidewalls of the gate contacts 1320 and 1325.
The interconnect 1310 is also coupled to source/drain 430h and source/drain 430m. In this example, the interconnect 1310 is coupled to source/drain 430h by source/drain contact 1340 (e.g., TS source/drain contact) disposed between the interconnect 1310 and source/drain 430h, and coupled to source/drain 430m by source/drain contact 1342 (e.g., TS source/drain contact) disposed between the interconnect 1310 and source/drain 430m. In the example shown in FIG. 14, the interconnect 1310 makes contact with the tops of the source/drain contacts 1340 and 1342 (e.g., TS source/drain contacts).
Thus, the interconnect 1310 couples the gates G1-G4 of the transistors M1-M4 together and couples the sources S1-S4 of the transistors M1-M4 together. As a result, the gates G1-G4 and sources S1-S4 of the transistors M1-M4 are all coupled together.
The interconnect 1310 is also coupled to the second power rail 412 by via 1350 disposed between the interconnect 1310 and the second power rail 412. In one example, the second power rail 412 provides supply voltage Vdd and the transistors M1-M4 are PFETs. As a result, in this example, the gates G1-G4 and sources S1-S4 of the transistors M1-M4 are tied to Vdd. This effectively turns off the transistors M1-M4, providing an electrical barrier between the first cell 450 and the second cell 455, and more particularly between sources/drains 430g, 430l, 430i and 430n.
Thus, the interconnect 1310 (which is formed from the second MOL metal layer (CA)) provides routing under the second power rail 412 for tying off the transistors M1-M4 to form an electrical barrier. It is to be appreciated that aspects of the present disclosure are not limited to the example shown in FIGS. 12-14. In general, aspects of the present disclosure may tie off one or more transistors by using an interconnect formed from the second MOL metal layer (CA), in which the interconnect is coupled to the gate and source of each transistor. The interconnect may also be coupled to a metal M1 power rail by a via between the interconnect and the power rail. For the example in which the one or more transistors are PFETs, the power rail may provide supply voltage Vdd. For the example in which the one or more transistors are NFETs, the power rail may provide supply voltage Vss.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure.
Thus, the disclosure is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
FURTHER SUMMARY
1. A semiconductor die, comprising: a power rail; a first gate; a second gate; a first gate contact electrically coupled to the first gate, wherein the first gate contact is formed from a first middle of line (MOL) metal layer; a second gate contact electrically coupled to the second gate, wherein the second gate contact is formed from the first MOL metal layer; and an interconnect formed from a second MOL metal layer, wherein the interconnect is electrically coupled to the first and second gate contacts, and at least a portion of the interconnect is underneath the power rail.
2. The semiconductor die of 1, wherein the interconnect makes contact with sidewalls of the first and second gate contacts.
3. The semiconductor die of 1, wherein the power rail is formed from an M1 metal layer, and the M1 metal layer is above the first and second MOL metal layers.
4. The semiconductor die of 1, further comprising: a source, wherein the interconnect is electrically coupled to the source; and a via electrically coupled between the interconnect and the power rail.
5. The semiconductor die of 4, further comprising a source contact disposed between the source and the interconnect.
6. The semiconductor die of 5, wherein the source contact comprises trench silicide.
7. The semiconductor die of 4, wherein the source and the first gate are part of a p-type field effect transistor (PFET), and the power rail has a supply voltage of Vdd.
8. The semiconductor die of 4, wherein the source and the first gate are part of an n-type field effect transistor (NFET), and the power rail is grounded.
9. The semiconductor die of 1, further comprising: a first source, wherein the first source and the first gate are part of a first transistor; a second source, wherein the second source and the second gate are part of a second transistor, and the interconnect is electrically coupled to the first source and the second source; and a via electrically coupled between the interconnect and the power rail.
10. The semiconductor die of 1, further comprising: a source/drain; a source/drain contact electrically coupled to the source/drain, wherein the source/drain contact is formed from the second MOL metal layer; and a via electrically coupled between the source/drain contact and the power rail.
11. A semiconductor die, comprising: a power rail; a gate; a source; a gate contact electrically coupled to the gate, wherein the gate contact is formed from a first middle of line (MOL) metal layer; and an interconnect formed from a second MOL metal layer, wherein the interconnect is electrically coupled to the gate contact and the source, and at least a portion of the interconnect is underneath the power rail.
12. The semiconductor die of 11, further comprising a source contact disposed between the source and the interconnect.
13. The semiconductor die of 12, wherein the source contact comprises trench silicide.
14. The semiconductor die of 11, wherein the interconnect makes contact with a sidewall of the gate contact.
15. The semiconductor die of 11, wherein the power rail is formed from an M1 metal layer, and the M1 metal layer is above the first and second MOL metal layers.
16. The semiconductor die of 11, further comprising a via electrically coupled between the interconnect and the power rail.
17.
A semiconductor die, comprising: a power rail; a first cell, the first cell comprising a first plurality of gates and a first plurality of sources/drains; a second cell, the second cell comprising a second plurality of gates and a second plurality of sources/drains; a first gate contact electrically coupled to one of the first plurality of gates, wherein the first gate contact is formed from a first middle of line (MOL) metal layer; a second gate contact electrically coupled to one of the second plurality of gates, wherein the second gate contact is formed from the first MOL metal layer; and an interconnect formed from a second MOL metal layer, wherein the interconnect is electrically coupled to the first and second gate contacts, and is routed under the power rail.
18. The semiconductor die of 17, wherein the interconnect makes contact with sidewalls of the first and second gate contacts.
19. The semiconductor die of 17, wherein the power rail is formed from an M1 metal layer, and the M1 metal layer is above the first and second MOL metal layers.
20. The semiconductor die of 17, wherein a boundary between the first and second cells lies underneath the power rail. |
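For illustration only (not part of the disclosure above): the layer relationship that makes the described under-rail routing possible can be sketched as a small, hypothetical design-rule check. The grid model, the Wire class, and the helper functions below are invented for this sketch; only the layer names (CB, CA, M1) and their bottom-up ordering follow the description.

```python
# Hypothetical sketch: a route on the lower MOL layer (CA) may cross
# under an M1 power rail, while an M1 route crossing the same rail
# would force a break in the rail. Layer ordering follows the
# description: CB and CA (MOL) sit below M1.
from dataclasses import dataclass

LAYER_LEVEL = {"CB": 0, "CA": 1, "M1": 2}  # lower number = lower level

@dataclass(frozen=True)
class Wire:
    layer: str
    x0: float
    y0: float
    x1: float
    y1: float

def overlap_in_plan(a: Wire, b: Wire) -> bool:
    """True if the two wires' bounding boxes overlap in plan (top) view."""
    return not (max(a.x0, a.x1) < min(b.x0, b.x1) or
                max(b.x0, b.x1) < min(a.x0, a.x1) or
                max(a.y0, a.y1) < min(b.y0, b.y1) or
                max(b.y0, b.y1) < min(a.y0, a.y1))

def requires_rail_break(route: Wire, rail: Wire) -> bool:
    """A route forces a break in the rail only if it occupies the same
    layer and crosses the rail in plan view; a route on a lower layer
    (e.g., MOL) passes underneath without conflict."""
    same_layer = LAYER_LEVEL[route.layer] == LAYER_LEVEL[rail.layer]
    return same_layer and overlap_in_plan(route, rail)

# A horizontal power rail on M1, analogous to power rail 412.
rail = Wire("M1", 0, 10, 100, 12)
# An M1 route crossing the rail conflicts; the same route on CA does not.
print(requires_rail_break(Wire("M1", 50, 0, 50, 20), rail))  # True
print(requires_rail_break(Wire("CA", 50, 0, 50, 20), rail))  # False
```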
According to one embodiment, a system is described. The system includes a direct memory access (DMA) controller and an input/output (I/O) device coupled to the DMA controller. The DMA controller is adaptable to operate in a normal mode and a descriptor mode. |
What is claimed is:
1. A system comprising: configurable system logic having programmable logic; a direct memory access (DMA) controller adapted to operate in a descriptor mode; a configurable system interconnect coupled between the configurable system logic and the DMA controller; an input/output (I/O) device coupled to the DMA controller by way of the configurable system interconnect, wherein the I/O device is implemented in the programmable logic and the DMA controller terminates a DMA transfer and clears a current transfer counter before a terminal count is reached upon receiving an early termination request signal from the I/O device, and wherein the DMA controller sends an acknowledge signal to the I/O device in response to receiving the early termination request signal; and a descriptor table storing commands to carry out a transfer, the descriptor table being updated with a reduced transfer count in response to receiving the early termination request signal from the I/O device when the DMA controller is operating in the descriptor mode.
2. The system of claim 1 wherein the DMA controller re-executes a DMA transfer with the I/O device upon receiving a retransmit request signal from the I/O device.
3. The system of claim 1 further comprising: a central processing unit (CPU) coupled to the system interconnect; and a memory device coupled to the system interconnect.
4. The system of claim 1 wherein the DMA controller comprises a first channel coupled to the I/O device to facilitate the transfer of data.
5. The system of claim 4 wherein the DMA controller further comprises a register, coupled to the channel, to store configuration data.
6. The system of claim 5 wherein the DMA controller further comprises error checking logic.
7. The system of claim 4 wherein the channel comprises control logic to control the transfer process within the first channel.
8. A system comprising: configurable system logic having programmable logic; a direct memory access (DMA) controller adapted to operate in a descriptor mode; a configurable system interconnect coupled between the configurable system logic and the DMA controller; a descriptor table storing commands to carry out a transfer of data with the DMA controller; and an input/output (I/O) device coupled to the DMA controller, wherein the I/O device is implemented in the programmable logic and the DMA controller re-executes a DMA transfer of the data associated with a current descriptor entry stored in the descriptor table from the beginning with the I/O device upon receiving a retransmit request signal from the I/O device, and wherein the DMA controller sends an acknowledge signal to the I/O device in response to receiving the retransmit request signal and the transmission of the data associated with the current descriptor entry stored in the descriptor table is restarted.
9. The system of claim 8 wherein the DMA controller comprises a first channel coupled to the I/O device to facilitate data transfers.
10. The system of claim 9 wherein the DMA controller further comprises a register, coupled to the channel, to store configuration data.
11. The system of claim 10 wherein the DMA controller further comprises error checking logic.
12. The system of claim 9 wherein the channel comprises control logic to control the transfer of data.
13.
A method comprising: configuring a first device in programmable logic of an integrated circuit; coupling the first device to a configurable system interconnect; transferring data between the first device and a second device under control of a direct memory access (DMA) controller by way of the configurable system interconnect; storing commands in a descriptor table to carry out a transfer of data between the first device and the second device; receiving a request signal at the DMA controller from the first device indicating a request by the first device to re-transmit the data between the first device and the second device; transmitting an acknowledge signal from the DMA controller to the first device in response to receiving the request by the first device to re-transmit the data; and re-transferring the data associated with a current descriptor entry stored in the descriptor table from the beginning between the first device and the second device.
14. The method of claim 13 further comprising reloading configuration registers within the DMA controller prior to transmitting the acknowledge signal to the first device.
15. A method comprising: configuring a first device in programmable logic of an integrated circuit; coupling the first device to a configurable system interconnect; storing commands in a descriptor table to carry out a transfer of data between the first device and a second device; transferring data between the first device and the second device under control of a direct memory access (DMA) controller by way of the configurable system interconnect; receiving a request signal at the DMA controller from the first device indicating a request by the first device to terminate the transfer of data between the first device and the second device; erasing appropriate DMA controller information in order to restart the transfer of data between the first device and the second device; transmitting an acknowledge signal from the DMA controller to the first device in response to receiving the request by the first device to terminate the transfer of data; terminating the transfer of data between the first device and the second device; and updating the descriptor table with a reduced transfer count in response to receiving the request by the first device to terminate the transfer of data when the direct memory access controller is operating in the descriptor mode.
16. The method of claim 15 wherein the erasing comprises clearing a counter within the DMA controller prior to transmitting the acknowledge signal to the first device.
17. The method of claim 15 further comprising: receiving a second request signal at the DMA controller from the first device indicating a request by the first device to re-transmit the data between the first device and the second device; transmitting a second acknowledge signal from the DMA controller to the first device; and re-transferring the data between the first device and the second device according to the first set of commands.
18. The method of claim 15 further comprising: receiving a second request signal at the DMA controller from the first device indicating a request by the first device to re-transmit the data between the first device and the second device; transmitting a second acknowledge signal from the DMA controller to the first device; and terminating the transfer of data between the first device and the second device. |
FIELD OF THE INVENTION
The present invention relates to integrated circuits, and more specifically, to executing direct memory access (DMA) transfers.
BACKGROUND
DMA controllers are used in computer systems for moving blocks of data from one location to another location. Consequently, the system processor is relieved of the need to generate a long sequence of addresses to accomplish the transfer of data. Typically, the data transferred is a large block of data that begins at a source address and is moved to a destination beginning at a destination address. The DMA controller is started by an event, in response to which the DMA controller generates the addresses of a source location and of a destination location between which data is transferred.
FIG. 4A is a block diagram of an exemplary DMA controller coupled to an input/output (I/O) device. The DMA controller is coupled to the I/O device via a request signal line and an acknowledge signal line. Typically, the I/O device requests the service of the DMA controller by asserting the request signal line. In response, the DMA controller asserts the acknowledge signal line when the DMA controller is able to service the device.
However, before the acknowledge is transmitted and data is transferred to (or from) the I/O device, the system processor, or other bus master, must typically set up the DMA transfer parameters and mode of operation by writing directly to the control registers of the controller. Therefore, the processor is required to update the registers prior to each subsequent transfer. Using the system processor to continuously set up the DMA controller is not efficient. As a result, a more efficient system and method for executing DMA transfers is desired.
SUMMARY
According to one embodiment, a system is described. The system includes a direct memory access (DMA) controller and an input/output (I/O) device coupled to the DMA controller. The DMA controller is adaptable to terminate a DMA transfer before a terminal count is reached. Further, the DMA controller is adaptable to re-transmit data.
BRIEF DESCRIPTION OF THE DRAWINGS
A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:
FIG. 1 is a block diagram of one embodiment of a system;
FIG. 2 is a block diagram of one embodiment of a direct memory access (DMA) controller;
FIG. 3 is a block diagram of one embodiment of a DMA channel;
FIG. 4A is a block diagram of an exemplary input/output (I/O) device coupled to a DMA controller;
FIG. 4B is a block diagram of one embodiment of an I/O device coupled to a DMA channel;
FIG. 5 is a flow diagram for one embodiment of the process carried out upon receiving a regular request at a DMA channel;
FIG. 6 is a flow diagram of one embodiment of the process carried out by a DMA controller upon receiving a retransmit request from an I/O device;
FIG. 7 is a flow diagram of one embodiment of the process carried out by a DMA controller upon receiving an early termination request from an I/O device;
FIG. 8 is a flow diagram for one embodiment of the process carried out by a DMA controller upon receiving a regular request signal while operating in the descriptor mode;
FIG. 9 is a flow diagram of one embodiment of the process carried out by a DMA controller upon receiving a retransmit request signal from an I/O device while operating in a descriptor mode; and
FIG.
10 is a flow diagram of one embodiment of the process carried out by a DMA controller upon receiving an early termination signal from an I/O device while operating in a descriptor mode.
DETAILED DESCRIPTION
In the following description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
FIG. 1 is a block diagram of one embodiment of a system 100. System 100 includes a configurable system interconnect (CSI) 102, a central processing unit (CPU) 105, a direct memory access (DMA) controller 110 and a Joint Test Action Group (JTAG) interface 120. In addition, system 100 includes a memory interface 130, a read only memory (ROM) 140, a random access memory (RAM) 150 and configurable system logic (CSL) 160. According to one embodiment, the components of system 100 are all included on the same semiconductor chip.
CSI 102 is a dedicated system bus for connecting CPU 105 to the other components within system 100. In addition, CSI 102 provides a synchronous interface for system 100 components. Further, CSI 102 includes address and data paths, a clock and control signals. According to one embodiment, CSI 102 is a 32-bit bus that supports multiple access modes. In such an embodiment, devices in system 100 may be configured to transmit 32-bit, 16-bit or 8-bit packets of data via CSI 102.
CPU 105 is coupled to CSI 102 and executes sequences of instructions received from other components within system 100. According to one embodiment, CPU 105 is an ARM7TDMI processor developed by ARM of Cambridge, England. Alternatively, other processors may be used.
DMA controller 110 is coupled to CSI 102 and controls direct memory accesses between memory devices within system 100 (e.g., RAM 150 and ROM 140), and input/output (I/O) devices, without using CPU 105. DMA transfers typically include a number of transactions from an I/O device to a memory location, or vice versa. JTAG interface 120 is adaptable to test the boundaries of system 100. According to one embodiment, JTAG interface 120 operates as a master device of CSI 102 and has access to all system resources in order to debug system 100. In a further embodiment, JTAG interface 120 converts serial bit streams into parallel registers whose contents are placed on the address, data and command busses in order to emulate CSI 102 transactions.
Memory interface 130 provides a connection between CSI 102 and one or more external memory devices (not shown). ROM 140 is also coupled to CSI 102. ROM 140 is used to initialize system 100 upon startup. Further, ROM 140 may be configured to instruct CPU 105 to fetch and execute code segments from external memory devices and other interfaces. One of ordinary skill in the art will appreciate that other non-volatile memory devices (e.g., flash memory) may be used instead of a ROM.
RAM 150 stores sequences of instructions that are executed by CPU 105. According to one embodiment, RAM 150, CPU 105 and CSL 160 are connected through CSI 102 such that devices residing in the matrices of CSL 160 may effectively access RAM 150 using DMA controller 110, in addition to access by CPU 105. CSL 160 comprises programmable logic that is coupled to CPU 105, RAM 150 and other system 100 components via CSI 102.
According to one embodiment, CSL 160 includes an array of programmable logic tiles that correspond to design units of the physical layout of CSL 160. CSL 160 may be used to implement various device components such as Universal Asynchronous Receiver Transmitters (UARTs), registers, memories, etc. In a further embodiment, CSL 160 may include one or more I/O devices.
According to one embodiment, DMA controller 110 is configured to terminate a DMA transfer before a terminal count has been reached. In addition, DMA controller 110 is configured to re-transmit data upon request by a connected I/O device. FIG. 2 is a block diagram of one embodiment of DMA controller 110. DMA controller 110 includes multiple channels 220, a configuration register matrix 240 and a Cyclical Redundancy Checker (CRC) 250.
According to one embodiment, four DMA channels 220 (e.g., channels 0-3) are included within DMA controller 110. However, one of ordinary skill in the art will appreciate that other quantities of channels 220 may be included within DMA controller 110. DMA channels 220 are system pathways used by devices within system 100 to transfer data directly to and from RAM 150, ROM 140 or external memory devices coupled to memory interface 130. In one embodiment, each channel 220 is coupled to an I/O device via CSI 102. As described above, the I/O devices may be programmable logic implementations within CSL 160. In yet a further embodiment, REQUEST, ACKNOWLEDGE, STATUS and CONTROL wires are used to control data transfers between channel 220 and I/O devices.
Configuration register matrix 240 stores DMA configuration data received from CPU 105 or any other bus master (e.g., JTAG interface 120). The data stored in register 240 includes information used for executing a DMA transfer. For example, register 240 may store the start and stop memory addresses of data to be transferred to or from an I/O device, as well as the size of the transfer.
CRC 250 uses an error checking technique to ensure the accuracy of transmitted data. According to one embodiment, transmitted messages are divided into predetermined lengths which, used as dividends, are divided by a fixed divisor. The remainder of the calculation is appended onto and sent with the message. The remainder is recalculated at the receiving end. If it does not match the transmitted remainder, an error is detected.
FIG. 3 is a block diagram of one embodiment of DMA channel 220. DMA channel 220 includes control logic 340 and descriptor logic 380. Control logic 340 controls the transfer process within a DMA channel 220. For example, control logic 340 coordinates the setup of register 240 prior to the execution of a transfer. Descriptor logic 380 is used to coordinate DMA transfers in a descriptor mode. According to one embodiment, descriptor logic 380 receives control information used to carry out a DMA transfer from a descriptor table in memory. The descriptor table may be stored in RAM 150 or in another memory device coupled to memory interface 130. The control information received at descriptor logic 380 is interpreted and programmed into configuration register matrix 240.
FIG. 4B is a block diagram of one embodiment of an I/O device 410 coupled to a DMA channel 220. As described above, I/O device 410 and DMA channel 220 are coupled via REQUEST, ACKNOWLEDGE, STATUS and CONTROL wires. According to one embodiment, the STATUS is paired with the REQUEST component to signal requests from I/O device 410 to DMA channel 220.
As a result, I/O device 410 is capable of making three different types of requests to channel 220. In addition to a regular request for a transfer of data, I/O device 410 may request a retransmission of data as well as an early termination of a transfer. Similarly, the CONTROL and ACKNOWLEDGE components are paired. Tables 1 and 2 below illustrate one embodiment of the signal values needed for the different request types and their respective acknowledge signals.

TABLE 1
REQUEST   STATUS   ACTION
0         0        No Request
1         0        Request (Regular)
0         1        Retransmit Request
1         1        Early Termination Request

TABLE 2
ACKNOWLEDGE   CONTROL   ACTION
0             0         No Acknowledge
1             0         Acknowledge (Regular)
0             1         Retransmit Acknowledge
1             1         Early Termination Acknowledge

According to one embodiment, the DMA controller may operate in either a normal mode or a descriptor mode.
Normal Mode
FIG. 5 is a flow diagram for one embodiment of the process carried out by DMA controller 110 upon receiving a regular request at a DMA channel 220 from an I/O device 410. At process block 510, CPU 105 (or other bus master) sets up the DMA transfer parameters. In particular, control logic 340 is used to program register 240 with a sequence of commands. The sequence of commands may include information such as the mode of operation, the start address of the first command and the last address of the last command. In addition, the commands may include a count that maintains an accounting of the number of transfers to be carried out.
At process block 520, a regular request (e.g., REQUEST=1 and STATUS=0) is received at a DMA channel 220 from an I/O device requesting service. At process block 530, DMA controller 110 requests access of CSI 102. At process block 540, DMA controller 110 transmits a regular acknowledge signal (e.g., ACKNOWLEDGE=1 and CONTROL=0) to I/O device 410. At process block 550, data is transmitted over CSI 102 to (or from) I/O device 410 according to the first command. At process block 560, it is determined whether register 240 contains more commands. If there are more commands stored in register 240, control is returned to process block 550. Otherwise, the process is completed.
If DMA controller 110 is operating in the normal mode and receives a retransmit request signal (e.g., REQUEST=0 and STATUS=1) during a transfer, DMA channel 220 re-executes the current transfer. A retransmit request signal allows I/O device 410 to signal DMA controller 110 to retransmit the current active transfer without any action by CPU 105 being needed. In one embodiment, a retransmit request signal may be received if I/O device 410 determines that an error in transmission has occurred.
FIG. 6 is a flow diagram of one embodiment of the process carried out by DMA controller 110 upon receiving a retransmit request signal from I/O device 410. At process block 610, DMA controller 110 is executing a transfer of data. At process block 620, channel 220 receives a retransmit request signal from I/O device 410. At process block 630, configuration register matrix 240 is reloaded by control logic 340. As a result, it is not necessary for CPU 105 to update control logic 340 with the next DMA transfer until the current DMA transfer has been completed. At process block 640, DMA controller 110 transmits a retransmit acknowledge signal (e.g., ACKNOWLEDGE=0 and CONTROL=1) to I/O device 410.
At process block 650, the transfer of data is again commenced from the beginning. Again, notice that the retransmission process does not require intervention by CPU 105.
If DMA controller 110 is operating in the normal mode and receives an early termination request signal (e.g., REQUEST=1 and STATUS=1) during a transfer, DMA channel 220 terminates the current transfer. An early termination request signal may be received in cases where the exact amount of incoming data is not known in advance and I/O device 410 determines that all of the required data has been received.
FIG. 7 is a flow diagram of one embodiment of the process carried out by DMA controller 110 upon receiving an early termination request signal from I/O device 410. At process block 710, DMA controller 110 is executing a transfer of data. At process block 720, channel 220 receives an early termination request signal from I/O device 410. At process block 730, DMA channel 220 clears the current transfer counter (not shown). According to one embodiment, the transfer counter resides within configuration register matrix 240. At process block 740, DMA controller 110 transmits an early termination acknowledge signal (e.g., ACKNOWLEDGE=1 and CONTROL=1) to I/O device 410. At process block 750, the transfer is terminated.
Descriptor Mode
The descriptor mode features a series of single transfers wherein DMA controller 110 automatically deduces the next transfer that is to be performed. The descriptor mode releases CPU 105 from the task of having to continuously monitor and manage DMA transfers. FIG. 8 is a flow diagram for one embodiment of the process carried out by DMA controller 110 upon receiving a regular request signal at a DMA channel 220 while operating in the descriptor mode.
At process block 810, CPU 105, or other bus master, constructs a descriptor table in memory (e.g., RAM 150). At process block 820, DMA controller 110 retrieves a command at the beginning of the descriptor table. At process block 830, the retrieved command is programmed into configuration register matrix 240 in conjunction with descriptor logic 380. At process block 840, a regular request signal is received at a DMA channel 220. At process block 850, DMA controller 110 requests access of CSI 102. At process block 860, DMA controller 110 transmits a regular acknowledge signal to I/O device 410.
At process block 870, data is transmitted to (or received from) I/O device 410 over CSI 102. At process block 880, it is determined whether register 240 contains more transfer commands. If there are more commands stored in register 240, control is returned to process block 870 where more data is transmitted. If there are no more commands stored in register 240, it is determined whether the descriptor table contains more commands (process block 890). If there are more commands stored in the descriptor table, control is returned to process block 820 where the next command is retrieved. Otherwise, the process is completed.
While operating in the descriptor mode, DMA controller 110 may receive a retransmit request signal. FIG. 9 is a flow diagram of one embodiment of the process carried out by DMA controller 110 upon receiving a retransmit request signal from I/O device 410. At process block 910, DMA controller 110 is executing a transfer of data. At process block 920, channel 220 receives a retransmit request signal from I/O device 410.
At process block 930, DMA controller 110 transmits a retransmit acknowledge signal (e.g., ACKNOWLEDGE=0 and CONTROL=1) to I/O device 410.
At process block 940, DMA controller 110 restarts the existing descriptor entry, since a copy of the current descriptor command is stored in configuration register matrix 240. At process block 950, the transfer of data is again commenced from the beginning.
While operating in the descriptor mode, DMA controller 110 may receive an early termination signal. FIG. 10 is a flow diagram of one embodiment of the process carried out by DMA controller 110 upon receiving an early termination request signal from I/O device 410. At process block 1010, DMA controller 110 is executing a transfer of data. At process block 1020, channel 220 receives an early termination request signal from I/O device 410.
At process block 1030, DMA controller 110 transmits an early termination acknowledge signal to I/O device 410. At process block 1040, DMA channel 220 terminates the transfer associated with the current descriptor entry. At process block 1050, DMA channel 220 updates the reduced transfer count into the descriptor table in memory. In one embodiment, the descriptor table includes a field entry that indicates the amount of data that is to be transferred. This field is updated after the transfer with the amount of data that has actually been transferred.
At process block 1060, DMA controller 110 retrieves the next entry in the descriptor table. At process block 1070, it is determined whether the next descriptor table entry is valid. If the next descriptor table entry is invalid, the transfer of data is terminated, process block 1080. However, if the next descriptor table entry is valid, it is determined whether I/O device 410 is signaling another request, process block 1090. If I/O device 410 is signaling another request, control is returned to process block 1010 where data is transferred. If I/O device 410 is not signaling another request, control remains at process block 1090, where it is again determined whether I/O device 410 is signaling another request.
Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims, which in themselves recite only those features regarded as the invention. |
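For illustration only (not part of the disclosure above): the REQUEST/STATUS and ACKNOWLEDGE/CONTROL pairings from Tables 1 and 2, together with the descriptor-mode bookkeeping of FIGS. 9 and 10, can be sketched as a small behavioral model. The bit-pair encodings come directly from the tables; the class, method, and field names (and the one-word-per-regular-request simplification) are invented for this sketch.

```python
# Hypothetical sketch of the request/acknowledge signaling and the
# descriptor-mode retransmit/early-termination handling described above.
from enum import Enum

class Request(Enum):
    # (REQUEST, STATUS) bit pairs from Table 1
    NONE = (0, 0)
    REGULAR = (1, 0)
    RETRANSMIT = (0, 1)
    EARLY_TERMINATION = (1, 1)

class Acknowledge(Enum):
    # (ACKNOWLEDGE, CONTROL) bit pairs from Table 2
    NONE = (0, 0)
    REGULAR = (1, 0)
    RETRANSMIT = (0, 1)
    EARLY_TERMINATION = (1, 1)

class DescriptorModeChannel:
    def __init__(self, descriptor_table):
        self.descriptor_table = descriptor_table  # list of dicts with 'count'
        self.entry = 0          # index of the current descriptor entry
        self.transferred = 0    # transactions completed for the current entry

    def handle(self, request: Request) -> Acknowledge:
        if request is Request.RETRANSMIT:
            # Restart the current entry from the beginning (FIG. 9).
            self.transferred = 0
            return Acknowledge.RETRANSMIT
        if request is Request.EARLY_TERMINATION:
            # Write the reduced count back into the descriptor table,
            # clear the counter, and advance to the next entry (FIG. 10).
            self.descriptor_table[self.entry]["count"] = self.transferred
            self.transferred = 0
            self.entry += 1
            return Acknowledge.EARLY_TERMINATION
        if request is Request.REGULAR:
            self.transferred += 1  # one transaction of the transfer
            return Acknowledge.REGULAR
        return Acknowledge.NONE

table = [{"count": 8}, {"count": 4}]
ch = DescriptorModeChannel(table)
for _ in range(3):
    ch.handle(Request.REGULAR)
ch.handle(Request.EARLY_TERMINATION)
print(table[0]["count"])  # 3 -- descriptor updated with the reduced count
```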
According to some aspects, this disclosure describes techniques for mirroring native media output of a source device via a different destination device. The source device may control the destination device to output the media via an output interface associated with the destination device. The source device may receive a media element of the native media and, in response, determine whether to output (mirror) the native media including the media element based on at least one parameter associated with the media element. According to other aspects, this disclosure describes techniques for preventing at least one media element from being mirrored via a destination device operating to mirror other native media. As one example, the source device may change how the source device encodes at least one region associated with the media element, in order to freeze media output associated with the at least one region. |
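For illustration only (not part of the disclosure above): the per-element mirroring decision summarized in the abstract can be sketched as a simple parameter check. All class, field, and value names below are invented for this sketch; the parameters might equally come from stored user preferences or a dynamic prompt, as the claims that follow describe.

```python
# Hypothetical sketch: a media element is forwarded to the destination
# device only if it passes the source device's stored parameters.
from dataclasses import dataclass, field

@dataclass
class MediaElement:
    kind: str        # e.g., "video", "text_message_popup"
    private: bool    # e.g., a notification the user may not want mirrored

@dataclass
class MirroringPolicy:
    # Parameters could reflect previously received user input stored in
    # memory, or dynamically determined user input.
    blocked_kinds: set = field(default_factory=lambda: {"text_message_popup"})

    def should_mirror(self, element: MediaElement) -> bool:
        return not element.private and element.kind not in self.blocked_kinds

policy = MirroringPolicy()
print(policy.should_mirror(MediaElement("video", private=False)))               # True
print(policy.should_mirror(MediaElement("text_message_popup", private=True)))  # False
```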
CLAIMS:
1. A method for mirroring native media of a source device via a destination device, the method comprising: receiving a first media element of the native media of the source device; causing the destination device to mirror the native media comprising the first media element via an output interface associated with the destination device; receiving a second media element of the native media of the source device; and determining whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element.
2. The method of claim 1, wherein the at least one parameter is based on user input.
3. The method of claim 2, wherein the at least one parameter is based on previously received user input stored in memory.
4. The method of claim 2, wherein the at least one parameter is based on dynamically determined user input received in response to receiving the second media element.
5. The method of claim 4, further comprising: in response to receiving the second media element from at least one software application, providing a user with a user interface that enables the user to indicate whether to control the destination device to mirror the second output element.
6. The method of claim 1, further comprising: causing the destination device to not mirror the second media element until it is determined whether to mirror the second media element based on the at least one parameter.
7. The method of claim 6, further comprising: causing the destination device to mirror other native media including the first media element while causing the destination device to not mirror the second media element.
8. The method of claim 7, wherein causing the destination device to mirror other native media including the first media element while causing the destination device to not mirror the second media element comprises: causing the destination device to mirror the first media element, including reading at least some previously output media stored in memory.
9. The method of claim 7, wherein causing the destination device to mirror other native media including the first media element while causing the destination device to not mirror the second media element comprises: sending, to the destination device, the media that includes the first output element, and does not include the second output element.
10. The method of claim 7, wherein the media comprises video media, and wherein causing the destination device to mirror other native media including the first media element while causing the destination device to not mirror the second media element comprises: identifying a region of the media that corresponds to the second media element; and encoding the native media to generate mirrored media to be mirrored by the destination device, wherein encoding the native media comprises: assigning one of a plurality of prediction modes to a plurality of blocks of video data outside of the identified region; and assigning a skip prediction mode to each of the plurality of blocks within the identified region of the media; and outputting the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device.
11.
The method of claim 10, wherein assigning one of a plurality of prediction modes to a plurality of blocks of video data outside of the identified region comprises assigning one or more prediction modes selected from the group consisting of: a split mode; the skip mode; a direct mode; an intra mode; and an inter mode.
12. The method of claim 1, wherein receiving the first media element and receiving the second media element comprises receiving the first and second media elements from one or more software applications executing on at least one computing device.
13. The method of claim 12, wherein the at least one software application comprises one or more software applications selected from the group consisting of: a video playback application; a photo viewing application; an audio playback application; a telephone application; a text messaging application; an electronic mail (email) application; and a game application.
14. A source device configured to mirror native media of the source device via a destination device, comprising: a mirroring module configured to: receive a first media element of the native media of the source device; cause a destination device to mirror the native media comprising the first media element via an output interface associated with the destination device; receive a second media element of the native media; and determine whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element.
15. The device of claim 14, wherein the at least one parameter is based on user input.
16. The device of claim 15, wherein the at least one parameter is based on previously received user input stored in memory.
17. The device of claim 15, wherein the at least one parameter is based on dynamically determined user input received in response to receiving the second media element.
18. The device of claim 17, wherein the mirroring module is further configured to: in response to receiving the second media element from at least one software application, provide a user with a user interface that enables the user to indicate whether to control the destination device to mirror the second output element.
19. The device of claim 14, wherein the mirroring module is further configured to: cause the destination device to not mirror the second media element until it is determined whether to mirror the second media element based on the at least one parameter.
20. The device of claim 19, wherein the mirroring module is further configured to: cause the destination device to mirror other native media including the first media element while causing the destination device to not mirror the second media element.
21. The device of claim 20, wherein the mirroring module is further configured to: cause the destination device to mirror the first media element, based on reading at least some previously output media stored in memory.
22. The device of claim 19, wherein the mirroring module is further configured to: send, to the destination device, the media that includes the first output element, and does not include the second output element.
23.
The device of claim 22, wherein the media comprises video media, and wherein the mirroring module is further configured to: identify a region of the media that corresponds to the second media element; and encode the native media to generate mirrored media to be mirrored by the destination device, including: assigning one of a plurality of prediction modes to a plurality of blocks of video data outside of the identified region; and assigning a skip prediction mode to each of the plurality of blocks within the identified region of the media; and outputting the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device.
24. The device of claim 23, wherein the mirroring module is further configured to: assign one of a plurality of prediction modes to each of a plurality of blocks of video data outside of the identified region selected from the group consisting of: a split mode; the skip mode; a direct mode; an intra mode; and an inter mode.
25. The device of claim 14, wherein receiving the first media element and receiving the second media element comprises receiving the first and second media elements from one or more software applications executing on at least one computing device, and wherein the at least one software application comprises one or more software applications selected from the group consisting of: a video playback application; a photo viewing application; an audio playback application; a telephone application; a text messaging application; an electronic mail (email) application; and a game application.
26. A computer-readable storage medium that stores instructions configured to cause a computing device to: receive a first media element of native media of a source device; cause a destination device to mirror the native media comprising the first media element via an output interface associated with the destination device; receive a second media element of the native media; and determine whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element.
27. The computer-readable storage medium of claim 26, wherein the at least one parameter is based on user input.
28. The computer-readable storage medium of claim 27, wherein the at least one parameter is based on previously received user input stored in memory.
29. The computer-readable storage medium of claim 27, wherein the at least one parameter is based on dynamically determined user input received in response to receiving the second media element.
30. The computer-readable storage medium of claim 29, wherein the instructions further cause the computing device to: in response to receiving the second media element from at least one software application, provide a user with a user interface that enables the user to indicate whether to control the destination device to mirror the second output element.
31. The computer-readable storage medium of claim 26, wherein the instructions further cause the computing device to: cause the destination device to not mirror the second media element until it is determined whether to mirror the second media element based on the at least one parameter.
32.
The computer-readable storage medium of claim 31, wherein the instructions further cause the computing device to: cause the destination device to mirror other native media including the first media element while causing the destination device to not mirror the second media element.
33. The computer-readable storage medium of claim 32, wherein the instructions further cause the computing device to: cause the destination device to mirror the first media element, based on reading at least some previously output media stored in memory.
34. The computer-readable storage medium of claim 32, wherein the instructions further cause the computing device to: send, to the destination device, the media that includes the first output element, and does not include the second output element.
35. The computer-readable storage medium of claim 34, wherein the instructions further cause the computing device to: identify a region of the media that corresponds to the second media element; and encode the native media to generate mirrored media to be mirrored by the destination device, including: assigning one of a plurality of prediction modes to a plurality of blocks of video data outside of the identified region; and assigning a skip prediction mode to each of the plurality of blocks within the identified region of the media; and outputting the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device.
36. The computer-readable storage medium of claim 35, wherein the instructions further cause the computing device to assign the one of the plurality of prediction modes selected from the group consisting of: a split mode; the skip mode; a direct mode; an intra mode; and an inter mode.
37. The computer-readable storage medium of claim 26, wherein the instructions further cause the computing device to receive the first media element and the second media element from one or more software applications executing on at least one computing device, and wherein the at least one software application comprises one or more software applications selected from the group consisting of: a video playback application; a photo viewing application; an audio playback application; a telephone application; a text messaging application; an electronic mail (email) application; and a game application.
38. A source device configured to mirror native media of the source device via a destination device, comprising: means for receiving a first media element of the native media of the source device; means for causing a destination device to mirror the native media comprising the first media element via an output interface associated with the destination device; means for receiving a second media element of the native media; and means for determining whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element.
39. The device of claim 38, further comprising: means for determining the at least one parameter based on user input.
40. The device of claim 39, further comprising: means for determining the at least one parameter based on previously received user input stored in memory.
41. The device of claim 39, further comprising: means for dynamically determining user input received in response to receiving the second media element.
42.
The device of claim 41, further comprising: means for, in response to receiving the second media element from at least one software application, providing a user with a user interface that enables the user to indicate whether to control the destination device to mirror the second output element.
43. The device of claim 41, further comprising: means for causing the destination device to not mirror the second media element until it is determined whether to mirror the second media element based on the at least one parameter.
44. A method of encoding a frame of video data, comprising: identifying at least one region of a frame of video data to freeze; assigning one of a plurality of prediction modes to each block of a plurality of blocks of video data in the video frame that reside outside of the identified at least one region; assigning a skip prediction mode to each of the plurality of blocks within the at least one region; and outputting the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device.
45. The method of claim 44, wherein assigning one of a plurality of prediction modes to a plurality of blocks of video data outside of the identified region comprises assigning one or more prediction modes selected from the group consisting of: a split mode; the skip mode; a direct mode; an intra mode; and an inter mode.
46. The method of claim 44, wherein identifying the at least one region of a frame of video data to freeze comprises identifying the at least one region to prevent a received media element of media native to a source device from being mirrored via the destination device along with other media native to the source device.
47. The method of claim 44, wherein assigning a skip prediction mode to each of the plurality of blocks within the at least one region comprises assigning the skip prediction mode to each of the blocks regardless of whether there are any differences between a respective block and a predictive block that may be used by a decoder to reconstruct the respective block.
48. A device configured to encode a frame of video data, comprising: a mirroring module configured to: identify at least one region of a frame of video data to freeze; and a video encoder configured to: assign one of a plurality of prediction modes to each block of a plurality of blocks of video data outside of the identified at least one region; assign a skip prediction mode to each of the plurality of blocks within the at least one region; and output the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device.
49. The device of claim 48, wherein the video encoder is configured to assign one of a plurality of prediction modes to a plurality of blocks of video data outside of the identified region selected from the group consisting of: a split mode; the skip mode; a direct mode; an intra mode; and an inter mode.
50. The device of claim 48, wherein the mirroring module is further configured to identify the at least one region of a frame of video data to freeze based on identifying the at least one region to prevent a received media element of media native to a source device from being mirrored via the destination device along with other media native to the source device.
51.
The device of claim 48, wherein the video encoder is further configured to assign the skip prediction mode to each of the blocks within the at least one region regardless of whether there are any differences between a respective block and a predictive block that may be used by a decoder to reconstruct the respective block. 52. A computer-readable storage medium that stores instructions configured to cause a computing device to: identify at least one region of a frame of video data to freeze; assign one of a plurality of prediction modes to each block of a plurality of blocks of video data outside of the identified at least one region; assign a skip prediction mode to each of the plurality of blocks within the at least one region; and output the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device. 53. The computer-readable storage medium of claim 52, wherein the instructions further cause the computing device to: assign, to each of the plurality of blocks of video data outside of the identified region, one of a plurality of prediction modes selected from the group consisting of: a split mode; the skip mode; a direct mode; an intra mode; and an inter mode. 54. The computer-readable storage medium of claim 52, wherein the instructions further cause the computing device to: identify the at least one region of a frame of video data to freeze to prevent a received media element of media native to a source device from being mirrored via the destination device along with other media native to the source device. 55. The computer-readable storage medium of claim 52, wherein the instructions further cause the computing device to: assign the skip prediction mode to each of the blocks within the identified at least one region regardless of whether there are any differences between a respective block and a predictive block that may be used by a decoder to reconstruct the respective block. 56. A device configured to encode a frame of video data, comprising: means for identifying at least one region of a frame of video data to freeze; means for assigning one of a plurality of prediction modes to each block of a plurality of blocks of video data outside of the identified at least one region; means for assigning a skip prediction mode to each of the plurality of blocks within the at least one region; and means for outputting the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device. |
SELECTIVE MIRRORING OF MEDIA OUTPUT TECHNICAL FIELD [0001] This disclosure relates generally to devices configured to provide media output (e.g., audio and/or visual output) to one or more users. BACKGROUND [0002] In recent years, computing devices have steadily evolved to provide users with more and more functionality. For example, a smart phone, tablet computer, laptop computer, or other computing device may be configured to execute a plurality of different software applications that each provides functionality to a user. In some examples, such a computing device may include at least one native output interface, such as a display and/or audio speakers of the computing device. [0003] In some examples, such a computing device may include one or more communications applications (e.g., voice call, text-messaging, electronic mail, video telephony, social networking) that cause the computing device to output one or more media elements comprising one or more messages to a user via a native output interface (e.g., a display, speakers) of the computing device. For example, a text-messaging application may receive a text message, and generate a media element (e.g., a pop-up notification and/or associated audible sound) to be output via a native output interface of the computing device to alert/inform a user of the incoming text message. [0004] As another example, such a computing device may include one or more media applications configured to output one or more media elements such as sound (e.g., an mp3 player, digital radio station, cloud music network) and/or video (e.g., digital video player, streaming video application) via a native output interface of the computing device. According to other examples, a computing device may include one or more gaming, photo viewing, calendar, alarm, or other applications configured to cause the computing device to output native media comprising one or more media elements, via a native output interface of the computing device. SUMMARY [0005] This disclosure is directed to techniques for mirroring native media output, such as audio and/or visual media output, of a source device via a destination device. According to these techniques, the source device may operate in a mirroring mode such that one or more media elements substantially similar to media elements of the native media output of the source device are output via the destination device. Also, according to these techniques, the source device may receive another media element of the media. In response to receiving the media element, the source device may compare at least one characteristic of the received media element to at least one parameter, and determine whether to cause the destination device to mirror the at least one received media element based on the comparison. [0006] In some examples, the source device may cause the destination device to output other native media of the source device, while preventing the received media element from being output via the destination device, until a mirroring status of the media element is resolved (e.g., based on the at least one parameter). [0007] According to one example, a method for mirroring native media of a source device via a destination device is described herein. The method includes receiving a first media element of the native media of the source device. 
The method further includes causing the destination device to mirror the native media comprising the first media element via an output interface associated with the destination device. The method further includes receiving a second media element of the native media of the source device. The method further includes determining whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element. [0008] According to another example, a source device configured to mirror native media of the source device via a destination device is described herein. The source device includes a mirroring module configured to: receive a first media element of the native media of the source device, cause a destination device to mirror the native media comprising the first media element via an output interface associated with the destination device, receive a second media element of the native media, and determine whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element. [0009] According to another example, a computer-readable storage medium that stores instructions is described herein. The instructions are configured to cause a computing device to receive a first media element of native media of a source device. The instructions are further configured to cause the computing device to cause a destination device to mirror the native media comprising the first media element via an output interface associated with the destination device. The instructions are further configured to cause the computing device to receive a second media element of the native media. The instructions are further configured to cause the computing device to determine whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element. [0010] According to another example, a source device configured to mirror native media of the source device via a destination device is described herein. The device includes means for receiving a first media element of the native media of the source device. The device further includes means for causing a destination device to mirror the native media comprising the first media element via an output interface associated with the destination device. The device further includes means for receiving a second media element of the native media. The device further includes means for determining whether to cause the destination device to mirror the native media including the second media element based on at least one parameter associated with the second media element. [0011] According to another example, a method of encoding a frame of video data is described herein. The method includes identifying at least one region of a frame of video data to freeze. The method further includes assigning one of a plurality of prediction modes to each block of a plurality of blocks of video data in the video frame that reside outside of the identified at least one region. The method further includes assigning a skip prediction mode to each of the plurality of blocks within the at least one region. 
The method further includes outputting the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device. [0012] According to another example, a device configured to encode a frame of video data is described herein. The device includes a mirroring module configured to identify at least one region of a frame of video data to freeze. The device further includes a video encoder configured to: assign one of a plurality of prediction modes to each block of a plurality of blocks of video data outside of the identified at least one region; assign a skip prediction mode to each of the plurality of blocks within the at least one region; and output the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device. [0013] According to another example, a computer-readable storage medium that stores instructions is described herein. The instructions are configured to cause a computing device to identify at least one region of a frame of video data to freeze. The instructions are further configured to cause the computing device to assign one of a plurality of prediction modes to each block of a plurality of blocks of video data outside of the identified at least one region. The instructions are further configured to cause the computing device to assign a skip prediction mode to each of the plurality of blocks within the at least one region. The instructions are further configured to cause the computing device to output the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device. [0014] According to another example, a device configured to encode a frame of video data is described herein. The device includes means for identifying at least one region of a frame of video data to freeze. The device further includes means for assigning one of a plurality of prediction modes to each block of a plurality of blocks of video data outside of the identified at least one region. The device further includes means for assigning a skip prediction mode to each of the plurality of blocks within the at least one region. The device further includes means for outputting the frame of video data to a destination device to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device. [0015] The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims. BRIEF DESCRIPTION OF DRAWINGS [0016] FIG. 1 is a conceptual diagram that depicts one example of a source device configured to mirror native media output of the source device via at least one destination device consistent with one or more aspects of this disclosure. [0017] FIG. 2 is a conceptual diagram that illustrates screen shot examples of media output by a source device and a destination device consistent with one or more aspects of this disclosure. [0018] FIG. 3 is a block diagram that illustrates one example of a source device configured to mirror native media output of the source device via at least one destination device consistent with one or more aspects of this disclosure. 
[0019] FIG. 4 is a flow diagram that illustrates one example of a method of controlling a destination device to mirror at least one media element of native media output consistent with one or more aspects of this disclosure. [0020] FIG. 5 is a conceptual diagram that illustrates one example of a user interface that may be provided to a user consistent with one or more aspects of this disclosure. [0021] FIG. 6 is a conceptual diagram that illustrates another example of a user interface that may be provided to a user consistent with one or more aspects of this disclosure. [0022] FIG. 7 is a flow diagram that depicts one example of a method of controlling a destination device to mirror native media output consistent with one or more aspects of this disclosure. [0023] FIG. 8 is a block diagram that illustrates one example of a source device that includes a video encoder and a destination device that includes a video decoder consistent with one or more aspects of this disclosure. [0024] FIGS. 9 and 10 are conceptual diagrams that illustrate one example of a technique for visually freezing at least one identified region in a frame of media output by a destination device consistent with one or more aspects of this disclosure. [0025] FIG. 11 is a flow diagram that illustrates one example of a method of visually freezing at least one identified region in a frame of media output by a destination device consistent with one or more aspects of this disclosure. [0026] FIG. 12 is a flow diagram that illustrates one example of a method of mirroring media via a destination device consistent with one or more aspects of this disclosure. DETAILED DESCRIPTION [0027] FIG. 1 is a conceptual diagram that illustrates one example of a source computing device 110 configured to control at least one destination device 114 to output native media of the source device 110 consistent with one or more aspects of this disclosure. Source device 110 may comprise any device that includes at least one native output interface configured to output media, such as audio and/or visual media. For example, source device 110 depicted in FIG. 1 comprises a smart phone or tablet computer that includes a first output interface that comprises a display screen 112, and a second output interface that comprises one or more speakers (not depicted in FIG. 1) native to the source device. In other examples, source device 110 may comprise a laptop computer, desktop computer, gaming controller, or any other device that includes or is configured to control at least one native output interface. [0028] As shown in the example of FIG. 1, screen 112 of source device 110 may be configured to output native graphical output 130. Native graphical output 130 may comprise, for example, one or more still or video images. In the example of FIG. 1, speakers of source device 110 are configured to provide native audio output 131. Native audio output 131 may comprise one or more audible sounds. Such audio output 131 may be associated with (e.g., configured to be output in conjunction with) graphical output 130. For example, where native graphical output 130 comprises video or image content, audio output 131 may comprise audible sounds associated with the video or image. [0029] As shown in FIG. 1, source device 110 is communicatively coupled to at least one destination device(s) 114 (hereinafter "destination device 114"). 
Destination device 114 may be used by source device 110 to operate one or more output interfaces 120, 124 associated with the destination device 114, to mirror native media output of the source device 110. The term "mirror" as described herein may refer to controlling, by source device 110, at least one destination device 114 to output at least a portion of native media, such as native audio, video, and/or image media output of the source device 110. [0030] Source device 110 may control destination device 114 to mirror media output of source device 110 via one or more output interfaces 120, 124 associated with destination device 114. For example, as shown in FIG. 1, source device 110 may control external display interface 120 to output mirrored graphical output 132, which may comprise substantially similar graphical output to native graphical output 130 of source device 110. Mirrored graphical output 132 may comprise the same one or more images as native graphical output 130, but such images may be processed, sized, formatted, and/or otherwise modified for presentation via external display interface 120. [0031] As also shown in FIG. 1, source device 110 may control external audio interface 124 to output mirrored audio output 136, which may comprise substantially similar audio output to native audio output 131 depicted in FIG. 1. Mirrored audio output 136 may comprise the same audible sounds as native audio output 131, but processed, formatted, and/or otherwise modified to be output via external audio interface 124. In some examples, source device 110 may control destination device 114 to output media comprising native graphical output 130 as well as native audio output 131 of source device 110. [0032] In some examples, such as shown in FIG. 1, destination device 114 may comprise a device that is coupled to one or more of an external display interface 120, such as a television display, computer monitor, or the like, that includes a screen 122 configured to present images to a viewer, and/or an external audio interface 124, such as speakers or a stereo system. For example, destination device 114 may comprise a set-top box (e.g., GOOGLE TV, APPLE TV, ROKU), gaming console (e.g., NINTENDO WII, SONY PLAYSTATION 3, MICROSOFT XBOX), digital video disc (DVD) player, BLURAY player, or other separate unit coupled to an external display device and/or an external audio device as shown in the example of FIG. 1. In other examples not depicted in FIG. 1, one or more of destination device 114, external display interface 120, and/or external audio interface 124 may be combined as a single device. For example, a display device, such as a television or computer monitor, may include both a screen 122 and one or more speakers. In other examples, the same, or different, destination devices 114 may control one or more audio or video output interfaces that are separate from the one or more destination devices 114. According to still other examples, the functionality performed by destination device 114 as described herein may be performed by one or more of an external display device and/or an external audio device. For example, a television display and/or stereo receiver may include hardware and/or software configured to communicatively couple the television and/or stereo with source device 110, such that native media of source device 110 may be mirrored via the television display or stereo receiver. 
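The "processed, sized, formatted, and/or otherwise modified" step mentioned above is described only at a high level in this disclosure. The sketch below shows one common way such resizing could be done, aspect-preserving scaling with letterboxing; the function name and the scaling policy are illustrative assumptions, not details taken from the disclosure.

```python
# Illustrative sketch only: the disclosure does not specify how native
# graphical output is scaled for an external display, so this shows one
# common policy (aspect-preserving scale, centered with letterboxing).
# All names here are hypothetical.

def fit_to_external_display(src_w: int, src_h: int,
                            dst_w: int, dst_h: int):
    """Return (x, y, w, h): where the scaled image lands on the display."""
    scale = min(dst_w / src_w, dst_h / src_h)   # largest scale that fits
    out_w, out_h = int(src_w * scale), int(src_h * scale)
    # Center the scaled image; the remaining area is letterbox/pillarbox.
    return (dst_w - out_w) // 2, (dst_h - out_h) // 2, out_w, out_h

# Example: a 720x1280 phone screen mirrored on a 1920x1080 television.
print(fit_to_external_display(720, 1280, 1920, 1080))  # (656, 0, 607, 1080)
```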
[0033] In some examples, source device 110 may be communicatively coupled to destination device 114 to mirror media output of source device 110 via a wired connection, such as one or more of a high definition multimedia interface (HDMI), digital video interface (DVI), or other wired connection. In other examples, source device 110 may be coupled to destination device 114 to mirror media output wirelessly. For example, destination device 114 may include one or more communications components configured to enable the source device to send native media output to the destination device via one or more wireless communications protocols, such as WI-FI, BLUETOOTH, a cellular network, or any other form of wireless communications. [0034] As described above, source device 110 may be configurable to mirror media output of source device 110 via destination device 114. For example, source device 110 may provide a user with a user-selectable option to activate or deactivate a mirroring mode where native audio media, visual media, or both are output via destination device 114. In a standard operating mode, source device 110 may receive media in the form of a plurality of media elements, and control one or more native output interfaces 112, 131 of source device 110 to output media comprising the plurality of media elements. In such a mirroring mode, source device 110 may control destination device 114 to output substantially the same media as the native graphical output 130 and/or native audio output 131 that source device 110 would have output using one or more native output interfaces 112, 131 in the standard operating mode. For example, in such a mirroring mode, as source device 110 receives media elements from one or more sources (e.g., from one or more software applications executing on the source device), source device 110 may process the received media elements for output via the destination device, and send each of the processed media elements to the destination device 114 to be output. When operating to mirror media output, source device 110 may or may not itself continue to output native media 130, 131 via its native output interfaces. [0035] This disclosure is directed to techniques for controlling the mirroring of media output of source device 110 via one or more destination device(s) 114. According to these techniques, when operated in a mirroring mode, source device 110 may selectively prevent one or more received media elements from being mirrored via destination device 114. For example, source device 110 may receive a first media element of native media, and control destination device 114 to mirror the native media including the first media element. Source device 110 may also receive a second media element of the native media. In response to receiving the second media element, source device 110 may compare at least one characteristic of the second media element to at least one parameter. Based on the comparison, source device 110 may determine whether or not to mirror the second media element via the one or more destination devices 114. In some examples, source device 110 may control destination device 114 to continue mirroring media without mirroring the second media element, while source device 110 determines whether or not to mirror the second media element. 
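A minimal sketch of this decision logic, in Python, may help fix ideas. The element fields, the parameter store, and the policy values below are hypothetical placeholders; the disclosure describes comparing a media element's characteristics to at least one parameter abstractly, not as a concrete API.

```python
# Hypothetical sketch of the selective-mirroring decision described above.

from dataclasses import dataclass
from enum import Enum

class Policy(Enum):
    ALLOW = "allow"        # mirror the element via the destination device
    DENY = "deny"          # never mirror the element
    ASK_USER = "ask_user"  # hold mirroring until the user decides

@dataclass
class MediaElement:
    source_app: str            # e.g., "video_player", "text_messaging"
    sender: str | None = None  # e.g., "John Smith" for a message

# Predetermined parameters, e.g., as populated from a settings interface.
APP_POLICY = {"video_player": Policy.ALLOW, "text_messaging": Policy.ASK_USER}
SENDER_POLICY = {"John Smith": Policy.DENY}

def resolve_mirroring(element: MediaElement) -> Policy:
    """Compare characteristics of a received element to stored parameters."""
    # Sender-specific parameters take precedence over app-level parameters.
    if element.sender in SENDER_POLICY:
        return SENDER_POLICY[element.sender]
    return APP_POLICY.get(element.source_app, Policy.ASK_USER)

# A text message from "John Smith" is never mirrored; playback video is.
assert resolve_mirroring(MediaElement("text_messaging", "John Smith")) is Policy.DENY
assert resolve_mirroring(MediaElement("video_player")) is Policy.ALLOW
```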
For example, source device 110 may cause destination device 114 to continue outputting video and/or audio of the first media element of the native media, but not video and/or audio associated with the second media element. [0036] As one non-limiting example, the first media element may comprise output of a digital video player application executing on source device 110. In response to receiving the first media element, source device 110 may control destination device 114 to output video and/or audio associated with the first media element. Source device 110 may then receive a second media element. According to this example, the second media element may comprise a message from a communications application executing on source device 110. For example, the second media element may comprise an indication that an email, text-message, voice call, voicemail, or other message has been received from one or more messaging, telephony, social networking, or other applications. According to the techniques of this disclosure, upon receipt of the second media element, source device 110 may compare at least one characteristic of the second media element to at least one parameter, to determine whether to control destination device 114 to present the second media element based on the comparison. According to the techniques described herein, a user or other viewer may continue viewing video media comprising the first media element (the played back video) as well as audio media associated with the first media element (an audio track associated with the video), but not mirror audio or video media associated with the received second media element, which may be private and/or personal to the user, or otherwise undesirable to share via destination device 114. Instead, the second media element may only be displayed via a native output interface of source device 110. As such, another user with access to (e.g., to view and/or listen to) one or more output interfaces of destination device 114 may not be given access to the second media element. In this manner, the user of source device 110 may be able to share some native media of source device 110 (e.g., the video and associated audio playback described above) on a selective basis, while maintaining confidentiality of other media or otherwise preventing mirroring of media that the user does not desire to share with others, such as the incoming message. [0037] In some examples, source device 110 may determine whether to mirror a received media element (a "mirroring status" of the media element, as described herein) based on at least one parameter that comprises a predetermined parameter stored in memory. For example, source device 110 may operate to provide a user-interface that a user may use to pre-select one or more options regarding the mirroring of media via one or more destination devices 114 as described herein. For example, source device 110 may provide a user interface that allows a user to select application-specific options regarding the mirroring of media. As one specific example, where the second media element is received from a communications application, source device 110 may provide a user interface that enables a user to indicate that all media received from the communications application should be mirrored, that no media received from the communications application should be mirrored, and/or that media associated with a message from one or more specific users should or should not be mirrored. 
In other examples, such a user interface may also enable a user to indicate that source device 110 should request and receive user input to authorize media received from one or more applications or users. [0038] Based on such one or more predetermined parameters stored in memory, source device 110 may determine whether or not to mirror the received second media element via the destination device 114. For example, if such a predetermined parameter indicates that all messages from user "John Smith" are to be prevented from being mirrored, source device 110 may not output a received second media element that comprises a text message or voice call from user "John Smith" via the destination device. [0039] According to other examples consistent with the techniques described herein, source device 110 may determine whether or not to mirror, via destination device 114, a second media element based on one or more dynamically determined parameters. For example, in response to receiving the second media element, source device 110 may provide a user-interface that allows a user to select whether or not to mirror the received second media element. In some such examples, source device 110 may not operate to mirror the second media element via destination device 114, unless a mirroring status of the second media element is resolved. For example, source device 110 may not mirror the second media element, unless source device 110 determines based on the at least one parameter that the second media element should be mirrored. In some examples, while source device 110 determines a mirroring status of the second media element, source device 110 may continue to mirror other native media output of source device 110 via destination device 114, without mirroring the second media element. For example, source device 110 may continue to mirror audio output associated with the first media element via destination device 114, but not mirror audio output associated with the second media element. As another example, source device 110 may cause destination device 114 to mirror media comprising native graphical media of source device 110 with at least one region of the mirrored graphical media that corresponds to the second media element frozen or removed from the mirrored media output. [0040] The techniques described herein may be advantageous, because when source device 110 is operated in a mirroring mode to mirror native media of the source device 110 via destination device 114, source device 110 may provide a user with an ability to control whether one or more received media elements of native media are mirrored via the destination device 114. Such control may be desirable to a user, because a user may desire to mirror some, but not all, native media of source device 110, in order to maintain confidentiality and/or avoid mirroring at least some irrelevant elements of media native to the source device 110. The techniques described herein may further advantageously enable a user to control mirroring of newly received native media elements, while still allowing the user and/or another viewer to continue to enjoy other native media of the source device 110 that the user does desire to be mirrored via destination device 114. In this manner, a user experience when using source device 110 to mirror media via destination device 114 may be improved. [0041] FIG. 
2 is a conceptual diagram that depicts a screen shot of one example of native graphical output 230 of a source device 210, as well as a screen shot of one example of mirrored graphical output 232 of an external display interface 220 associated with a destination device 214 consistent with one or more aspects of this disclosure. As shown in FIG. 2, source device 210 is operated to output native media output (graphical output 230 in the example of FIG. 2) that comprises a plurality of media elements 235, 236, 237, and 239. Source device 210 may also operate to output audio media via one or more native speakers associated with the source device 210 (included in source device 210 or coupled to source device 210). The plurality of media elements 235, 236, 237, and 239 may be received from any source, such as one or more software applications executing on source device 210. [0042] For example, as shown in FIG. 2, first media element 236 is an image of a dog. First media element 236 may comprise a video or still image received from a media display application, such as, for example, a video player or photo viewing application. As also shown in FIG. 2, media element 235 comprises a digital clock, and media element 239 comprises an alarm. Media elements 235 and 239 may be received from a clock and/or alarm application executing on source device 210. As also shown in FIG. 2, media element 237 comprises a current date. Media element 237 may be received from a calendar application executing on source device 210. Although not depicted in FIG. 2, in some examples, source device 210 may also output audio media associated with one or more of media elements 235, 236, 237, and 239. [0043] As also shown in FIG. 2, source device 210 may operate to control destination device 214 to mirror media output (e.g., native graphical output 230 in the example of FIG. 2, as well as native audio output, not depicted in FIG. 2) of source device 210. For example, as shown in FIG. 2, display interface 220 is operated to present mirrored graphical output 232 comprising substantially similar media to native graphical output 230. For example, as shown in FIG. 2, media elements 235, 236, 237, and 239 may be output substantially as they appear in native graphical output 230, albeit processed (e.g., reformatted, resized, reshaped, rearranged, reoriented, and/or otherwise modified) for presentation via external display interface 220. [0044] As also depicted in the example of FIG. 2, while source device 210 is operated to present native graphical output 230 comprising media elements 235, 236, 237, and 239 (and destination device 214 is operated to mirror media elements 235, 236, 237, and 239), source device 210 may receive second media element 238. In the example of FIG. 2, second media element 238 comprises a graphical notification that source device 210 has received a text message from a user named "John Smith"; however, the second media element may comprise any form of media, including any visible and/or audible media. According to the techniques described herein, before source device 210 controls destination device 214 to output second media element 238, source device 210 may resolve a mirroring status of the second media element 238. For example, source device 210 may compare one or more characteristics of second media element 238 to at least one parameter, and determine whether or not to mirror the second media element 238 via destination device 214 based on the comparison. 
In some examples, while the mirroring status of the second media element 238 is being resolved, source device 210 may output the second media element 238 via one or more native audio or display interfaces native to the source device 210, as shown in the example of FIG. 2. [0045] In some examples, source device 210 may resolve a mirroring status of second media element 238 based on at least one predetermined parameter stored in memory. For example, source device 210 may provide a user-interface that a user may use to preselect one or more options regarding the mirroring of, e.g., particular media elements, types of media elements, or categories of media elements. Source device 210 may store an indication of one or more such selected options in memory as a parameter for later use to determine whether to mirror media elements received by source device 210 and thereby resolve a display status of one or more received media elements. [0046] As one example, such a user interface may enable a user to select one or more options based on a type of source a media element is received from. For example, such a user interface may enable a user to indicate that all media elements received from communications applications (e.g., electronic mail, text messaging, video telephony, social networking applications), or from all video applications, are or are not to be mirrored via a destination device. As another example, source device 210 may provide a user interface that allows a user to select application-specific options regarding the mirroring of media. For example, such a user interface may enable a user to indicate that media received from one or more specific applications, such as a specific text-messaging or a specific video application, should or should not be mirrored via the destination device. According to still other examples, such a user interface may enable a user to select user-specific options regarding the mirroring of media. For example, such a user interface may enable a user to indicate that all media associated with a particular user, or media from one or more specific types of applications or a specific application associated with a particular user, should or should not be mirrored via the destination device. As one specific example, such a user interface may enable a user to indicate that all media, or media received from one or more types of applications or specific applications, should or should not be mirrored via the destination device. In some examples, source device 210 may resolve a display status of a received media element, such as second media element 238 depicted in FIG. 2, by comparing at least one characteristic of the received media element to at least one predetermined parameter, based on user input received via one or more user interfaces as described above. [0047] In other examples, source device 210 may resolve a display status of a received media element based on one or more dynamically determined parameters. For example, in response to receiving second media element 238, source device 210 may present a user with a user interface that enables the user to select whether or not to mirror the received second media element 238 via destination device 214. For example, source device 210 may, via one or more native media interfaces of source device 210 (e.g., screen 212 depicted in FIG. 
2 and/or speakers associated with source device 210), output second media element 238 along with an associated user interface that allows a user to select whether to 1) mirror the second media element 238 via destination device 214, or 2) prevent the second media element 238 from being mirrored (while continuing to output the second media element 238 via one or more native media interfaces of source device 210). As one specific example, such a user interface may state "Allow mirroring of text message from John Smith on external display?", and may enable a user to select yes or no. In some examples, source device 210 may resolve a display status based on one or more dynamically determined parameters that are defined based on user input received in response to such a user interface. In some examples, source device 210 may also resolve a mirroring status of a received media element based on user interface input that dismisses the received second media element 238 entirely. According to these examples, a user may interact with source device 210 to cause the second media element 238 to be removed from native media output of source device 210. [0048] As described above, in some examples, source device 210 may not control destination device 214 to mirror second media element 238, until a mirroring status of second media element 238 is resolved by source device 210. For example, source device 210 may cause destination device 214 not to mirror second media element 238, unless source device 210 determines that second media element 238 should be mirrored based on comparison to at least one predetermined or dynamically determined parameter, as described above. [0049] In some examples, until a mirroring status of a received media element is resolved, source device 210 may cause destination device 214 to stop actively mirroring at least a portion of the native media output of source device 210. For example, source device 210 may cause destination device 214 to mirror audio media associated with first media element 236 (e.g., an audio track associated with the image of first media element 236), but not mirror audio media associated with second media element 238. [0050] Source device 210 may cause destination device 214 to mirror video media associated with first media element 236, but not mirror video media associated with second media element 238. For example, source device 210 may prevent second media element 238 from being output by destination device 214 by causing at least a portion of graphical output of destination device 214 to be visually "frozen," e.g., by repeating a given frame of video, image or graphical data, or by stopping destination device 214 from outputting audio and/or visual media altogether, until a mirroring status of second media element 238 is resolved. [0051] For example, source device 210 may freeze an entire frame of mirrored graphical output 232 by continually sending destination device 214 data representing a previously presented frame that does not include second media element 238, or by sending one or more commands to destination device 214 that instruct destination device 214 to modify processing of received media to output a "frozen" frame. According to another example, source device 210 may stop sending audio and/or visual media to destination device 214, and thereby stop destination device 214 from outputting audio and/or visual media altogether. 
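A minimal sketch of the first option in the preceding paragraph, freezing mirrored output by continually resending a previously presented frame while a mirroring status is unresolved, is shown below; the class, callback, and field names are hypothetical.

```python
# Hypothetical sketch: repeat the last "clean" frame (one that predates the
# unresolved media element) instead of the current frame while holding.

class MirrorSender:
    def __init__(self, send_fn):
        self._send = send_fn           # transmits one frame to the destination
        self._last_clean_frame = None  # most recent frame safe to mirror
        self.holding = False           # True while a mirroring status is unresolved

    def push_frame(self, frame) -> None:
        if self.holding:
            # Freeze: resend the previously presented clean frame, if any.
            if self._last_clean_frame is not None:
                self._send(self._last_clean_frame)
            return
        self._last_clean_frame = frame
        self._send(frame)
```

Later in this disclosure (with reference to FIGS. 8-12), a finer-grained variant of the same freezing effect is achieved inside the video encoder: blocks within the region to be frozen are assigned a skip prediction mode, so the decoder reuses co-located pixels from the previous frame, while blocks outside the region are encoded normally. The sketch below illustrates that per-block assignment; the block size and mode names are illustrative, not a specific codec's API.

```python
# Hypothetical sketch of per-block mode assignment for region freezing.

SKIP = "skip"
BLOCK = 16  # assumed square block size, in pixels

def rects_overlap(ax, ay, aw, ah, bx, by, bw, bh) -> bool:
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def assign_modes(frame_w, frame_h, freeze_region, pick_mode):
    """Return {(block_x, block_y): mode} for every block in the frame.

    freeze_region is an (x, y, w, h) rectangle; pick_mode is the encoder's
    normal mode decision, applied only to blocks outside that region.
    """
    modes = {}
    for by in range(0, frame_h, BLOCK):
        for bx in range(0, frame_w, BLOCK):
            if rects_overlap(bx, by, BLOCK, BLOCK, *freeze_region):
                modes[(bx, by)] = SKIP  # decoder repeats the previous frame here
            else:
                modes[(bx, by)] = pick_mode(bx, by)
    return modes
```

For a block assigned skip mode, the decoder outputs the co-located block of the previously decoded frame unchanged, which is what visually freezes the identified region regardless of what the corresponding native pixels now contain.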
[0052] According to some aspects of this disclosure, source device 210 may continue to control destination device 214 to mirror some native media output of source device 210, but not other native media output of source device 210. Referring to the example of FIG. 2, source device 210 may continue to cause destination device 214 to mirror native graphical media output comprising first media element 236 (and/or media elements 235, 237 and 239), and not mirror native graphical media output comprising second media element 238 by removing and/or freezing at least one region of the mirrored graphical media associated with the second media element 238. In some examples, source device 210 may also mirror native audio media output of source device 210 associated with first media element 236 (and/or media elements 235, 237 and 239), and not mirror native audio media output associated with second media element 238. [0053] Referring to FIG. 2, first media element 236 comprises an image (picture as opposed to a video sequence) of a dog. As shown in the example of FIG. 2, a left bottom-most portion of first media element 236 (i.e., the front legs of the image of the dog, not shown in FIG. 2) is not displayed in order to prevent second media element 238 from being displayed. [0054] In some examples, until a mirroring status of second media element 238 is resolved, source device 210 may not cause region 233 to be removed from mirrored graphical output 232 as shown in the example of FIG. 2. Instead, source device 210 may cause region 233 of mirrored graphical output 232 to include an image from at least one previous frame that did not include second media element 238. In this manner, source device 210 may visually "freeze" previous image data that corresponds to region 233, which may minimize disruption of the display of other media, such as media elements 235, 236, 237, and 239 of mirrored graphical output 232 depicted in FIG. 2. Accordingly, if a user used source device 210 to mirror a still photo of a dog via destination device 214 as shown in the example of FIG. 2, a bottom left-most portion of first media element 236 (which corresponds to the front legs of the dog image, not visible in the example of FIG. 2) may still be displayed and viewable by the user. [0055] In some examples, to cause region 233 associated with second media element 238 to be visually "frozen" based on previously displayed image data as described above, source device 210 may send one or more commands to destination device 214 that cause destination device 214 to modify its operation to repeatedly read previously displayed image data to output images associated with region 233, while using received image data to mirror other native media of source device 210. [0056] In some examples, such a command may comprise an in-band signal, communicated within a frequency band of a communications link between source device 210 and destination device 214 that is used to mirror media. For example, where the communications link comprises a WI-FI communications link, such a command may be communicated within the frequency band of the WI-FI signal. [0057] According to other examples, such a command may comprise an out-of-band signal. For example, where the communications link comprises a WI-FI communications link, such a command may be communicated using a different frequency band than defined for the WI-FI signal, and/or using a different wireless protocol. 
For example, such a command may be communicated using one or more low power communications protocols, such as a BLUETOOTH communications protocol. [0058] For example, source device 210 may communicate one or more out-of-band signals to destination device 214, to cause destination device 214 to modify its operation to process images associated with region 233 differently than portions of mirrored graphical data displayed outside of region 233. [0059] According to other examples, source device 210 may modify how native media, such as native graphical output 230, is processed by source device 210 to generate mirrored graphical output 232 such that second media element 238 is not included in mirrored graphical output 232 when output via destination device 214. For example, as described in further detail below with reference to FIGS. 8-11, source device 210 may modify how graphical data is encoded by source device 210, such that when the graphical data is decoded by destination device 214, the mirrored graphical media is presented with region 233 visually frozen, as described above. [0060] According to the techniques described herein, when operated in a mirroring mode, source device 210 may provide a user with an ability to keep certain media, such as second media element 238, confidential from other viewers of an output interface associated with destination device 214. In addition, according to the techniques described herein, source device 210 may prevent received media native to source device 210 from being mirrored via destination device 214, while minimizing interruption of the user's ability to enjoy other native media of source device 210 mirrored via destination device 214. [0061] FIG. 3 is a block diagram that illustrates one example of a source device 310 configured to mirror native media output via at least one destination device (e.g., destination device 114, 214 depicted in FIGS. 1 and 2, respectively) consistent with one or more aspects of this disclosure. As shown in FIG. 3, device 310 includes a sensor module 340, a user input module 345, a memory 341, a communications module 342, a graphics processing module 343, a processor 344, a native display screen 312 and a power source 349. Processor 344 may include one or more components of device 310 configured to execute instructions. For example, processor 344 may comprise one or more central processing units (CPUs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic components, or other components configured to execute instructions that control operation of device 310. [0062] Memory 341 comprises one or more components of device 310 configured to store data and/or instructions, such as one or more magnetic storage components, optical storage components, random access memory (RAM) storage components, FLASH memory storage components, or any other component configured to store instructions and/or data. For example, memory 341 may store one or more software programs that operate device 310 to output media. Memory 341 may also or instead be configured to store data that represents media that may be output by device 310. For example, memory 341 may store one or more digital representations of audible sounds, still images, video, or other media that may be output by device 310. [0063] Communications module 342 may comprise one or more hardware and/or software components of device 310 configured to enable device 310 to communicate with one or more other devices. 
For example, communications module 342 may comprise one or more components that enable one or more of wireless communication (e.g., WI-FI, BLUETOOTH, and cellular networks such as 3G and 4G cellular networks) or wired communication (e.g., ETHERNET). In some examples, communications module 342 may be used by device 310 to control another device, such as destination device 114 depicted in FIG. 1, to mirror native media output of device 310. For example, source device 310 may be coupled to the destination device via communications module 342 such that source device 310 may communicate data and/or instructions that cause the destination device to mirror data according to the techniques described herein. [0064] Sensor module 340 may comprise one or more hardware and/or software components of device 310 that operate one or more sensors (not depicted in FIG. 3) of device 310. For example, sensor module 340 may comprise one or more hardware and/or software components of device 310 configured to operate one or more microphones, image sensors (e.g., camera sensors), accelerometer sensors, gyroscope sensors, single- or multi-touch display sensors configured to detect user gestures performed at screen 312 or another portion of device 310, or any other type of sensor that device 310 may include. [0065] Graphics processing module 343 may comprise one or more hardware and/or software components of device 310 configured to process graphics instructions (e.g., higher level graphics instructions such as, for example, one or more instructions according to the OPENGL standard) and generate image data that may be used to present images via screen 312. For example, graphics processing module 343 may comprise a graphics processing unit (GPU) and/or other components configured to process graphics. In some examples, graphics module 343 may process graphics based on graphics instructions received from one or more software applications executing on processor 344. In other examples, graphics module 343 may process graphics based on graphics instructions received from memory 341 and/or communications module 342. [0066] Power source 349 may comprise one or more components of device 310 configured to store energy, such as electrical energy, that may be used to power one or more components of device 310. For example, power source 349 may comprise one or more batteries internal to device 310 that may be used to power device 310 when device 310 is not connected to an external power source, such as a wall outlet. In some examples, power source 349 may be a limited power source. In some examples, it may be desirable to decrease an amount of energy stored by power source 349 that is used to operate device 310, such that device 310 may be used for longer periods of time between being charged (e.g., by being connected to an external power source). [0067] As also shown in FIG. 3, device 310 includes a user input module 345. User input module 345 may comprise one or more hardware and/or software components of device 310 configured to receive and/or process one or more indications of sensed information from sensor module 340, and communicate one or more indications of the sensed information to one or more other components of device 310. For example, user input module 345 may receive one or more indications of user interaction with screen 312, and determine that a user has performed one or more single and/or multi-touch gestures at screen 312 (and/or another surface of device 310) based on the indication of detected user interaction. 
According to this example, user input module 345 may send an indication of the determined single or multi-touch gesture to one or more other components of device 310. For example, user input module 345 may send such an indication to one or more software applications executing on processor 344. The one or more software applications may control device 310 in response to such a received indication. [0068] As depicted in FIG. 3, device 310 may also include a display processing module 347 and an audio processing module 350. Display processing module 347 may be configured to receive, from one or more sources such as memory 341, communications module 342, graphics processing module 343, and/or one or more software applications executing on processor 344, image data that represents one or more media elements that represent one or more images for display. Display processing module 347 may be configured to process such received data for presentation via one or more displays. For example, as depicted in FIG. 3, display processing module 347 may process image data to control a native display 312 of source device 310. [0069] Display processing module 347 may also direct and/or process received image data to control one or more displays external to source device 310. For example, display processing module 347 may, via external display interface 348, control at least one display external to source device 310 (e.g., external display interface 120 communicatively coupled to destination device 114 depicted in FIG. 1). As one example, display processing module 347 may process image data to be sent to one or more external displays in a format catered to operation of the one or more external displays, in order to mirror graphical output media of source device 310 in a visually pleasing manner. For example, display processing module 347 may process the image data to mirror native graphical output by modifying resolution or contrast, and by resizing, reshaping, reorienting, rearranging, or otherwise modifying the image data to be output by one or more displays external to source device 310 in a visually pleasing manner. [0070] As also depicted in FIG. 3, source device 310 includes an audio processing module 350. Audio processing module 350 may process received audio data to control one or more speakers external to source device 310. For example, audio processing module 350 may, via external audio interface 354, control at least one speaker (e.g., external audio interface 124 depicted in FIG. 1) communicatively coupled to and controlled by another device (e.g., destination device 114 depicted in FIG. 1). Audio processing module 350 may process audio data to be sent to one or more external audio interfaces to mirror native audio media output of source device 310. For example, audio processing module 350 may process the audio data by modifying a bit rate, volume, audio encoding technique, or other characteristic of the audio data for output via one or more speakers external to source device 310. [0071] As also depicted in FIG. 3, source device 310 includes a mirroring module 360 consistent with one or more aspects of this disclosure. Generally speaking, mirroring module 360 may comprise any combination of hardware and/or software components that are configured to control whether one or more media elements received by display processing module 347 and/or audio processing module 350 are mirrored via a destination device consistent with one or more aspects of this disclosure. 
For example, in response to receiving a new media element from one or more sources (e.g., memory 341, communications module 342, graphics processing module 343, and/or one or more software applications executing on processor 344), mirroring module 360 may compare at least one characteristic of the received media element to at least one parameter. Based on the comparison, mirroring module 360 may resolve a display status for the media element by determining whether to mirror the media element. In some examples, mirroring module 360 may prevent such a received media element from being mirrored, until the display status for the media element is resolved (e.g., until comparison with the at least one parameter is completed and mirroring is either confirmed or denied). [0072] In the example of FIG. 3, display processing module 347, audio processing module 350, and mirroring module 360 are depicted as separate functional hardware and/or software components for purposes of describing the techniques of this disclosure. In other examples, functionality described herein with respect to display processing module 347, audio processing module 350, and/or mirroring module 360 may be performed by a unitary hardware and/or software component. In other examples, any one of display processing module 347, audio processing module 350, and/or mirroring module 360 may comprise multiple hardware and/or software components that, in combination, operate according to the functionality attributed to display processing module 347, audio processing module 350, and/or mirroring module 360 as described herein. In addition, even if one or more of display processing module 347, audio processing module 350, and/or mirroring module 360 described herein are implemented in software, such software modules as described herein may comprise hardware in that they functionally operate by executing on a hardware processing component. [0073] According to one or more aspects of this disclosure, when source device 310 is operated in a mirroring mode, mirroring module 360 may selectively prevent or allow one or more received media elements from being mirrored via a destination device (e.g., destination device 114 depicted in FIG. 1). For example, as shown in FIG. 3, display processing module 347, audio processing module 350, and/or mirroring module 360 may receive a first media element 336. Display processing module 347 and/or audio processing module 350 may cause native media of the source device that is mirrored via a destination device (e.g., destination device 114 depicted in FIG. 1) to include the first media element 336. [0074] As shown in FIG. 3, display processing module 347, audio processing module 350, and/or mirroring module 360 may also receive a second media element 338. Mirroring module 360 may compare at least one characteristic of the second media element to at least one parameter. Based on the comparison, mirroring module 360 may determine whether or not to mirror the second media element 338 via at least one destination device. [0075] In some examples, mirroring module 360 may determine whether to mirror second media element 338 based on at least one predetermined parameter stored in memory. 
For example, mirroring module 360 may, via display processing module 347, graphics processing module 343, and/or one or more software applications executing on processor 344, provide a user interface that a user may use to pre-select one or more options regarding the mirroring of media via one or more destination devices as described herein. For example, mirroring module 360 may provide a user interface that allows a user to select application-specific options regarding the mirroring of media, which may be stored in memory as one or more predetermined parameters for later use in determining a mirroring status of a received media element. Based on such one or more predetermined parameters stored in memory, mirroring module 360 may confirm whether or not to mirror received second media element 338, which may thereby resolve a display status of second media element 338.

[0076] According to other examples, mirroring module 360 may determine whether or not to output the second media element 338 based on one or more dynamically determined parameters. For example, in response to receiving the second media element 338, mirroring module 360 may, via display processing module 347, graphics processing module 343, and/or one or more software applications executing on processor 344, provide a user interface that allows a user to select whether or not to mirror the second media element 338. In some examples, such a user interface may provide a user with an option to simply allow or deny mirroring of the received second media element 338. In other examples, such a user interface may, also or instead, enable a user to select one or more predetermined parameters that may be applied to further media elements received by mirroring module 360 (and/or display processing module 347 and audio processing module 350).

[0077] In some examples, mirroring module 360 may cause second media element 338 to not be mirrored via a destination device 314, unless mirroring module 360 determines that the second media element 338 is to be output via the destination device 314 based on the at least one predetermined or dynamically determined parameter as described above (i.e., resolves a mirroring status of second media element 338). For example, where the parameter is a predetermined parameter, mirroring module 360 may cause the second media element 338 to not be output via the destination device unless comparison of at least one characteristic of the second media element 338 to the at least one predetermined parameter confirms that the second media element 338 should be mirrored. As another example, where the parameter is a dynamically determined parameter, mirroring module 360 may not cause the second media element 338 to be output via the destination device until mirroring module 360 provides a user of source device 310 with a user interface that enables the user to confirm whether to mirror the second media element 338, and source device 310 receives such confirmation from the user that the second media element should be mirrored.

[0078] In some examples, while mirroring module 360 resolves a mirroring status of the second media element 338, mirroring module 360 may modify the mirroring of media by the destination device such that the second media element 338 is not output by the destination device.
For example, mirroring module 360 may cause display processing module 347 and/or audio processing module 350 to stop mirroring any media to the destination device (e.g., via external display interface 348, external audio interface 354). According to other examples, mirroring module 360 may cause all mirrored media to be frozen until mirroring module 360 resolves a mirroring status of the second media element 338.

[0079] According to other examples, mirroring module 360 may cause the destination device to continue to mirror other native media of source device 310, such as graphical media including first media element 336 as shown in FIG. 3, without mirroring second media element 338 until the mirroring status of second media element 338 is resolved. For example, mirroring module 360 may identify at least one region in the mirrored media that corresponds to the second media element 338. According to these examples, mirroring module 360 may cause such an identified region in the mirrored media to be removed, or frozen, which may prevent the second media element 338 from being mirrored.

[0080] In some examples, mirroring module 360 may also cause the destination device to continue to mirror native audio media of source device 310, including native audio media associated with first media element 336, until the mirroring status of second media element 338 is resolved, without mirroring any native audio media associated with second media element 338 with the mirrored media. For example, audio processing module 350 may not process any audio media associated with second media element 338 and/or send any audio media associated with second media element 338 to the destination device until the mirroring status of the second media element 338 is resolved.

[0081] In some examples, mirroring module 360 may cause at least one region of mirrored graphical media associated with second media element 338 to be visually frozen in media mirrored via the destination device. For example, mirroring module 360 may send one or more in-band or out-of-band signals that indicate, to the destination device, that the destination device should modify its operation to process received image data in order to freeze one or more regions of mirrored graphical media that correspond to second media element 338.

[0082] According to other examples, mirroring module 360 may cause source device 310 itself to process image data before the image data is sent to the destination device, such that the identified region in the mirrored media is visually frozen. According to one such example, where display processing module 347 includes a video encoding module, and the destination device includes a corresponding video decoding module, mirroring module 360 may assign, to image data associated with the identified region, one or more prediction modes that cause the video decoding module to use image data from at least one previously displayed frame to present the at least one image, so that the region appears frozen once decoded. As one specific example, mirroring module 360 may cause a video encoding module of display processing module 347 to assign a skip prediction mode to each block of video data within the identified region, such that each of the blocks uses image data of at least one previously decoded frame unchanged to output images in the identified region, as described in further detail below with reference to FIGS. 8-12.
[0083] FIG. 4 is a flow chart that illustrates one example of a method that may be performed by mirroring module 360 of a source device 310 consistent with one or more aspects of this disclosure. The method depicted in FIG. 4 is described as performed by mirroring module 360 of source device 310; however, the method of FIG. 4 may be performed by any component of source device 310, and/or any other device.

[0084] As shown in FIG. 4, when source device 310 is operated in a mirroring mode such that native media output of source device 310 is mirrored via a destination device, mirroring module 360 may receive a media element of the native media output (401). The media element may comprise, for example, audio and/or visual data received from a software application executing on a processor of source device 310, and/or any other source.

[0085] As also depicted in FIG. 4, mirroring module 360 may determine whether or not the received media element is known to mirroring module 360 (402). For example, mirroring module 360 may determine whether there are any predetermined parameters stored in memory that may be used by mirroring module 360 to confirm whether or not the received media element should be mirrored via the destination device 114.

[0086] As also shown in FIG. 4, if mirroring module 360 determines that there are predetermined parameters stored in memory that may be used to confirm whether the received media element should be mirrored via the destination device, mirroring module 360 may compare at least one characteristic of the received media element to the one or more predetermined parameters (405). Based on the comparison, mirroring module 360 may determine whether or not to cause the received media element to be mirrored via the destination device (406).

[0087] As also shown in FIG. 4, if mirroring module 360 determines that the received media element is not known (e.g., there are no predetermined parameters stored in memory that may be used to confirm whether the received media element should be mirrored), mirroring module 360 may request, from a user, confirmation of whether or not to mirror the received media element (403). For example, in response to receiving the second media element 338, mirroring module 360 may, via display processing module 347, graphics processing module 343, and/or one or more software applications executing on processor 344, provide a user interface that allows a user to specifically indicate whether or not to mirror the received media element. As also shown in FIG. 4, mirroring module 360 may determine authorization to mirror the received media element based on user input received in response to the requested confirmation (404).
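The flow of FIG. 4 lends itself to a compact sketch. The following C fragment is illustrative only; the lookup and prompt helpers are hypothetical stand-ins for the stored predetermined parameters (402) and the confirmation user interface (403)-(404):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { int app_id; bool allow; } mirror_param_t;

/* Stub: stands in for a lookup of predetermined parameters in memory;
 * returning NULL means the element is "not known" (step 402). */
static const mirror_param_t *lookup_predetermined_params(int app_id)
{
    (void)app_id;
    return NULL;
}

/* Stub: stands in for the confirmation user interface (steps 403-404). */
static bool prompt_user_for_confirmation(int app_id)
{
    (void)app_id;
    return true;
}

/* Mirrors the flow of FIG. 4: known elements are decided by comparison
 * with stored parameters (405-406); unknown elements fall back to a
 * user confirmation request (403-404). */
static bool resolve_mirroring(int app_id)
{
    const mirror_param_t *p = lookup_predetermined_params(app_id);
    if (p != NULL)
        return p->allow;
    return prompt_user_for_confirmation(app_id);
}
```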
[0088] As described above, in some examples, mirroring module 360 may determine a mirroring status for a received media element based on one or more predetermined parameters stored in memory. As also described above, mirroring module 360 may be operable to provide a user with a user interface that enables the user to specify such predetermined parameters. FIG. 5 is a conceptual diagram that illustrates one example of such a user interface 500 that enables a user to identify one or more such predetermined parameters consistent with one or more aspects of this disclosure.

[0089] According to the example of FIG. 5, a user interface 500 is depicted that may enable a user to specify one or more mirroring parameters associated with an application, "Application 1," executable by a processor of source device 310. As shown in the example of FIG. 5, user interface 500 enables a user to select one or more application-specific parameters. For example, button 501 depicted in FIG. 5 may enable a user to authorize all media output from Application 1 to be mirrored via one or more destination devices. As another example, as also depicted in FIG. 5, button 502 may enable a user to instead indicate that mirroring module 360 should request authorization from the user (and/or other users) before mirroring any particular media element received from Application 1.

[0090] As also shown in FIG. 5, user interface 500 may, in some examples, also enable a user to identify one or more user-specific settings associated with Application 1. For example, button 504 depicted in FIG. 5 may bring the user to another user interface (not depicted) that may enable a user to select one or more mirroring options specific to one or more other users of Application 1. For example, where Application 1 comprises a messaging application, such as a text messaging, telephony, social networking, or other messaging application, button 504 depicted in FIG. 5 may enable a user of source device 310 to identify one or more other users or groups of users, and indicate whether media elements from Application 1 should all be mirrored, all be prevented from being mirrored, or only be mirrored if user confirmation is requested and received.

[0091] As also depicted in FIG. 5, where source device 310 is configured to mirror native media output of source device 310 via more than one destination device, user interface 500 may enable a user to apply one or more settings indicated by the user via user interface 500 to one or more particular destination devices. For example, as shown in FIG. 5, buttons 504 may enable a user to select one or more of a destination device coupled to a display and/or an associated audio output device located in the user's bedroom, a display and/or an associated audio output device located in the user's living room, and/or a display and/or an associated audio output device located in the user's car.

[0092] As described above, in some examples, mirroring module 360 may determine a mirroring status for a received media element based on one or more dynamically determined parameters. For example, in response to receiving a media element, mirroring module 360 may provide a user interface that enables a user to confirm, or deny, mirroring of the received media element via one or more destination devices. In one example, such a user interface may present one or more user-selectable buttons that enable the user to confirm or deny mirroring of the specific media element. In addition, if the source device 310 is configured to mirror media to more than one destination device, and the user confirms mirroring of the media element, such a user interface may also enable the user to identify one or more destination devices via which to mirror the media element.

[0093] In other examples, such a user interface provided in response to such a received media element may also enable a user to select one or more parameters that may later be used by mirroring module 360 to determine a mirroring status of one or more other media elements received by the mirroring module. FIG. 6 is a conceptual diagram that depicts one example of such a user interface that may be presented in response to a received media element. As depicted in FIG. 6, user interface 600 includes an identification 610 of the received media element.
For example, as shown in FIG. 6, identification 610 indicates that the media element comprises a notification of a message from user John Smith via an application "Application 1."

[0094] As also depicted in the example of FIG. 6, user interface 600 enables a user to select one or more user- and/or application-specific options regarding the mirroring of the received media element. For example, button 606 may enable a user to indicate that the particular message identified by identification 610 should be mirrored; however, mirroring module 360 should continue to request authorization for mirroring further media elements received from Application 1. As another example, button 607 may enable a user to indicate that all media output received via Application 1 should be mirrored via one or more destination devices. As another example, button 608 may enable a user to indicate that no media output received from Application 1 should be mirrored. As another example, button 609 may enable a user to indicate that all media output received via Application 1 from user John Smith should be mirrored. As another example, button 611 may enable a user to indicate that no output of Application 1 from user John Smith should be mirrored. As another example, button 612 may enable a user to indicate that no media output from any application associated with user John Smith should be mirrored.

[0095] As also depicted in FIG. 6, where source device 310 is configured to mirror media output via more than one destination device, user interface 600 may enable a user to apply one or more settings indicated by the user via user interface 600 to one or more particular destination devices. For example, as shown in FIG. 6, buttons 604 may enable a user to select one or more of a destination device coupled to a display and/or an associated audio output device located in the user's bedroom, a display and/or an associated audio output device located in the user's living room, and/or a display and/or an associated audio output device located in the user's car.
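One plausible way to persist the application-, user-, and destination-specific choices offered by the interfaces of FIGS. 5 and 6 is a small rule table. The sketch below is an assumption about such storage — field names and matching order are invented for illustration:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef enum { RULE_ALLOW, RULE_DENY, RULE_ASK } mirror_rule_t;

typedef struct {
    const char   *app;     /* e.g., "Application 1" */
    const char   *user;    /* e.g., "John Smith"; NULL matches any user */
    int           dest_id; /* bedroom, living room, car, ...; -1 = any */
    mirror_rule_t rule;
} mirror_pref_t;

/* First matching entry wins; RULE_ASK falls back to the dynamic
 * confirmation interface. Caller passes non-NULL app and user names. */
static mirror_rule_t match_rule(const mirror_pref_t *table, size_t n,
                                const char *app, const char *user, int dest)
{
    for (size_t i = 0; i < n; i++) {
        if (strcmp(table[i].app, app) != 0) continue;
        if (table[i].user != NULL && strcmp(table[i].user, user) != 0) continue;
        if (table[i].dest_id != -1 && table[i].dest_id != dest) continue;
        return table[i].rule;
    }
    return RULE_ASK;
}
```

A first-match table like this lets a broad "never mirror Application 1" entry coexist with a narrower "always mirror messages from John Smith in the car" entry.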
[0096] FIG. 7 is a flow diagram that illustrates one example of a method of operating a source device to mirror media output of the source device via at least one destination device consistent with one or more aspects of this disclosure. The method depicted in FIG. 7 is described as being performed by source device 310 depicted in FIG. 3; however, the method of FIG. 7 may be performed by any device.

[0097] As depicted in FIG. 7, source device 310 may control a destination device (e.g., destination device 114 depicted in FIG. 1) to output media via an output interface associated with the destination device (e.g., external display interface 120 or external audio interface 130 depicted in FIG. 1) (701). For example, source device 310 may control the destination device to mirror media that is native to the source device. As also depicted in FIG. 7, source device 310 may receive a first media element (e.g., first media element 236 depicted in FIG. 2, first media element 336 depicted in FIG. 3) of the media (702). For example, source device 310 may receive the first media element of the media from at least one software application executing on source device 310.

[0098] As also depicted in FIG. 7, source device 310 may output, to the destination device, the media comprising the first media element (703). As also depicted in FIG. 7, source device 310 may receive a second media element (e.g., second media element 238 depicted in FIG. 2, second media element 338 depicted in FIG. 3) of the media (704). As also shown in FIG. 7, source device 310 may determine whether to output the media including the second media element based on at least one parameter associated with the second media element (705). For example, source device 310 may compare at least one characteristic of the second media element to the at least one parameter, and determine whether to output the media including the second media element to the destination device. In some examples, the at least one parameter may comprise a previously determined parameter stored in memory, which may be based on previously received user interface input. In other examples, the at least one parameter may be dynamically determined by source device 310. For example, in response to receiving the second media element, source device 310 may provide a user interface to the user that enables the user to specify whether or not to mirror the media comprising the second media element to the destination device.

[0099] In some examples, source device 310 may not output the media comprising the second media element unless the source device resolves a mirroring status of the second media element. For example, source device 310 may not output the media comprising the second media element unless source device 310 receives confirmation from the user, via the user interface, to mirror the media comprising the second media element. In some examples, while source device 310 resolves a mirroring status of the second media element, source device 310 may or may not control the destination device to output other media, such as media including the first media element.

[0100] FIG. 8 is a block diagram that depicts one example of a source device 810 configured to mirror media output of the source device 810 via at least one destination device 814 consistent with one or more aspects of this disclosure. As depicted in FIG. 8, source device 810 includes a display processing module 847. Display processing module 847 may receive media data comprising video data, and process the received media data to be output by one or more displays, such as a native display of source device 810 (not depicted in the example of FIG. 8) and/or an external display interface 821 of, or communicatively coupled to, destination device 814. As also shown in FIG. 8, destination device 814 may further include a display processing module 882. Display processing module 882 may be operable to receive one or more representations of media data, such as video data from source device 810, and operate external display interface 821 to present images consistent with the received media data.

[0101] According to the example of FIG. 8, display processing module 847 may include a video encoder 820. Generally speaking, video encoder 820 may be configured to encode media data to compress, or reduce a size of, the media data before the media data is transferred to another device for playback. For example, video encoder 820 may compress media data comprising video data before source device 810 sends the media data to destination device 814. As one example, video encoder 820 may compress received media to generate an encoded bit stream. Such an encoded bit stream may, in some examples, be sent to another device, such as destination device 814.

[0102] As shown in FIG. 8, display processing module 882 of destination device 814 may also include a corresponding video decoder 830.
Generally speaking, video decoder 830 may receive compressed data, such as an encoded bit stream from source device 810 via a communications link 816 and/or storage media 832 depicted in FIG. 8, and decompress, or decode, the compressed data to reconstruct the video data in a format to be output via one or more displays.

[0103] As also shown in FIG. 8, and as described above, according to some aspects of this disclosure, source device 810 may control destination device 814 to mirror media output of source device 810 via destination device 814. For example, source device 810 may receive a first media element 836, and cause destination device 814 to output video media comprising first media element 836 via external display 821.

[0104] Source device 810 may also receive a second media element 838. As described above with respect to source device 310 depicted in FIG. 3, source device 810 may prevent second media element 838 from being mirrored via destination device 814 until a mirroring status of second media element 838 is resolved. For example, source device 810 may prevent the second media element 838 from being output unless source device 810 confirms that second media element 838 should be mirrored, based on at least one predetermined or dynamically determined parameter, as described above.

[0105] As also described above, in some examples, source device 810 may prevent second media element 838 from being mirrored by causing destination device 814 to cease outputting all media. For example, source device 810 may stop sending further media to destination device 814 until source device 810 resolves a mirroring status of second media element 838.

[0106] In other examples, source device 810 may prevent second media element 838 from being mirrored by causing destination device 814 to freeze a frame of media currently output by destination device 814 that does not include the second media element 838. For example, source device 810 may communicate one or more in-band or out-of-band commands to destination device 814 to cause destination device 814 to continue reading a previously displayed frame of image data from memory until source device 810 resolves a display status of second media element 838. As another example, source device 810 may repeatedly send data representing a previously displayed frame of video media to destination device 814.

[0107] According to still other examples, source device 810 may continue to actively mirror media via destination device 814, while preventing second media element 838 from being mirrored via destination device 814. For example, as described above with respect to FIG. 2, source device 810 may identify at least one region (e.g., region 233 depicted in FIG. 2) in media to be presented via destination device 814 that corresponds to second media element 838 in native media output of source device 810.

[0108] As also described above, source device 810 may operate to freeze and/or remove media output (e.g., video output) of such an identified region associated with the second media element 838, in order to prevent second media element 838 from being included in media mirrored by destination device 814, while still outputting the other mirrored media.

[0109] According to some such examples, source device 810 may communicate one or more in-band or out-of-band command signals, as described above, to destination device 814 that instruct destination device 814 to freeze and/or remove media output of such an identified region that corresponds to second media element 838.
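The disclosure does not define a wire format for such command signals; purely as an illustration, an in-band or out-of-band freeze/remove command might carry fields along these lines (the layout and all field names are invented):

```c
#include <stdint.h>

enum region_action {
    REGION_FREEZE = 0,  /* keep showing previously displayed data */
    REGION_REMOVE = 1   /* stop processing/outputting data for the region */
};

/* Hypothetical command message identifying the region to act on. */
struct region_command {
    uint8_t  action;      /* REGION_FREEZE or REGION_REMOVE */
    uint16_t x, y;        /* top-left corner of the region, in pixels */
    uint16_t width, height;
    uint32_t element_id;  /* ties the region to a pending media element */
};
```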
According to these examples, destination device 814 may be configured to interpret such a command signal and operate in response to such a received command signal. For example, in response to such a command signal, destination device 814 may remove media output that corresponds to such an identified region by not processing and/or outputting received media data that corresponds to the identified region. According to another example, in response to such a command signal, destination device 814 may be configured to freeze media output that corresponds to the identified region by repeatedly reading media data of a previously displayed frame from memory, until a mirroring status of second media element 838 is resolved by source device 810. According to this example, destination device 814 may continue to receive and/or process data normally to display mirrored media outside of the identified region.

[0110] According to other examples, source device 810 may not communicate one or more such command signals that cause operation of destination device 814 to change in order to prevent second media element 838 from being mirrored along with other media. According to these examples, source device 810 may itself process image data to be sent to destination device 814, such that the second media element 838 is not included in other media mirrored via destination device 814. For example, in order to prevent second media element 838 from being output with other mirrored media, mirroring module 860 of source device 810 may modify operation of display processing module 847.

[0111] As one example, mirroring module 860 may modify operation of video encoder 820 of display processing module 847 to prevent second media element 838 from being output with other mirrored media via destination device 814. For example, mirroring module 860 may cause video encoder 820 to encode data representing media of at least one identified region that corresponds to second media element 838 differently than data that represents media outside of the identified region.

[0112] As one example, mirroring module 860 may cause video encoder 820 to encode, as part of an encoded bit stream that represents media to be mirrored via destination device 814, one or more syntax elements that instruct the decoder how to decode the bit stream. For example, according to the techniques described herein, video encoder 820 may signal such syntax elements to indicate that the decoder should use data of at least one previous frame of image data, without any modifications, to display images in the at least one identified region. In this manner, mirroring module 860 may cause native media of source device 810 to continue to be actively mirrored via destination device 814, while preventing second media element 838 from being output in the mirrored media, by changing how video encoder 820 operates to encode the at least one identified region associated with the second media element 838.

[0113] According to some examples, video encoder 820 and video decoder 830 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard presently under development, and may conform to the HEVC Test Model (HM). Alternatively, video encoder 820 and video decoder 830 may operate according to other proprietary or industry standards, such as the ITU-T H.264 standard, alternatively referred to as MPEG-4, Part 10, Advanced Video Coding (AVC), or extensions of such standards.
The techniques of this disclosure, however, are not limited to any particular coding standard. Other examples of video compression standards include MPEG-2 and ITU-T H.263.

[0114] During the encoding process, video encoder 820 may execute a number of coding techniques or operations. In general, video encoder 820 operates on video blocks within individual video frames (or other independently coded units such as slices) in order to encode the video blocks. Frames, slices, portions of frames, groups of pictures (i.e., frames), or other data structures may be defined as independent data units that include a plurality of video blocks, and syntax elements may be included that are associated with such different independent data units. The video blocks within independent data units may have fixed or varying sizes, and may differ in size according to a specified coding standard. In some cases, each video frame may include a series of independently decodable slices, and each slice may include one or more macroblocks or LCUs.

[0115] Macroblocks are one type of video block defined by the ITU H.264 standard and other standards. Macroblocks typically refer to 16 by 16 blocks of data. The ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, or 4 by 4 for luma components, and 8 by 8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components.

[0116] The emerging HEVC standard defines new terms for video blocks. In particular, with HEVC, video blocks (or partitions thereof) may be referred to as "coded units." With the HEVC standard, largest coded units (LCUs) may be divided into smaller and smaller coded units (CUs) according to a quadtree partitioning scheme, and the different CUs that are defined in the scheme may be further partitioned into so-called prediction units (PUs) and/or transform units (TUs). The LCUs, CUs, PUs, and TUs are all video blocks within the meaning of this disclosure. Other types of video blocks may also be used, consistent with the HEVC standard or other video coding standards. Thus, the phrase "block" refers to any size of video block. Moreover, video blocks may sometimes refer to blocks of video data in the pixel domain, or blocks of data in a transform domain such as a discrete cosine transform (DCT) domain, a domain similar to DCT, a wavelet domain, or the like.

[0117] Referring again to FIG. 8, video encoder 820 may perform predictive coding in which a video block being coded is compared to another block of video data in order to identify a predictive block. This process of predictive coding across frames is often referred to as motion estimation and motion compensation. Motion estimation estimates video block motion relative to one or more predictive video blocks of one or more predictive frames (or other coded units). Motion compensation generates the desired predictive video block from the one or more predictive frames or other coded units. Motion compensation may include an interpolation process in which interpolation filtering is performed to generate predictive data at fractional pixel precision. This process of prediction coding can also be performed within a frame, where spatially neighboring pixels within the same frame as a current block are used to generate a predictive block.
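As a rough illustration of the motion estimation just described, the following C sketch performs an exhaustive block-matching search that minimizes the sum of absolute differences (SAD). Dimensions and search range are arbitrary, and real encoders use far more elaborate searches plus fractional-pixel interpolation:

```c
#include <stdlib.h>

#define W 64
#define H 64
#define B 8          /* block size in pixels */
#define RANGE 4      /* full-pel search range */

/* SAD between the current block at (bx, by) and the reference block
 * displaced by (dx, dy). */
static int sad(unsigned char cur[H][W], unsigned char ref[H][W],
               int bx, int by, int dx, int dy)
{
    int s = 0;
    for (int r = 0; r < B; r++)
        for (int c = 0; c < B; c++)
            s += abs((int)cur[by + r][bx + c] -
                     (int)ref[by + dy + r][bx + dx + c]);
    return s;
}

/* Exhaustive search: returns the motion vector with the lowest SAD. */
static void motion_search(unsigned char cur[H][W], unsigned char ref[H][W],
                          int bx, int by, int *best_dx, int *best_dy)
{
    int best = -1;
    *best_dx = 0;
    *best_dy = 0;
    for (int dy = -RANGE; dy <= RANGE; dy++)
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            if (bx + dx < 0 || by + dy < 0 ||
                bx + dx + B > W || by + dy + B > H)
                continue;  /* keep the candidate inside the reference */
            int s = sad(cur, ref, bx, by, dx, dy);
            if (best < 0 || s < best) {
                best = s;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
}
```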
[0118] After generating the predictive block, the differences between the current video block being coded and the predictive block are coded as a residual block, and prediction syntax (such as a motion vector) is used to identify the predictive block. For example, video encoder 820 may code each block (i.e., each CU according to HEVC) using one of several prediction modes. For example, according to HEVC, the prediction modes may include a split mode, a skip mode, a direct mode, as well as additional modes for Inter_2Nx2N, Inter_Nx2N, Inter_2NxN, Inter_NxN, Inter_2NxnU, Inter_2NxnD, Inter_nLx2N, Inter_nRx2N, Intra_2Nx2N, and Intra_NxN, where such modes refer to the sizes of the PUs and whether the mode is an intra- or inter-predictive mode. According to a skip mode, a current CU (e.g., a current PU) is reconstructed based on a co-located block in a reference frame without residual data, resulting in the current CU being identical to the co-located block in the reference frame. In direct mode, a current CU is reconstructed based on a co-located block in a reference frame with residual data, resulting in the current PU corresponding to the reference block plus residual data. In some examples, such prediction modes may be generally exclusive to one another, meaning any given CU may be coded using only one of the modes.

[0119] This disclosure describes techniques for identifying, by a source device such as source device 810 depicted in FIG. 8, at least one region to freeze within video output to be displayed by another device, such as destination device 814, and for modifying operation of encoder 820 to cause the identified at least one region of the video output to be visually frozen in the media output. In some examples, the at least one region of the video output to freeze may be determined by mirroring module 860 depicted in FIG. 8. According to these examples, mirroring module 860 may identify the at least one region based on a corresponding region in native video output of source device 810 associated with second media element 838, in order to prevent second media element 838 from being output along with other mirrored media (e.g., other mirrored video), such as media including first media element 836. As described above, mirroring module 860 may modify operation of video encoder 820 to prevent second media element 838 from being displayed, until a mirroring status of the second media element 838 is resolved.

[0120] To encode video data, video encoder 820 may divide a frame or slice of video data into a plurality of blocks of video data (e.g., a plurality of CUs according to HEVC). As part of the encoding process, video encoder 820 may assign one or more prediction modes that signal, to a decoder that decodes the video data, how to predict content of each particular block relative to other blocks in the same or different frame or slice of video data.

[0121] According to some aspects of this disclosure, in response to identifying at least one region in mirrored video that should be frozen, source device 810 may cause video encoder 820 to assign a skip prediction mode to each block within the identified region.
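A minimal sketch of this region-based mode assignment, assuming invented block and region types, might look as follows: any block overlapping the frozen region is forced to skip mode, while blocks outside it keep whatever mode the normal mode decision would select.

```c
/* Illustrative prediction-mode set; real encoders have many more. */
typedef enum { MODE_SKIP, MODE_DIRECT, MODE_INTRA, MODE_INTER, MODE_SPLIT }
    pred_mode_t;

typedef struct { int x, y, w, h; } rect_t;

static int overlaps(rect_t a, rect_t b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

/* normal_mode stands in for the encoder's usual mode decision. Blocks
 * that touch the frozen region are forced to skip mode so the decoder
 * reuses co-located data of a previous frame unchanged. */
static pred_mode_t choose_mode(rect_t block, rect_t frozen_region,
                               pred_mode_t normal_mode)
{
    if (overlaps(block, frozen_region))
        return MODE_SKIP;
    return normal_mode;
}
```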
FIG. 9 is a conceptual diagram that illustrates one example of a frame 901 of video data. In some examples, frame 901 may correspond to an entire frame of video data to be displayed. According to other examples, frame 901 may comprise a subset of a frame of video data, such as a slice or larger block of video data (e.g., a macroblock or higher-level CU, such as an LCU as described above). As shown in the example of FIG. 9, video encoder 820 may encode frame 901 as a plurality of blocks 903 of video data.

[0122] As described above, under normal operation, as video encoder 820 encodes the blocks 903, video encoder 820 may assign a prediction mode to each block 903 of the frame 901. The prediction mode for each of the plurality of blocks may be signaled, as one or more syntax elements associated with each respective block, to a decoder as part of an encoded bit stream of video data that represents frame 901. For example, as shown in FIG. 9, as a result of encoding, each block 903 of frame 901 has been assigned one of a split mode, a skip mode, a direct mode, or an intra mode or inter mode. Frame 901 depicted in FIG. 9 is provided for exemplary purposes only. In other examples, video encoder 820 may assign one or more other prediction modes not shown in the example of FIG. 9 to one or more blocks 903 of frame 901. For example, according to the H.264 video coding standard, video encoder 820 may assign one of nine different prediction modes to each block 903 of frame 901. As another example, according to the proposed HEVC standard for video coding, video encoder 820 may assign one of 33 prediction modes to each block 903 of frame 901.

[0123] As also depicted in FIG. 9, source device 810 may identify a region to freeze 905 in frame 901. For example, as described above, mirroring module 860 of source device 810 may identify region 905 as one that corresponds to second media element 838 depicted in FIG. 8, in order to prevent second media element 838 from being mirrored via destination device 814.

[0124] As depicted in FIG. 9, under normal operation, video encoder 820 may assign one of a plurality of prediction modes to all the blocks 903 of video data of frame 901. For example, under normal operation, video encoder 820 may assign a skip prediction mode to a block of video data in a portion of a frame that does not change, so that a decoder that decodes the block does not apply any residual block to the encoded block to reconstruct the frame. Instead, the decoder may use the prediction block alone to reconstruct the block. As another example, under normal operation, video encoder 820 may assign, to one or more blocks where images change, one of a plurality of other prediction modes that instruct the decoder to apply a residual block to a prediction block to reconstruct the encoded frame.

[0125] According to some aspects of this disclosure, instead of video encoder 820 assigning different prediction modes such as split, skip, direct, or one of a plurality of intra and/or inter prediction modes to blocks of the identified region, video encoder 820 may instead assign the same prediction mode to all the blocks of the identified region 905, regardless of differences between the frame and a previous frame, as shown in the example of FIG. 10. For example, as shown in FIG. 10, video encoder 820 may assign each block of video data within identified region 905 of frame 911 a skip prediction mode.
In this manner, video encoder 820 may encode the blocks within the region differently than in a standard prediction mode, where at least some of the blocks within the region would have been assigned one or more other prediction modes that cause the decoder to apply a residual block of data to a predictive block in order to reconstruct the block.

[0126] In some examples, in response to identifying a region 905 to freeze in frame 901, video encoder 820 may generate an encoded bit stream that represents frame 911 shown in FIG. 10, which includes one or more syntax elements that signal the skip prediction modes assigned to each block of identified region 905. Video encoder 820 may send the encoded bit stream to a video decoder, such as video decoder 830 of destination device 814 depicted in FIG. 8. The video decoder 830 may receive the encoded bit stream from encoder 820, via a direct communications link 816, or via a storage medium, such as storage device 832 depicted in FIG. 8. Video decoder 830 may decode the received encoded bit stream to reconstruct frame 911 in order to output frame 911 via external display 821.

[0127] To decode the received encoded bit stream, video decoder 830 may operate normally, and apply the signaled skip prediction modes that video encoder 820 assigned to each block within identified region 905, which may cause video decoder 830 to not apply any change, such as a residual block, to reconstruct frame 911 of video data. In this manner, encoder 820 of source device 810 may use unchanged data of at least one previously presented frame to output (display) media associated with the at least one region 905, which may cause the at least one region to appear visually frozen to a viewer. In some examples, source device 810 (e.g., mirroring module 860) may cause encoder 820 to freeze such a portion of video output that corresponds to identified region 905 as described above, in order to prevent a second media element 838 as depicted in FIG. 8 associated with identified region 905 from being mirrored via destination device 814.
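On the decoder side, honoring a signaled skip mode amounts to copying the co-located block of the previously decoded frame with no residual applied, which is what leaves the region visually frozen. A simplified C sketch follows, with plain arrays standing in for real reference-picture management:

```c
#include <string.h>

#define W 64
#define H 64

/* Decode one block: for skip mode, the co-located pixels of the
 * previously decoded frame are copied unchanged (no residual), which
 * leaves the identified region visually frozen. */
static void decode_block(unsigned char cur[H][W], unsigned char prev[H][W],
                         int bx, int by, int bs, int is_skip)
{
    if (is_skip) {
        for (int r = 0; r < bs; r++)
            memcpy(&cur[by + r][bx], &prev[by + r][bx], (size_t)bs);
        return;
    }
    /* otherwise: motion compensation plus residual (omitted here) */
}
```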
[0128] FIG. 11 is a flow diagram that illustrates one example of a method of encoding video data consistent with one or more aspects of this disclosure. The example of FIG. 11 is described as performed by source device 810 depicted in the example of FIG. 8; however, any device may be used to perform the method of FIG. 11.

[0129] As shown in FIG. 11, source device 810 may identify at least one region (905) of a frame of video data to be frozen when output via a destination device (1101). As also shown in FIG. 11, source device 810 (e.g., video encoder 820) may assign each of a plurality of blocks of the frame that are outside of the identified region one of a plurality of prediction modes (1102). For example, source device 810 may assign each of the plurality of blocks one of a split mode, a skip mode, a direct mode, an intra mode or inter mode, or another prediction mode. In some examples, source device 810 may assign at least some of the plurality of blocks outside of the identified region a prediction mode based on one or more differences between each of the blocks and at least one predictive block that may be used to reconstruct the block. For example, if there are differences between a block outside of the identified region and a predictive block that may be used to reconstruct the block, source device 810 may assign the block one of a direct, intra, or inter prediction mode, which may be used by a decoder to determine a residual block that may be applied to a predictive block to reconstruct the block. In other examples, if there are no differences between a block outside of the identified region and a predictive block that may be used to reconstruct the block, source device 810 may assign the block a skip prediction mode.

[0130] As also shown in FIG. 11, source device 810 may assign all of a plurality of blocks of the frame that are within the identified region the same prediction mode. For example, source device 810 may assign each of the plurality of blocks within the identified region a skip prediction mode (1103). In some examples, source device 810 may assign each of the plurality of blocks within the identified region the skip prediction mode regardless of whether there are any differences between the block and a predictive block that may be used by a decoder to reconstruct the block.

[0131] As also shown in FIG. 11, source device 810 may output the frame of video data to a destination device (e.g., destination device 814 depicted in FIG. 8), to cause the identified at least one region to be visually frozen in the frame when the frame is output via the destination device (1104). For example, the destination device may include a video decoder 830, and the frame of video data may be received by the video decoder as an encoded bit stream. The video decoder may decode the received encoded bit stream, using one or more prediction modes associated with blocks of video data that are signaled in the encoded bit stream. According to this example, because blocks of video data within the identified region were all assigned skip modes as described above, when the video decoder operates normally to decode the blocks, previously output (e.g., displayed) image data may be used by the video decoder for the identified at least one region. In this manner, the identified at least one region in a displayed image may appear visually frozen to a viewer.

[0132] Referring back to the example of FIG. 3, according to some aspects of this disclosure, a source device 310 may operate in a mirroring mode to mirror native media of the source device, including audio and/or video media, via a destination device 314. As described above, source device 310 may receive a media element (e.g., second media element 338 depicted in FIG. 3) of media to be mirrored via destination device 314, and determine whether or not to mirror the received media element based on at least one parameter associated with the media element. The received media element may or may not be output by a native audio or display interface of the source device 310. As also described above, source device 310 may operate to prevent the media element from being output via the destination device 314 (e.g., by freezing or removing, in the mirrored media, one or more regions associated with the media element in media native to source device 310), until a mirroring status of the media element is resolved by the source device 310. To do so, source device 310 may continue mirroring other native media output of the source device, modified such that the received media element is not output via the destination device 314. In some examples, once the source device 310 resolves the mirroring status of the media element (e.g., determines whether or not to mirror the media element with other media), the source device 310 may cease modifying the mirrored native media output in the manner that prevented the received media element from being output via the destination device 314. Instead, source device 310 may return to mirroring the native media output as usual.
[0133] According to some examples, a video sequence may include a series of video frames or pictures. The frames or pictures may be arranged into a plurality of groups of pictures (GOPs), each of which generally comprises a series of one or more of the video pictures. A GOP may include syntax data in a header of the GOP, a header of one or more of the pictures, or elsewhere, that describes the pictures included in the GOP. In some examples, syntax data in a header of a GOP may identify a picture of the GOP as an access point that may be used as a starting point for playback of the video sequence. In some examples, such an access point may comprise an independently decodable frame of the video sequence. The independently decodable frame may not rely on data from any other frames to be decoded by a decoder.

[0134] As described above, source device 310 may transition between operating in a normal mode, to mirror native media output of source device 310 via destination device 314, and a modified mode, to prevent one or more media elements of the native media from being mirrored via the destination device. According to some aspects of this disclosure, when source device 310 transitions between modes as described above, source device 310 may control destination device 314 to begin outputting mirrored media (e.g., mirrored video media) at a new GOP (e.g., at an identified access point of the new GOP). In some examples, when source device 310 transitions to operating in the modified mode, source device 310 may use a longer GOP to begin outputting media. In some examples, when source device 310 transitions back to a normal operating mode, source device 310 may use a shorter GOP to begin outputting media.

[0135] FIG. 12 is a flow diagram that depicts one example of a technique for mirroring native media via a destination device 314 consistent with one or more aspects of this disclosure. As shown in FIG. 12, source device 310 may be operated to cause native media of source device 310, which includes video media, to be mirrored via a destination device 314 (1201). As also shown in FIG. 12, source device 310 may identify at least one portion of mirrored media to be frozen (1202). For example, source device 310 may identify the at least one portion of mirrored media to be frozen to prevent at least one received native media element of the source device 310 from being mirrored via the destination device, as described herein. As also shown in FIG. 12, in response to identifying the at least one region of mirrored media to be frozen (e.g., in response to receipt of a new media element of the media), source device 310 may cause the destination device 314 to mirror the media with the identified region frozen, using a first GOP to initiate mirroring of the media (1203). For example, source device 310 may cause the destination device to mirror the media with the identified region frozen beginning at an identified access point picture of the first GOP. As also shown in FIG. 12, source device 310 may determine that the at least one frozen region is to be unfrozen in the mirrored media (1204). As also depicted in FIG. 12, source device 310 may cause the destination device to mirror media with the identified region unfrozen, using a second GOP different than the first GOP (1205). For example, source device 310 may cause the destination device to mirror the media beginning at an identified access point picture of the second GOP. In some examples, source device 310 may select the first GOP to have a longer length than the second GOP.
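The GOP selection of FIG. 12 can be summarized in a few lines of C; the concrete lengths below are illustrative placeholders, not values from the disclosure:

```c
/* Entering the modified (frozen-region) mode restarts mirroring at an
 * access point of a longer first GOP; returning to normal mirroring
 * uses a shorter second GOP. Lengths in frames are illustrative. */
enum mirror_mode { MODE_NORMAL, MODE_MODIFIED };

static int gop_length_for(enum mirror_mode mode)
{
    return (mode == MODE_MODIFIED) ? 120 : 30;
}
```

A longer GOP during the modified mode spends fewer bits on independently decodable frames while the frozen region is largely static; a shorter GOP on the way back lets the destination resynchronize quickly.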
[0136] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

[0137] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0138] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.

[0139] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set).
Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.[0140] Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims. |
A shared real-time counter is configured to provide an accurate counter output based on a fast clock period when driven by a fast clock signal or by a slow clock signal. Combinational logic circuitry provides glitch free switching between a fast clock signal input to the counter and a slow clock input to the counter. The counter is always on and increases its count by an appropriate rational number of counts representing fast clock cycles for every cycle of the fast clock while in a fast clock mode, and by an appropriate rational number of fast clock periods for every cycle of the slow clock signal while in a slow clock mode. |
CLAIMS

What is claimed is:

1. A method for generating a counter output of a dual mode counter, comprising: receiving a fast clock signal on a first signal path; receiving a slow clock signal on a second signal path; receiving a clock select signal on a third signal path, the clock select signal indicating selection of one of a fast clock mode and a slow clock mode; synchronizing transitions of the clock select signal with the slow clock signal; increasing the counter output by a first counter increment for each period of the fast clock signal, in response to the clock select signal indicating the fast clock mode; and increasing the counter output by a second counter increment for each period of the slow clock signal, the second counter increment comprising a ratio of the period of the slow clock signal divided by the period of the fast clock signal, in response to the clock select signal indicating the slow clock mode.

2. The method of claim 1, further comprising: delaying the increasing of the counter until after a next period of the slow clock signal is received on the second signal path in response to the clock select signal transitioning from indicating the fast clock mode to indicating the slow clock mode; increasing the counter output by the first counter increment for a first period of the slow clock signal after the transitioning; and increasing the counter by the second counter increment for a second period of the slow clock signal after the transitioning.

3. The method of claim 2, further comprising: gating off the fast clock signal from the counter before a next fast clock signal, in response to the clock select signal transitioning from indicating the fast clock mode to indicating the slow clock mode.

4. The method of claim 1, further comprising: increasing the counter output by the second counter increment for a first period of the fast clock signal after the transitioning in response to the clock select signal transitioning from indicating the slow clock mode to indicating the fast clock mode; and increasing the counter by the first counter increment for a second period of the fast clock signal after the transitioning.

5. The method of claim 1, further comprising: integrating the dual mode counter into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit.
6. A counter apparatus comprising: register circuitry including a count input path, a count output path and a counter clock input path; adder circuitry including a first adder input path, a second adder input path and an adder output path, the adder output path coupled to the count input path of the register circuitry, and the second adder input path coupled to the count output path of the register circuitry; multiplexer circuitry including a first selectable input path, a second selectable input path, a multiplexer output path and a multiplexer selector input path, the multiplexer output path coupled to the first adder input path, the first selectable input path coupled to a first counter increment signal, and the second selectable input path coupled to a second counter increment signal; and glitch avoidance circuitry configured to couple the counter clock input path to one of a fast clock or a slow clock, in response to a clock select signal, and to provide a counter increment select signal to the multiplexer selector input path, in response to the clock select signal, the counter increment select signal synchronized with the slow clock.

7. The apparatus of claim 6, integrated into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit.

8. A counter apparatus comprising: means for receiving a fast clock signal on a first signal path; means for receiving a slow clock signal on a second signal path; means for receiving a clock select signal on a third signal path, the clock select signal indicating selection of one of a fast clock mode and a slow clock mode; means for synchronizing transitions of the clock select signal with the slow clock signal; means for increasing an output of a counter by a first counter increment for each period of the fast clock signal in response to the clock select signal indicating the fast clock mode; and means for increasing the counter output by a second counter increment for each period of the slow clock signal in response to the clock select signal indicating the slow clock mode, the second counter increment comprising a ratio of the period of the slow clock signal divided by the period of the fast clock signal.

9. The apparatus of claim 8, comprising: means for delaying the increasing of the counter until after a next period of the slow clock signal is received on the second signal path in response to the clock select signal transitioning from indicating the fast clock mode to indicating the slow clock mode; means for increasing the counter output by the first counter increment for a first period of the slow clock signal after the transitioning; and means for increasing the counter by the second counter increment for a second period of the slow clock signal after the transitioning.

10. The apparatus of claim 9, further comprising: means for gating off the fast clock signal from the counter before a next fast clock signal in response to the clock select signal transitioning from indicating the fast clock mode to indicating the slow clock mode.
The apparatus of claim 8, further comprising: means for increasing the counter output by the second counter increment for a first period of the fast clock signal after the transitioning in response to the clock select signal transitioning from indicating the slow clock mode to indicating the fast clock mode; and means for increasing the counter output by the first counter increment for a second period of the fast clock signal after the transitioning. 12. The apparatus of claim 8, integrated into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. 13. A method for generating a counter output of a dual mode counter, comprising the steps of: receiving a fast clock signal on a first signal path; receiving a slow clock signal on a second signal path; receiving a clock select signal on a third signal path, the clock select signal indicating selection of one of a fast clock mode and a slow clock mode; synchronizing transitions of the clock select signal with the slow clock signal; increasing the counter output by a first counter increment for each period of the fast clock signal in response to the clock select signal indicating the fast clock mode; and increasing the counter output by a second counter increment for each period of the slow clock signal in response to the clock select signal indicating the slow clock mode, the second counter increment comprising a ratio of the period of the slow clock signal divided by the period of the fast clock signal. 14. The method of claim 13, further comprising the steps of: delaying the increasing of the counter output until after a next period of the slow clock signal is received on the second signal path in response to the clock select signal transitioning from indicating the fast clock mode to indicating the slow clock mode; increasing the counter output by the first counter increment for a first period of the slow clock signal after the transitioning; and increasing the counter output by the second counter increment for a second period of the slow clock signal after the transitioning. 15. The method of claim 14, further comprising the step of: gating off the fast clock signal from the counter before a next fast clock signal, in response to the clock select signal transitioning from indicating the fast clock mode to indicating the slow clock mode. 16. The method of claim 13, further comprising the steps of: increasing the counter output by the second counter increment for a first period of the fast clock signal after the transitioning in response to the clock select signal transitioning from indicating the slow clock mode to indicating the fast clock mode; and increasing the counter output by the first counter increment for a second period of the fast clock signal after the transitioning. 17. The method of claim 13, further comprising the step of: integrating the dual mode counter into at least one of a mobile phone, a set top box, a music player, a video player, an entertainment unit, a navigation device, a computer, a hand-held personal communication systems (PCS) unit, a portable data unit, and a fixed location data unit. |
MULTI-CLOCK REAL-TIME COUNTER Field of the Disclosure [0001] The present disclosure is in the field of digital counter circuitry and more particularly relates to multi-clock counters including glitch free switching between clock signals. Background [0002] Digital circuit designs often include counter circuitry to measure time between events by counting cycles of various clock signals or other signals in the circuit design. In complex digital systems, a real-time counter may be shared between different processing units to keep track of time. Such shared real-time counters often rely on highly accurate or high resolution clock signals that may be generated by a high accuracy crystal oscillator, for example. [0003] High resolution clock signals and high accuracy crystal oscillators operate at very high frequencies and consume much more energy than lower resolution clock signals and lower accuracy oscillators operating at lower frequencies. To reduce energy consumption, digital circuits may be configured to shut off a high frequency clock signal during periods when a lower frequency clock signal is suitable for processing operations of the circuits. [0004] Circuitry may be configured to switch certain clock signal inputs between a fast clock signal source and a slow clock signal source at various times to save energy. However, such switching between a fast clock signal and a slow clock signal can introduce inaccuracies to the output of a real-time counter that provides a count based on the switched clock signal. Therefore, systems that employ a slow clock signal during a low power mode commonly include two separate counters, a fast counter driven by the fast clock and a slow counter driven by the slow clock. When the low power mode is complete, simple arithmetic has been used based on the slow counter to advance the fast counter by the number of fast clock periods that would have passed during the low power mode. This dual-counter approach disadvantageously involves the use of multiple counters and multiplication circuitry or software. Another disadvantage of the dual-counter approach is that a real-time count based on cycles of the fast clock may not be available during the low power mode. Summary [0005] For a more complete understanding of the present disclosure, reference is now made to the following detailed description and the accompanying drawings. In an exemplary aspect, a shared real-time counter is configured to provide an accurate counter output based on a fast clock period whether driven by a fast clock signal or by a slow clock signal. Combinational logic circuitry provides glitch free switching between a fast clock signal input to the counter and a slow clock signal input to the counter. The counter output increases by a first number of fast clock counts, e.g., one count, for every cycle of the fast clock while in a fast clock mode, and by an appropriate second number of fast clock counts for every cycle of the slow clock signal while in a slow clock mode, e.g., a low power mode. [0006] Aspects of the present disclosure include a method for generating a counter output of a dual mode counter. The method includes receiving a fast clock signal on a first signal path, receiving a slow clock signal on a second signal path, and receiving a clock select signal on a third signal path. The clock select signal indicates selection of either a fast clock mode or a slow clock mode. Transitions of the clock select signal are synchronized with the slow clock signal.
The counter output is increased by a first counter increment for each period of the fast clock signal in response to the clock select signal indicating the fast clock mode. Otherwise, the counter output is increased by a second counter increment for each period of the slow clock signal in response to the clock select signal indicating the slow clock mode. The second counter increment represents a ratio of the period of the slow clock signal divided by the period of the fast clock signal. [0007] Aspects of the present disclosure include a counter apparatus including register circuitry that further includes a count input path, a count output path and a counter clock input path. The apparatus also includes adder circuitry that has a first adder input path, a second adder input path and an adder output path. The adder output path is coupled to the count input path of the register circuitry, and the second adder input path is coupled to the count output path of the register circuitry. According to aspects of the disclosure, the apparatus further includes multiplexer circuitry having a first selectable input path, a second selectable input path, a multiplexer output path and a multiplexer selector input path. The multiplexer output path is coupled to the first adder input path. The first selectable input path is coupled to a first counter increment signal, and the second selectable input path is coupled to a second counter increment signal. Glitch avoidance circuitry is configured to couple the counter clock input path to either a fast clock or a slow clock in response to a clock select signal. The glitch avoidance circuitry provides a counter increment select signal to the multiplexer selector input path in response to the clock select signal. The counter increment select signal is synchronized with the slow clock. [0008] Further aspects of the disclosure include a counter apparatus including means for receiving a fast clock signal on a first signal path, means for receiving a slow clock signal on a second signal path, and means for receiving a clock select signal on a third signal path. The clock select signal indicates selection of either a fast clock mode or a slow clock mode. The apparatus includes means for synchronizing transitions of the clock select signal with the slow clock signal. According to aspects of the disclosure, the counter apparatus includes means for increasing an output of a counter by a first counter increment for each period of the fast clock signal in response to the clock select signal indicating the fast clock mode and means for increasing the counter output by a second counter increment for each period of the slow clock signal in response to the clock select signal indicating the slow clock mode. The second counter increment represents a ratio of the period of the slow clock signal divided by the period of the fast clock signal.
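As a rough behavioral illustration of this counting rule (an editorial sketch, not part of the disclosure; the function and variable names are invented, and the 21 MHz and 4 MHz frequencies are borrowed from the example given later in the description):

```python
from fractions import Fraction

def dual_mode_count(edges, f_fast_hz, f_slow_hz):
    """Behavioral sketch of the dual-mode counting rule.

    `edges` is a sequence of "fast"/"slow" labels for the edges that reach
    the counter clock; the glitch-free switching circuitry described below
    guarantees that only one clock source drives the counter at a time.
    """
    # Second counter increment = slow period / fast period
    #                          = fast frequency / slow frequency.
    mult = Fraction(f_fast_hz, f_slow_hz)
    count = Fraction(0)
    for edge in edges:
        count += 1 if edge == "fast" else mult
    return count

# Ten fast-mode cycles followed by four slow-mode cycles at 21 MHz / 4 MHz:
# the count advances as if the 21 MHz clock had never stopped.
print(dual_mode_count(["fast"] * 10 + ["slow"] * 4, 21_000_000, 4_000_000))
# -> 31 (10 + 4 * 5.25)
```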
[0009] This has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure. Brief Description of the Drawings [0010] The accompanying drawings are presented to aid in the description of aspects. The drawings are provided solely for illustration of the aspects and not limitation thereof. [0011] FIGURE 1 is a diagram illustrating an always-on real-time counter apparatus according to an aspect of the present disclosure. [0012] FIGURE 2 is a signal timing diagram showing exemplary signal states during a glitch free switching of clock signals in the real-time counter according to aspects of the present disclosure. [0013] FIGURE 3 is a process flow diagram showing a method for providing an always-on real-time counter according to an aspect of the present disclosure. [0014] FIGURE 4 is a block diagram showing an exemplary wireless communication system in which a dual-clock real-time counter may be advantageously employed according to an aspect of the present disclosure. Detailed Description [0015] Aspects of the present disclosure provide an always-on counter that dynamically switches between a fast clock signal that is used during normal operation and a slow clock signal that can be used during low power modes of operation. The fast clock signal and slow clock signal may be unsynchronized relative to each other. During normal operation, the counter changes by a first number of counts for each fast clock cycle. During low power modes of operation, while running on the slow clock signal, the counter changes by a second number of counts for each cycle of the slow clock signal. The ratio of the second number of counts to the first number of counts equals the ratio of the slow clock period to the fast clock period. In an example, the first number equals one, so the counter changes by one count for each cycle of the fast clock signal during normal operation. [0016] In an illustrative aspect, while the fast clock is running, the counter increments by 1 count on each rising edge of the fast clock signal (fclk_src). Just prior to entering a low power mode and shutting down the fast clock's crystal oscillator, external circuitry provides a mode change indicator. The mode change indicator may be received in the form of a state change of a clock select signal (clk_sel). The clk_sel signal is used to switch the source of the counter's clock from fclk_src to the slow clock signal (sclk_src) in a dynamic glitch free manner and to switch the value of count increments. Upon exiting the low power mode and restarting the fclk_src crystal oscillator, the clk_sel signal is again toggled by the external circuitry to indicate a mode change. In response to the state change of the clk_sel signal, the process is reversed, whereby the source of the counter's clock is switched back to the fclk_src signal and the value of the count increment is switched back to 1.
[0017] Referring to FIGURE 1, an always-on real-time counter according to at least one aspect of the present disclosure is described. The real-time counter includes an fclk_src path 102, a sclk_src path 104 and a clk_sel path 106. A first flip flop 108 includes an inverted clock input coupled to the sclk_src path 104 and a data input coupled to the clk_sel path 106. A two-input AND gate 110 includes two inverted inputs (thereby configured, by De Morgan's theorem, as a NOR gate). One of the inverted inputs of the AND gate 110 is coupled to an output of the first flip flop 108. The other inverted input of the AND gate 110 is coupled to a count increment select (cnt_sel) path 123. [0018] Output of the AND gate 110 is coupled to a data input path of a second flip flop 112. Output of the second flip flop 112 is coupled to a data input of a third flip flop 114. The second flip flop 112 and the third flip flop 114 each include an inverted clock input coupled to the fclk_src path 102. Another two-input AND gate 116 includes a non-inverted input coupled to the output of the first flip flop 108 and an inverted input coupled to the output of the third flip flop 114. Output of the AND gate 116 is coupled to the cnt_sel path 123. [0019] A two-input AND gate 118 includes one non-inverted input coupled to the output of the third flip flop 114 and another non-inverted input coupled to the fclk_src path 102. Another two-input AND gate 120 includes one non-inverted input coupled to the cnt_sel path 123 and another non-inverted input coupled to the sclk_src path 104. Outputs from the AND gate 118 and the AND gate 120 are each coupled to inputs of a two-input OR gate 122. Output of the OR gate 122 is coupled to a clock input (cnt_clk) of a register 126. [0020] A two-input multiplexer 124 includes one input coupled to a first count increment path 125 and another input coupled to a second count increment path 127. A signal select input of the multiplexer 124 is coupled to the cnt_sel path 123. A two-input adder 128 includes one input coupled to an output of the multiplexer 124 and another input coupled to an output of the register 126. Output of the adder 128 is coupled to a data input of the register 126. [0021] According to aspects of the disclosure, the indicator (clk_sel) is synchronous with the clock (sclk_src). Internally the indicator is synchronized to the falling edge of fclk_src and used to gate off the fclk_src after the falling edge. Also, according to aspects of the disclosure, there is no need to re-synchronize the falling edge of sclk_src because the frequency of fclk_src is generally much higher than the frequency of sclk_src. For example, glitch free operation is provided when the frequency of fclk_src is at least five times the frequency of sclk_src. This does not present a problem in practical implementations because the frequency of fclk_src will generally be in the range of 100 to 1000 times faster than sclk_src. Therefore, the rising edge of sclk_src should not be counted before the next rising edge of fclk_src while changing modes. [0022] In the various aspects, no synchronous relationship is implied between the fclk_src signal and the sclk_src signal. The ratio of fclk_src frequency to sclk_src frequency may not necessarily be an integer. Therefore, in the illustrative aspects, the counter includes a fixed-point adder to keep track of fractional remainders.
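Since the frequency ratio need not be an integer, the fixed-point bookkeeping can be sketched as follows (an editorial illustration; the 8-bit fractional width is an assumption for the example, not something the disclosure specifies):

```python
FRAC_BITS = 8  # assumed fractional width; the disclosure does not fix one

def to_fixed(increment: float) -> int:
    """Convert a count increment to fixed point, e.g. 5.25 -> 5.25 * 256 = 1344."""
    return round(increment * (1 << FRAC_BITS))

def accumulate(increments):
    """Accumulate fixed-point increments the way the adder/register pair does,
    carrying the fractional remainder from cycle to cycle."""
    acc = 0
    for inc in increments:
        acc += to_fixed(inc)
    return acc >> FRAC_BITS, acc & ((1 << FRAC_BITS) - 1)  # (whole, remainder)

# Four slow-clock cycles at mult = 5.25 leave a zero fractional remainder,
# matching the four-cycle example in the timing discussion below.
print(accumulate([5.25] * 4))  # -> (21, 0)
```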
[0023] Operation of the real-time counter according to one example of the present disclosure is described with further reference to the signal timing diagram shown in FIGURE 2 together with FIGURE 1. In this example, the frequency of the fclk_src signal is 21 MHz and the frequency of the sclk_src signal is 4 MHz. Therefore, the ratio (mult) of the fast clock and slow clock frequencies is 5.25. In the illustrated case, only four cycles of the sclk_src signal are counted, resulting in a count value with a zero decimal portion. It should be understood that the count could also include a non-zero decimal portion in many cases. [0024] The timing diagram shown in FIGURE 2 illustrates the relative timing of signals on the various signal paths shown in FIGURE 1 during a transition from normal operation to low power mode and a transition from low power mode back to normal operation. The rows labeled fclk_src, sclk_src, clk_sel, cnt_clk, cnt_sel, and cnt_out each represent signals on their respective signal paths shown in FIGURE 1. A sequence of time periods from 0 to 37 is also shown for reference along the bottom row of FIGURE 2. [0025] According to this example, in normal operation, fclk_src is coupled via the AND gate 118 and the OR gate 122 to cnt_clk, which clocks the register 126. Cnt_sel is low, which controls the multiplexer 124 to provide a count increment value of 1 to the adder 128. The adder adds the increment value to the last counter output to generate a next count value. Upon each rising edge of cnt_clk at the register clock input, the next count value is shifted into the register and the current count value is shifted out of the register as cnt_out. [0026] At about time interval 2, clk_sel changes state to high, indicating a signal from external circuitry to enter a low power mode. The clk_sel signal does not propagate through the first flip flop 108 until the next falling edge of sclk_src is applied to the inverted clock input of the first flip flop 108 at about time interval 5. This indication to select slow clock operation is then provided to AND gates 110 and 116 and changes the input to the second flip flop 112 from high to low. [0027] Upon the next falling edge of fclk_src, which occurs at about time interval 6, a low signal state is propagated through the second flip flop 112 to the input of the third flip flop 114. Upon the next falling edge of fclk_src, which occurs at about time interval 7, the low signal state is propagated through the third flip flop 114 to the AND gate 118 and the AND gate 116. The low input to the AND gate 118 shuts off the fclk_src from the OR gate 122 and ultimately from the clock input cnt_clk of the register 126. At the same time, the same low signal to the inverted input of the AND gate 116 changes the output state of the AND gate 116 (cnt_sel) from low to high. This causes the multiplexer 124 to begin providing the second increment value (mult) to the adder 128. This also causes the AND gate 120 to pass the sclk_src to the OR gate 122 and ultimately to the clock input cnt_clk of the register 126. [0028] Upon the next rising edge of sclk_src, which occurs at about time interval 8, the register output cnt_out is incremented by only the first increment value '1', which had been shifted in from the adder before the fclk_src had been shut off from the register clock cnt_clk. At the same time, a next count that is incremented by the second increment value (mult) is shifted into the register 126 from the adder 128.
[0029] At about time interval 10, the external circuitry shuts off fclk_src to save energy. This does not affect the counter, which by this time is incremented in response to sclk_src. The next rising edge of sclk_src occurs at about time interval 13 and is applied to the clock input cnt_clk of the register 126. This causes the register 126 to output a count incremented by the second increment value (mult), which is 5.25 in this example. In response to this same rising edge, a new count further incremented by mult (i.e., equal to mult plus cnt_out) is shifted into the register 126 from the adder 128. This is repeated upon the next rising edge of sclk_src, which occurs at about time interval 18. [0030] At about time interval 20, clk_sel changes state to low, indicating a signal from external circuitry to enter a normal operating mode. The clk_sel signal does not propagate through the first flip flop 108 until the next falling edge of sclk_src is applied to the inverted clock input of the first flip flop 108 at about time interval 26. This indication to select normal operation using the fclk_src is then provided to the AND gates 110 and 116 and changes the input to the second flip flop 112 from low to high. [0031] Upon the next falling edge of fclk_src, which occurs at about time interval 27, a high signal state is propagated through the second flip flop 112 to the input of the third flip flop 114. Upon the next falling edge of fclk_src, which occurs at about time interval 28, the high signal state is propagated through the third flip flop 114 to the AND gate 118 and the AND gate 116. The high input to the AND gate 118 turns on fclk_src to the OR gate 122 and ultimately to the clock input cnt_clk of the register 126. At the same time, the same high signal to the inverted input of the AND gate 116 changes the output state of the AND gate 116 (cnt_sel) from high to low. This causes the multiplexer 124 to begin providing the first increment value ('1') to the adder 128. This also causes the AND gate 120 to shut off the sclk_src from the OR gate 122 and ultimately from the clock input cnt_clk of the register 126. [0032] Upon the next rising edge of fclk_src, which occurs at about time interval 28, the register output cnt_out is still incremented by the second increment value (mult) that had been shifted in from the adder before the sclk_src had been shut off from the register clock cnt_clk. At the same time, a next count incremented by the first increment value ('1') is shifted into the register 126 from the adder 128. [0033] The next rising edge of fclk_src occurs at about time interval 29 and is applied to the clock input cnt_clk of the register 126. This causes the register 126 to output a count incremented by the first increment value '1'. In response to this same rising edge, a new count further incremented by '1' (i.e., equal to 1 plus cnt_out) is shifted into the register 126 from the adder 128. This is repeated upon the rising edge of each following cycle of fclk_src.
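The shift-in/shift-out behavior described in paragraphs [0025], [0028] and [0032], where the first edge after a clock switch still produces the increment selected before the switch, can be replayed numerically. This sketch reads the description's "shifted in/shifted out" language as a one-edge delay between the adder and cnt_out; the names and the edge schedule are illustrative only:

```python
mult = 5.25  # slow-period / fast-period ratio from the example

def replay(schedule):
    """Replay counter clock edges; `schedule` gives the increment selected
    by cnt_sel at the moment each edge arrives at cnt_clk."""
    pending = 1.0          # next count already shifted in, per [0025]
    outputs = []
    for inc in schedule:
        outputs.append(pending)        # current count shifts out as cnt_out
        pending = outputs[-1] + inc    # next count shifts in from the adder
    return outputs

# Two fast edges, then slow edges with mult already selected: the first
# slow edge still emits the '+1' prepared earlier ([0028]); mult appears
# one edge later.
print(replay([1, 1, mult, mult, mult]))
# -> [1.0, 2.0, 3.0, 8.25, 13.5]
```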
[0034] A method for providing an always-on real-time counter according to one aspect of the present disclosure is described with reference to FIGURE 3. The method includes receiving a fast clock signal on a first signal path in block 302, receiving a slow clock signal on a second signal path in block 304, and receiving a clock select signal on a third signal path in block 306. The clock select signal indicates selection of either a fast clock mode or a slow clock mode. The method further includes synchronizing transitions of the clock select signal with the slow clock in block 308. In block 310, the method includes increasing a counter output by a first counter increment for each period of the fast clock in response to the clock select signal indicating the fast clock mode. In block 312, the method includes increasing the counter output by a second counter increment for each period of the slow clock in response to the clock select signal indicating the slow clock mode. The second counter increment equals a ratio of the period of the slow clock divided by the period of the fast clock. [0035] FIGURE 4 shows an exemplary wireless communication system 400 in which an aspect of a multi-clock real-time counter may be advantageously employed according to one aspect of the present disclosure. For purposes of illustration, FIGURE 4 shows three remote units 420, 430, and 450 and two base stations 440. It should be recognized that typical wireless communication systems may have many more remote units and base stations. Any of the remote units 420, 430, and 450, as well as the base stations 440, may include improved clock circuitry such as disclosed herein. FIGURE 4 shows forward link signals 480 from the base stations 440 to the remote units 420, 430, and 450 and reverse link signals 490 from the remote units 420, 430, and 450 to the base stations 440. [0036] In FIGURE 4, a remote unit 420 is shown as a mobile telephone, a remote unit 430 is shown as a portable computer, and a remote unit 450 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote units may be cell phones, hand-held personal communication systems (PCS) units, tablets, portable data units such as personal data assistants, or fixed location data units such as meter reading equipment. Although FIGURE 4 illustrates certain exemplary remote units that may include an improved clock system as disclosed herein, the clock system is not limited to these exemplary illustrated units. Aspects may be suitably employed in any electronic device in which a slow clock and a fast clock are desired. [0037] Although certain aspects of the present disclosure are described in terms of particular combinations of logic elements including AND gates, OR gates, flip flops and registers, for example, it should be understood that various alternative combinational logic elements, including inverters, NAND gates and the like, for example, may be configured to provide the disclosed functionality within the scope of the present disclosure. Persons having ordinary skill in the art may select the combinational logic elements best suited to a particular circuit layout to perform the disclosed functionality. [0038] While exemplary aspects incorporating the principles of the present disclosure have been disclosed hereinabove, the present disclosure is not limited to the disclosed aspects. Instead, this application is intended to cover any variations, uses, or adaptations of the disclosure using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this disclosure pertains and which fall within the limits of the appended claims. |
A method, apparatus, and system are provided for monitoring locks using monitor-memory wait. According to one embodiment, a node associated with a contended lock is monitored; and a processor seeking the contended lock is put to sleep until a monitor event occurs. |
CLAIMS What is claimed is: 1. A method, comprising: monitoring a node associated with a contended lock; and putting a processor seeking to acquire the contended lock to sleep until an event occurs. 2. The method of claim 1, wherein the monitoring the node comprises monitoring a lock address corresponding to the contended lock by executing a monitor instruction to activate the monitoring of the node. 3. The method of claim 1, further comprising executing a memory wait (mwait) instruction to put the processor to sleep until the event occurs. 4. The method of claim 1, further comprising: waking up the processor when the event occurs, wherein the event comprises the contended lock becoming available; and the processor acquiring the available lock. 5. The method of claim 1, wherein the contended lock becoming available comprises the processor being next in a queue to acquire the contended lock, and the contended lock being released. 6. The method of claim 1, wherein the putting the processor to sleep comprises relinquishing of resources by the processor for other processors to use. 7. The method of claim 4, wherein the waking up comprises inactivating the monitoring of the node, and the processor using the relinquished resources. 8. The method of claim 7, wherein the relinquishing comprises: relinquishing of a plurality of registers in a register pool; relinquishing of a plurality of instruction queue entries in an instruction queue; relinquishing of a plurality of store buffer entries in a store buffer; and relinquishing of a plurality of re-order buffer entries in a re-order buffer. 9. A method, comprising specifying a monitor address associated with a queue element to monitor the queue element, wherein the specifying comprises executing a monitor instruction and a memory wait (mwait) instruction. 10. The method of claim 9, wherein the queue element corresponds with a processor to acquire a contended lock. 11. The method of claim 10, wherein the processor is put to sleep while waiting for the contended lock using a combination of monitor/mwait. 12. The method of claim 11, wherein the processor is awakened when an event occurs, the event comprising the processor being next in a queue to acquire the contended lock, and the contended lock being released. 13. A processor, comprising: an execution unit to execute a monitor instruction and a memory wait (mwait) instruction to monitor a node associated with a contended lock; and logic to put a logical processor seeking to acquire the contended lock to sleep until an event has occurred. 14. The processor of claim 13, further comprising detection logic to detect the occurrence of the event, wherein the event comprises a designated event including the contended lock becoming available. 15. The processor of claim 13, wherein the putting the logical processor to sleep comprises relinquishing of resources by the logical processor for other logical processors to use. 16. The processor of claim 13, wherein the logic is further to wake up the logical processor when the event occurs, the waking up comprises inactivating the monitoring of the node, and the logical processor using the relinquished resources. 17.
The processor of claim 16, wherein the relinquishing comprises: relinquishing of a plurality of registers in a register pool; relinquishing of a plurality of instruction queue entries in an instruction queue; relinquishing of a plurality of store buffer entries in a store buffer; and relinquishing of a plurality of re-order buffer entries in a re-order buffer. 18. A system comprising: a storage medium; and a processor coupled with the storage medium, the processor having an execution unit to execute a monitor instruction and a memory wait (mwait) instruction to monitor a node associated with a contended lock; and logic to put a logical processor seeking to acquire the contended lock to sleep until an event has occurred. 19. The system of claim 18, further comprising detection logic to detect the occurrence of the event, wherein the event comprises a designated event including the contended lock becoming available. 20. The system of claim 18, wherein the putting the logical processor to sleep comprises relinquishing of resources by the logical processor for other logical processors to use. 21. The system of claim 18, wherein the logic is further to wake up the logical processor when the event occurs, the waking up comprises inactivating the monitoring of the node, and the logical processor using the relinquished resources. 22. A machine-readable medium having stored thereon data representing sequences of instructions, the sequences of instructions which, when executed by a machine, cause the machine to: monitor a node associated with a contended lock; and put a processor seeking to acquire the contended lock to sleep until an event occurs. 23. The machine-readable medium of claim 22, wherein the monitoring the node comprises monitoring a lock address corresponding to the contended lock by executing a monitor instruction to activate the monitoring of the node. 24. The machine-readable medium of claim 22, wherein the sequences of instructions which, when executed by the machine, further cause the machine to execute a memory wait (mwait) instruction to put the processor to sleep until the event occurs. 25. The machine-readable medium of claim 22, wherein the sequences of instructions which, when executed by the machine, further cause the machine to: wake up the processor when the event occurs, wherein the event comprises the contended lock becoming available; and allow the processor to acquire the available lock. 26. The machine-readable medium of claim 22, wherein the putting the processor to sleep comprises relinquishing of resources by the processor for other processors to use. 27. A machine-readable medium having stored thereon data representing sequences of instructions, the sequences of instructions which, when executed by a machine, cause the machine to specify a monitor address associated with a queue element to monitor the queue element, wherein the specifying comprises executing a monitor instruction and a memory wait (mwait) instruction. 28. The machine-readable medium of claim 27, wherein the queue element corresponds with a processor to acquire a contended lock. 29. The machine-readable medium of claim 28, wherein the sequences of instructions which, when executed by the machine, further cause the machine to put the processor to sleep while waiting for the contended lock using a combination of monitor/mwait. 30.
The machine-readable medium of claim 29, wherein the sequences of instructions which, when executed by the machine, further cause the machine to awaken the processor when an event occurs, the event comprising the contended lock becoming available. |
QUEUED LOCKS USING MONITOR-MEMORY WAIT BACKGROUND OF THE INVENTION Field of the Invention [0001] This invention relates to processors, and more particularly to using monitor-memory wait to monitor a lock for one or more processors waiting for the lock until the lock becomes available. Description of Related Art [0002] Typically, a hyperthreaded or multi-threaded processor is capable of processing multiple instruction sequences concurrently. A primary motivating factor driving execution of multiple instruction streams within a single processor is the resulting improvement in processor utilization. Hyperthreaded processors allow multiple instruction streams to execute concurrently in different execution resources in an attempt to better utilize those resources. Furthermore, hyperthreaded processors can be used for programs that encounter high latency delays or which often wait for events to occur. [0003] Typically, hyperthreaded processors have a single resource setup that is to be shared by all threads or logical processors (processors). Not having adequate resources may result in significant contention between processors, particularly when one or more processors wait for a lock to become available. Several techniques have been proposed to reduce program operation inefficiency and other resource-consuming delays dealing with lock contention between multiple processors. For example, in a conventional spin-wait lock system, a waiting queue is used to put the processor waiting for the lock on a waiting list until the lock becomes available. However, during such waiting, the processor continuously accesses the memory location of the lock, causing memory contention on that memory location, bottlenecking of resources, and waste of memory bandwidth, compute bandwidth, microarchitectural resources, and power. Such "busy waiting" processors can have an adverse effect on the performance of other processors. BRIEF DESCRIPTION OF THE DRAWINGS [0004] The appended claims set forth the features of the present invention with particularity.
The embodiments of the present invention, together with their advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings, of which: [0005] Figure 1 is a block diagram illustrating an embodiment of a hyperthreaded processor having a memory access monitor; [0006] Figure 2 is a flow diagram illustrating an embodiment of an operation of a hyperthreaded processor; [0007] Figure 3 is a block diagram illustrating an embodiment of a hyperthreaded processor; [0008] Figure 4 is a block diagram illustrating an embodiment of a process for partitioning, sharing, and duplicating of resources; [0009] Figure 5 is a flow diagram illustrating an embodiment of a process for suspending and resuming execution of a thread; [0010] Figure 6 is a flow diagram illustrating an embodiment of a process for activation and operation of monitoring logic; [0011] Figure 7 is a flow diagram illustrating an embodiment of a process for monitor operations; [0012] Figure 8 is a flow diagram illustrating an embodiment of a process for acquiring a lock and monitoring the lock using monitor-memory wait; [0013] Figure 9 is a flow diagram illustrating an embodiment of a process for releasing a lock and monitoring the lock using monitor-memory wait; [0014] Figure 10 is a block diagram illustrating an embodiment of a system; and [0015] Figure 11 is a block diagram illustrating an embodiment of various design representations or formats for simulation, emulation, and fabrication of a design. DETAILED DESCRIPTION [0016] A method and apparatus are described for monitoring a lock for one or more processors waiting for the lock. Broadly stated, embodiments of the present invention provide for using monitor-memory wait to monitor a lock for one or more processors waiting for the lock until the lock becomes available. [0017] A system, apparatus, and method are provided for putting to sleep a processor seeking to acquire a lock that may be contended by other processors, until a monitor event occurs, such as the lock becoming available to the processor. Stated differently, although the processor may be waiting for the lock to become available, it may sleep while waiting in a queue. According to one embodiment, the option of the processor sleeping may include the processor relinquishing its resources and providing the relinquished resources to be used by other processors. According to one embodiment, the processor seeking the lock may be a logical processor of a hyperthreaded processor. A typical hyperthreaded processor may include multiple threads or logical processors sharing the same resources. [0018] According to one embodiment, the monitor-memory wait (monitor-mwait) mechanism may be used to monitor the contended lock and to put the processor to sleep until, for example, the lock becomes available. The contended lock may refer to a lock that one or more processors wait or seek to acquire. According to one embodiment, a node or queue element (node) may be generated corresponding to the processor. According to one embodiment, the node may be initialized, associated with the contended lock, and monitored, using monitor-mwait. The monitoring of the node may include monitoring the lock by, for example, monitoring the lock address of the lock, which may be referred to as the monitor address. According to one embodiment, one or more events, or a set time period, may be referred to as monitor events, and upon the occurrence of a monitor event, the monitoring of the node may end and the processor may be awakened. For example, the processor being next in the queue to claim the lock and the lock becoming available may be referred to as a monitor event. Stated differently, when the processor is next (or first) in line to receive the contended lock and the lock becomes available, the processor may claim the lock and may also reclaim some or all of the previously relinquished resources. According to one embodiment, the contended lock may become available when released by another processor owning the lock.
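The queue-element scheme described above can be sketched in software as an MCS-style queued lock, with a per-node event standing in for monitor-mwait (an editorial analogy only; the disclosure concerns hardware threads and opcodes, and all names here are invented):

```python
import threading

class Node:
    """Queue element generated for each processor that waits on the lock."""
    def __init__(self):
        self.event = threading.Event()  # stands in for monitor-mwait on the node
        self.next = None

class QueuedLock:
    """Each waiter sleeps on its own node and is woken only when it is next
    in line, instead of every waiter polling the lock address."""
    def __init__(self):
        self._tail = None
        self._guard = threading.Lock()  # protects only the queue structure

    def acquire(self):
        node = Node()
        with self._guard:
            prev, self._tail = self._tail, node
        if prev is not None:            # lock is contended: join the queue
            prev.next = node
            node.event.wait()           # "mwait": sleep until the monitor event
        return node                     # caller passes this back to release()

    def release(self, node):
        with self._guard:
            if self._tail is node and node.next is None:
                self._tail = None       # no waiter: the lock simply becomes free
                return
        while node.next is None:        # successor is still linking itself in
            pass
        node.next.event.set()           # the write that wakes the next waiter

# Usage: n = lk.acquire(); ...critical section...; lk.release(n)
```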
[0020] According to one embodiment, monitor-mwait may be implemented in one thread or processor while letting other processors use processing resources. For example, according to one embodiment, a monitor may be set up such that a processor may sleep until a particular memory access, such as a write to a specified memory location, occurs. A processor may be awakened upon a specified event without executing routines that may waste processor resources. According to one embodiment, partitions previously dedicated to the now sleeping processor may be relinquished while the processor is still sleeping. These and/or other embodiments of the present invention may improve the overall machine throughput. [0021] In the following description, numerous specific details such as logic implementations, opcodes, resource partitioning, resource sharing, and resource duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of various embodiments of the present invention. It will be appreciated, however, by one skilled in the art that the embodiments of the present invention may be practiced without such specific details, based on the disclosure provided. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. Various steps of the embodiments of the present invention will be described below. The various steps of the embodiments may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or a machine or logic circuits programmed with the instructions to perform the various steps. Alternatively, the various steps of the embodiments may be performed by a combination of hardware and software. [0023] Various embodiments of the present invention may be provided as a computer program product, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process according to various embodiments of the present invention.
Moreover, various embodiments of the present invention may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e. g. , a modem or network connection). [0024] Figure 1 is a block diagram illustrating an embodiment of a hyperthreaded processor 100 having a memory access monitor 110. According to one embodiment a processor 100 may be formed as a single integrated circuit. According to another embodiment, multiple integrated circuits may together form a processor 100, and according to yet another embodiment, hardware and software routines (e. g. , binary translation routines) may together form the processor 100. As illustrated, a bus/memory controller 120 may provide instructions for execution to a front end 130. The front end 130 may direct the retrieval of instructions from various threads according to instruction pointers 170. Instruction pointer logic may be replicated to support multiple threads. [0025] According to one embodiment, the front end 130 may feed instructions into thread/processor partitionable resources 140 for further processing. The thread/processor partitionable resources 140 may include logically separated partitions dedicated to particular threads when multiple threads are active within the processor 100. According to one embodiment, each separate partition may only contain instructions from the thread to which that portion is dedicated. The thread/processor partitionable resources 140 may include, for example, instruction queues. When in a single thread mode, the partitions of the thread/processor partitionable resources 140 may be combined to form a single large partition dedicated to the one thread. [0026] According to one embodiment, the processor 100 may also include replicated state 180. The replicated state 180 may include state variables sufficient to maintain context for a logical processor. With replicated state 180, multiple threads may execute without competition for state variable storage. Additionally, register allocation logic may be replicated for each thread. The replicated state-related logic may operate with the appropriate resource partitions to prepare incoming instructions for execution. <Desc/Clms Page number 6> [0027] According to one embodiment, the thread/processor partitionable resources 140 may pass instructions along to shared resources 150. The shared resources 150 may operate on instructions without regard to their origin. For example, scheduler and execution units may be thread-unaware shared resources. The partitionable resources 140 may feed instructions from multiple threads to the shared resources 150 by alternating between the threads in a fair manner that provides continued progress on each active thread. Thus, the shared resources 150 may execute the provided instructions on the appropriate state without concern for the thread mix. [0028] According to one embodiment, the shared resources 150 may be followed by another set of thread/processor partitionable resources 160. The thread/processor partitionable resources 160 may include retirement resources, such as a re-order buffer. Accordingly, the thread/processor partitionable resources 160 may ensure that execution of instructions from each thread concludes properly and that the appropriate state for that thread is appropriately updated. 
[0029] According to one embodiment, programmers may be provided with a mechanism to implement the functionality of monitor-memory wait without requiring constant polling of a memory location or even execution of instructions. For example, the processor 100 may include a memory access monitor 110. The memory access monitor 110 may be programmable with information about a memory access cycle for which the memory access monitor 110 may be enabled to watch. Accordingly, the memory access monitor 110 may include a monitor cycle information register 112, which is compared against bus cycle information received from the bus/memory controller 120 by comparison logic 114. If a match occurs, a resume thread signal may be generated to restart a suspended thread. Memory access information may be obtained from internal and/or external buses of the processor. [0030] The monitor cycle information register 112 may contain details specifying the type of cycle and/or the address which may trigger the resumption of a thread. According to one embodiment, the monitor cycle information register 112 may store a physical address, and the memory access monitor 110 may watch for any bus cycle that indicates an actual or potential write to that physical address. Such a cycle may be in the form of an explicit write cycle and/or may be a read for ownership or an invalidating cycle by another agent attempting to take exclusive ownership of a cacheable line so that it may write to that line without an external bus transaction. The memory access monitor 110 may be programmed to trigger on various transactions in different embodiments. [0031] Figure 2 is a flow diagram illustrating an embodiment of an operation of a hyperthreaded processor. The operations of the various embodiments of Figure 1 may be further explained with reference to the flow diagram of Figure 2. According to one embodiment, the instruction set of the processor 100 may include a MONITOR opcode (instruction) to set up the monitor transaction information. At processing block 200, the MONITOR opcode is received as a part of the sequence of instructions of a first thread (T1). At processing block 210, in response to the MONITOR opcode, the processor 100 enables the memory access monitor 110 to monitor memory accesses for the specified memory access. The triggering memory access may be specified by an implicit or explicit operand. Therefore, executing the MONITOR opcode may specify the monitor address implicitly, as the monitor address may be stored in advance in a register or other location as an implicit operand. The memory access monitor 110 may test whether the specified cycle is detected at decision block 215. If the specified cycle is not detected, the memory access monitor 110 may continue monitoring memory accesses. If the triggering cycle is detected, then a monitor event pending indicator may be set at processing block 220. [0032] According to one embodiment, the execution of the MONITOR opcode may trigger activation of the memory access monitor 110. The memory access monitor 110 may begin to operate in parallel with other operations in the processor 100. According to one embodiment, the MONITOR instruction itself may only set up the memory access monitor 110 with the proper memory cycle information and activate the memory access monitor 110, without unmasking monitor events. Stated differently, after the execution of the MONITOR opcode, monitor events may accrue, but may not be recognized unless they are explicitly unmasked.
At processing block 225, triggering of a memory wait (mwait) is illustrated as a separate event. According to one embodiment, an MWAIT opcode may be used to trigger the recognition of monitor events and the suspension of T1. Using two separate instructions to set up and trigger the thread suspension may provide a programmer added flexibility and allow more efficient programming. According to another embodiment, mwait may be triggered from the first opcode, which may also set up the memory access monitor 110. In either case, one or more instructions may arm the memory access monitor 110 and enable recognition of monitor events. [0034] According to one embodiment, where separate opcodes are used to arm the memory access monitor 110 and to trigger the recognition of monitor events, a test may be performed to ensure that the memory access monitor 110 has been activated before suspending the thread at decision block 230. Furthermore, by testing if a monitor event is already pending (not illustrated), suspension of T1 may be avoided, and operation may continue at processing block 250. If the monitor 110 has been enabled and no monitor events are already pending, T1 may be suspended at processing block 235. [0035] With T1 suspended, according to one embodiment, the processor 100 may enter an implementation-dependent state which may allow other threads to more fully utilize the processor resources. According to one embodiment, the processor may relinquish some or all of the partitions of partitionable resources 140 and 160 that were dedicated to T1. According to another embodiment, different permutations of the MONITOR opcode or settings associated therewith may indicate which resources to relinquish, if any. For example, when a programmer anticipates a shorter wait, the thread may be suspended, but maintain its resource partitions. Throughput may still be enhanced because the shared resources may be used exclusively by other threads during the thread suspension period. When a longer wait is anticipated, relinquishing all partitions associated with the suspended thread may allow other threads to have additional resources, potentially increasing the throughput of the other threads. The additional throughput may come at the cost of the overhead associated with removing and adding partitions when threads are respectively suspended and resumed. [0036] According to one embodiment, T1 may remain in a suspended state until a monitor event is pending. As previously discussed, the memory access monitor 110 may operate independently to detect and signal monitor events (blocks 215-220). If the processor 100 detects that a monitor event is pending at decision block 240, then T1 may be resumed at processing block 250. No active processing of instructions in T1 need occur for the monitor event to wake up T1; rather, T1 may remain suspended and the enabled memory access monitor 110 may signal an event to the processor 100. The processor 100 may handle the event, recognize the event as indicating that T1 should be resumed, and perform the appropriate actions to resume T1. The embodiments of Figures 1 and 2 may provide techniques to allow a thread suspended by a program to be resumed upon the occurrence of a specified memory access. According to one embodiment, other events may also cause T1 to be resumed. For example, an interrupt may cause T1 to resume.
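The wait pattern described here (arm the monitor, then mwait, and re-check the awaited condition after waking) can be sketched as follows. This is a Python analogy with a toy monitor object standing in for the MONITOR/MWAIT opcodes; the class and its methods are invented for illustration, and real hardware performs this without any polling:

```python
import threading

class ToyMonitor:
    """Toy stand-in for the memory access monitor: monitor() arms a watch on
    an address, a store to that address raises a pending monitor event, and
    mwait() suspends until an event is pending."""
    def __init__(self):
        self._cv = threading.Condition()
        self._armed = None
        self._pending = False

    def monitor(self, addr):
        with self._cv:
            self._armed, self._pending = addr, False

    def store(self, memory, addr, value):   # a write by another processor
        memory[addr] = value
        with self._cv:
            if self._armed == addr:
                self._pending = True        # monitor event accrues
                self._cv.notify_all()

    def mwait(self):
        with self._cv:
            while not self._pending:        # suspend until an event is pending
                self._cv.wait()

def wait_until_released(mon, memory, lock_addr):
    # The outer re-check mirrors the double-check loop discussed above:
    # spurious wake-ups are tolerated, so the awaited condition is re-tested
    # and the monitor re-armed if it has not yet occurred.
    while memory[lock_addr] != 0:
        mon.monitor(lock_addr)        # arm the monitor on the lock address
        if memory[lock_addr] != 0:    # re-test after arming to close the race
            mon.mwait()               # sleep until the monitored store occurs
```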
Such an implementation may allow the memory access monitor 110 to be less than perfect in that it may miss (not detect) certain memory accesses or other conditions that should cause the thread to resume. As a result, T1 may be awakened unnecessarily at times. However, such an implementation reduces the likelihood that T1 will become permanently frozen due to a missed event, simplifying hardware design and validation. The unnecessary awakenings of T1 may be only a minor inconvenience, as a loop may be constructed to have T1 double-check whether the condition it was awaiting truly did occur, and if not, to suspend itself once again. [0038] According to one embodiment, the thread/processor partitionable resources, the replicated resources, and the shared resources may be arranged differently. In some embodiments, there may not be partitionable resources on both ends of the shared resources. According to one embodiment, the thread/processor partitionable resources may not be strictly partitioned, but rather may allow some instructions to cross partitions or may allow partitions to vary in size depending on the thread being executed in that partition or the total number of threads being executed. Additionally, different mixes of resources may be designated as shared, duplicated, and thread partitioned resources. [0039] Figure 3 is a block diagram illustrating an embodiment of a hyperthreaded processor. As illustrated, according to one embodiment, Figure 3 includes coherency related logic 350, one implementation of a monitor 310, and one specific implementation of thread suspend/resume and processor sleep/awake logic 377, among other components. According to one embodiment, a bus interface 300 includes a bus controller 340, event detect logic 345, a monitor 310, and the coherency related logic 350. [0040] According to one embodiment, the bus interface 300 may provide instructions to a front end 365, which performs micro-operation (uOP) generation, generating uOPs from macroinstructions. Execution resources 370 may receive uOPs from the front end 365, and back end logic 380 may retire various uOPs after they are executed. According to one embodiment, out-of-order execution may be supported by the front end, back end, and execution resources. [0041] According to one embodiment, a MONITOR opcode may enter the processor through the bus interface 300 and be prepared for execution by the front end 365. According to one embodiment, a special MONITOR uOP may be generated for execution by the execution resources 370. The MONITOR uOP may be treated similarly to a store operation by the execution units, with the monitor address being translated by address translation logic 375 into a physical address, which may be provided to the monitor 310. The monitor 310 may communicate with the thread suspend/resume and processor sleep/awake logic 377 to cause resumption of threads. The thread suspend/resume logic may partition and anneal resources as the number of active threads changes. [0042] For example, Figure 4 is a block diagram illustrating an embodiment of a process for partitioning, sharing, and duplicating of resources. According to one embodiment, partitioned resources may be partitioned and annealed (fused back together for re-use by other threads) according to the flow of active threads in the machine.
According to one embodiment, duplicated resources may include instruction pointer logic in the instruction fetch portion 405 of the pipeline, register renaming logic in the rename portion 415 of the pipeline, state variables (not illustrated), and an interrupt controller (not illustrated). Shared resources, according to one embodiment, may include schedulers in the schedule stage 425 of the pipeline, a pool of registers in the register read 430 and register write portions 445 of the pipeline, and execution resources in the execute portion 435 of the pipeline. Additionally, a trace cache (in I-fetch 405) and an L1 data cache (in L1 cache 440) may be shared resources populated according to memory accesses without regard to thread context. According to another embodiment, consideration of thread context may be used in caching decisions. Partitioned resources, according to one embodiment, may include two queues in queuing stages 410 of the pipeline, a re-order buffer in a retirement stage 450 of the pipeline, and a store buffer. Thread selection multiplexing logic may alternate between various duplicated and partitioned resources to provide reasonable access to both threads. [0043] For exemplary purposes, it is assumed that the partitioning, sharing, and duplicating, as illustrated in Figure 4, may be utilized in conjunction with the embodiment of Figure 3 in further describing operation of an embodiment of the processor of Figure 3. In particular, further details of operation of the embodiment of Figure 3 will now be discussed with respect to the flow diagram of Figure 5. The processor is assumed to be executing in a multi-threading mode, with at least two threads active. [0044] Figure 5 is a flow diagram illustrating an embodiment of a process for suspending and resuming execution of a thread. At processing block 500, the front end 365 may receive a MONITOR opcode during execution of a first thread (T1). The front end 365, according to one embodiment, may generate a special monitor uOP. The MONITOR uOP may be passed to the execution resources 370. The monitor uOP may have an associated address indicating the address to be monitored (the monitor address). The associated address may be in the form of an explicit operand or an implicit operand (i.e., the associated address is to be taken from a predetermined register or other storage location). The associated address may "indicate" the monitor address in that it conveys enough information to determine the monitor address (possibly in conjunction with other registers or information). For example, the associated address may be a linear address having a corresponding physical address that may be the appropriate monitor address. Alternatively, the monitor address may be given in virtual address format, or could be indicated as a relative address, or specified in other known or convenient address-specifying manners. If virtual address operands are used, it may be desirable to allow general protection faults to be recognized as break events. [0045] The monitor address may indicate any convenient unit of memory for monitoring. For example, according to one embodiment, the monitor address may indicate a cache line. However, according to another embodiment, the monitor address may indicate a portion of a cache line, a specific/selected size portion or unit of memory, which may bear different relationships to the cache line sizes of different processors, or a single address.
The monitor address may indicate a unit that includes data specified by the operand (and more data) or may indicate specifically an address for a desired unit of data. [0046] Using the illustration of Figure 3, according to one embodiment, the monitor address may be provided to the address translation logic 375 and passed along to the monitor 310, where it may be stored in a monitor address register 335. In response to the MONITOR opcode, the execution resources 370 may then enable and activate the monitor 310 as indicated in processing block 510 and further detailed in Figure 6. According to one embodiment, any store operations that occur after the MONITOR opcode may be fenced to ensure that stores may be processed and therefore detected before any thread suspension occurs. According to one embodiment, some operations may need to occur as a result of activating the monitor 310 before any subsequent instructions can be undertaken. However, processing block 510 is shown as occurring in parallel with processing block 505 because, once activated by the MONITOR opcode, the monitor 310 may continue to operate in parallel with other operations until a break event occurs, according to one embodiment. [0047] At processing block 505, a MEMORY WAIT (MWAIT) opcode may be received in thread 1. According to one embodiment, the MWAIT opcode may be executed to unmask monitor events. In response to the MWAIT opcode, a test may be performed at processing block 515 to determine whether a monitor event is pending. If no monitor event is pending, then a test may be performed at processing block 520 to determine whether the monitor is active. For example, if MWAIT is executed without previously executing a MONITOR, the monitor 310 may not be active. If either the monitor is inactive or a monitor event is pending, then thread 1 execution may be continued at processing block 565. [0048] According to one embodiment, if the monitor 310 is active and no monitor event is pending, then thread 1 execution may be suspended at processing block 525. The thread suspend/resume logic 377 may include pipeline flush logic 382 to drain the processor pipeline in order to clear all instructions at processing block 530. Once the pipeline has been drained, partition/anneal logic 385 may cause any partitioned resources associated exclusively with thread 1 to be relinquished for use by other threads at processing block 535. These relinquished resources may be annealed to form a set of larger resources for the remaining active threads to utilize. For example, referring to the two-thread example of Figure 4, all instructions related to thread 1 might be drained from both queues. Each pair of queues may then be combined to provide a larger queue to the second thread. Similarly, more registers from the register pool may be made available to the second thread, more entries from the store buffer may be freed for the second thread, and more entries in the re-order buffer may be made available to the second thread. In essence, these structures are returned to single dedicated structures of twice the size. Different proportions resulting from implementations using different numbers of threads are contemplated. [0049] According to one embodiment, at processing blocks 540, 545, and 550, various events are tested to determine whether thread 1 may be resumed. Notably, these tests may not be performed by instructions being executed as a part of thread 1.
Rather, these operations may be performed by the processor in parallel to its processing of other threads. As will be discussed in further detail with respect to Figure 6, the monitor itself may check whether a monitor write event has occurred and so indicate by setting an event pending indicator. The event pending indicator may be provided via an EVENT signal to the suspend/resume logic 377 (e.g., microcode). Microcode may recognize the monitor event at an appropriate instruction boundary in one embodiment (block 540) since this event was unmasked by the MWAIT opcode at processing block 505. Event detect logic 345 may detect other events, such as interrupts, that are designated as break events at processing block 545. Additionally, according to one embodiment, an optional timer may be used periodically to exit the memory wait state to ensure that the processor does not become frozen due to some particular sequence of events at processing block 550. If none of these events signal an exit to the mwait state, then thread 1 may remain suspended. [0050] If thread 1 is resumed, according to one embodiment, the thread suspend/resume logic 377 may again be activated upon detection of the appropriate event. Again, the pipeline may be flushed at processing block 555 to drain instructions from the pipeline so that resources can be once again partitioned to accommodate the soon-to-be-awakened thread 1. At processing block 560, the appropriate resources may be re-partitioned, and thread 1 may be resumed at processing block 565. [0051] Figure 6 is a flow diagram illustrating an embodiment of a process for activation and operation of monitoring logic. At processing block 600, front end fetching for thread 1 may be stopped to prevent further thread 1 operations from entering the machine. At processing block 605, the associated address operand may be converted from a linear address to a physical address by the address translation logic 375. At processing block 610, observability of writes to the monitored address may be increased, perhaps to force caching agents to make write operations which would affect the information stored at the monitor address visible to the monitor 310 itself. At processing block 615, the physical address for monitoring may be stored, although this storing may occur earlier or later in this sequence. [0052] Next, according to one embodiment, at processing block 620, the monitor may be enabled. The monitor may monitor bus cycles for writes to the physical address, which may be the monitor address stored in the monitor address register 335. Further details of the monitoring operation are discussed below with respect to Figure 7. After the monitor is enabled, according to one embodiment, a store fence operation may be executed at processing block 625. The store fence may help ensure that all stores in the machine are processed at the time the MONITOR opcode completes execution. With all stores from before the monitor being drained from the machine, the likelihood that a memory wait (mwait) state is entered erroneously may be reduced. The store fence operation may serve as a precaution, and may be a time-consuming operation. [0053] The store fence may be optional because the monitor-mwait mechanism, according to one embodiment, may be designed as a multiple exit mechanism. Stated differently, various events such as certain interrupts, recognitions, on-board system timers, etc., may also cause exit from the mwait state.
According to one embodiment, the thread may be awakened because the data value being monitored has changed. Accordingly, according to one embodiment, software may double-check whether the particular value stored in the memory has changed. According to one embodiment, certain events, including assertion of a Non-Maskable Interrupt (NMI) or a System Management Interrupt (SMI), machine check interrupts, and faults, may be considered break events, and other events, such as power-down events, may not. According to one embodiment, for example, assertion of an A20M pin may also be regarded as a break event. [0054] At processing block 630, according to one embodiment, the monitor may continue to test whether bus cycles occurring indicate or appear to indicate a write to the monitor address. If such a bus cycle is detected, the monitor event pending indicator may be set at processing block 635. After execution of the MWAIT opcode (block 505, Figure 5), this event pending indicator may be serviced as an event and cause thread resumption in blocks 555-565 of Figure 5. Furthermore, events that change address translation may cause thread 1 to resume. For example, events that cause a translation look-aside buffer to be flushed may trigger resumption of thread 1, since the translation made to generate the monitor address from a linear to a physical address may no longer be valid. For example, in an x86 Intel Architecture compatible processor, writes to control registers CR0, CR3, and CR4, as well as to certain machine specific registers, may cause exit from the mwait state. [0055] Figure 7 is a flow diagram illustrating an embodiment of a process for handling monitor operations. In particular, Figure 7 illustrates further details of operation of the monitor 310 of Figure 3 and of the processing block 620 of Figure 6. According to one embodiment, at processing block 700, the monitor 310 may receive request and address information from a bus controller 340 for a bus transaction. At processing block 710, the monitor 310 may examine the bus cycle type and the address(es) affected. In particular, cycle compare logic 320 may determine whether the bus cycle is a specified cycle. According to one embodiment, an address comparison circuit 330 may compare the bus transaction address to the monitor address stored in the monitor address register 335, and write detect logic 325 may decode the cycle type information from the bus controller 340 to detect whether a write has occurred. If a write to the monitor address occurs, a monitor event pending indicator may be set at processing block 720. A signal (WRITE DETECTED) may be provided to the thread suspend/resume logic 377 to signal the event (and will be serviced assuming it has been enabled by executing MEMORY WAIT (MWAIT)). Finally, the monitor 310 may be halted at processing block 730. Halting the monitor may save power, but may not be critical as long as false monitor events are masked or otherwise not generated. The monitor event indicator may also be reset at this point. Typically, servicing the monitor event may also mask the recognition of further monitor events until MWAIT is again executed. [0056] In case of a read to the monitor address, according to one embodiment, the coherency related logic 350 may be activated. At processing block 740, a signal (such as HIT#) may be asserted to prevent another agent from gaining ownership which may allow future writes without coherency broadcasts.
According to one embodiment, the monitor 310 may remain active, return to processing block 700, and stay unaffected by a read of the monitor address. Furthermore, if a transaction is neither a read nor a write to the monitor address, the monitor may remain active and return to processing block 700. [0057] According to one embodiment, the MONITOR instruction may be implemented such that only certain types of accesses may be monitored. These accesses may be ones chosen as indicative of efficient programming techniques, or may be chosen for other reasons. For example, according to one embodiment, the memory access must be a cacheable store in write-back memory that is naturally aligned. A naturally aligned element may refer to an N-bit element that starts at an address divisible by N. As a result of using naturally aligned elements, a single cache line may need to be accessed (rather than two cache lines, as may be needed in the case where data is split across two cache lines) in order to write to the monitored address. Thus, using naturally aligned memory addresses may simplify bus watching. [0058] Figure 8 is a flow diagram illustrating an embodiment of a process for acquiring a lock and monitoring the lock using monitor-memory wait. A typical hyperthreaded or multi-threaded processor may include multiple threads or multiple logical processors (processors). Typically, the multiple processors give the appearance of separate physical processors while sharing the same resources. At processing block 802, a processor may seek to acquire a lock, which may be contended by other processors. At decision block 804, whether the lock that the processor is seeking to acquire is contended by another processor is determined. A contended lock may refer to a lock that one or more processors wait to acquire. If the lock is not contended, the processor may acquire the lock in the conventional way by claiming ownership of the available lock at processing block 806. [0059] Typically, if a lock is contended by one or more processors, a waiting queue may be formed in which the processors seeking the contended lock wait. However, such waiting of the processors is typically "busy waiting," as the waiting processors use the resources available to them to, for example, access the memory location of the contended lock. At processing block 808, according to one embodiment, if the lock is contended, a queue element or node (node), such as node N, may be created for the processor. According to one embodiment, the node may then be initialized at processing block 810. According to another embodiment, the initialization of the node may not be necessary, as the node may already be initialized. At processing block 812, the initialized node may then be linked or associated with the contended lock. According to one embodiment, once associated, the node may then serve as a tail pointer for the contended lock. [0060] According to one embodiment, at processing block 814, a monitor may be set up on the node to monitor the node associated with the contended lock and thereby monitor the contended lock. The monitoring of the contended lock may include monitoring the lock address of the lock to determine whether the lock has become available for the first processor {Monitor(N.lock)}. According to one embodiment, setting up the monitor may include activating the monitor in response to the front end 365 receiving a MONITOR opcode, and the front end 365 generating a special monitor uOP.
The monitor uOP may be passed to the execution resources 370. The monitor uOP may have an associated address indicating the address to be monitored (the monitor address). According to one embodiment, the monitor address may include the lock address of the lock to which the node may be linked. The associated address may "indicate" the monitor address in that it may convey enough information to determine the monitor address (possibly in conjunction with other registers or information). [0061] As illustrated in Figure 3, according to one embodiment, the monitor address may be provided to the address translation logic 375 and passed along to the monitor, where it may be stored in a monitor address register 335. In response to the MONITOR opcode, the execution resources 370 may then enable and activate the monitor as indicated in processing block 510 and further detailed in Figure 6. According to one embodiment, once activated by the MONITOR opcode, the monitor may continue to operate in parallel with other operations until a monitor event occurs. [0062] At processing block 816, according to one embodiment, a memory wait (mwait) instruction may be executed to put the processor to sleep while waiting for the contended lock to become available. According to one embodiment, the MWAIT opcode may be received and passed to execution. According to one embodiment, execution of the MWAIT opcode may unmask various monitor events. In response to the MWAIT opcode, a test may be performed to determine whether a monitor event is pending. If no monitor event is pending, then a test may be performed to determine whether the monitor is active. For example, if MWAIT is executed without previously executing a MONITOR, the monitor may not be active. According to one embodiment, if either the monitor is inactive or a monitor event is pending, then the processor may not be put to sleep. According to one embodiment, the monitor event may refer to an event upon the occurrence of which the monitor may go inactive, ending the monitoring of the node, and the processor may be awakened. For example, a monitor event may include the processor reaching its turn to claim ownership of the lock and/or the lock becoming available to the processor when released by another processor currently owning the lock. [0063] According to one embodiment, the processor may be put to sleep using the monitor-mwait mechanism on the node at processing block 818. According to one embodiment, if the monitor is active and there is no pending monitor event, the processor may be put to sleep until the monitor event occurs. Stated differently, the first processor may sleep until, for example, the processor is recognized to be the first processor in line to claim ownership of the contended lock. Such recognition may be referred to as the occurrence of the monitor event, making the monitor inactive and waking up the processor at processing block 820. [0064] According to one embodiment, a monitor event may not be limited to one event, and various events may be tested to determine whether monitoring may be ended and the processor may be awakened. As discussed with respect to Figure 6, the monitor itself may check whether a monitor event has occurred and so indicate by setting an event pending indicator. The event pending indicator may be provided via an EVENT signal to the processor sleep/awake logic 377 (e.g., microcode).
Microcode may recognize the monitor event at an appropriate instruction boundary, according to one embodiment, since this event may have been unmasked by the MWAIT opcode. Furthermore, event detect logic 345 may be used to detect various events that are designated as monitor events. Furthermore, according to another embodiment, an optional timer may be used periodically to exit the mwait state to ensure proper working of the hyperthreaded processor and to guard against some particular sequence of events that may cause the hyperthreaded processor to freeze. If none of these events signal an exit to the mwait state, then the first processor may remain asleep. [0065] At processing block 822, the first processor, now awake, may claim ownership of the lock and may also reclaim any previously relinquished resources. Previously relinquished resources may refer to the resources relinquished by the first processor while asleep and waiting for the lock. According to one embodiment, while the processor sleeps, the processor sleep/awake logic 377 may include pipeline flush logic 382 to drain the processor pipeline in order to clear all instructions at processing block 530. Once the pipeline has been drained, partition/anneal logic 385 may cause any partitioned resources associated exclusively with the first processor to be relinquished for use by other processors. These relinquished resources may be annealed to form a set of larger resources for other processors to utilize. For example, referring to the two-thread example of Figure 4, all instructions related to thread 1 might be drained from both queues. Each pair of queues may then be combined to provide a larger queue to the second thread. Similarly, more registers from the register pool may be made available to the second thread, more entries from the store buffer may be freed for the second thread, and more entries in the re-order buffer may be made available to the second thread. In essence, these structures are returned to single dedicated structures of twice the size. Different proportions resulting from implementations using different numbers of processors are contemplated. [0066] According to one embodiment, once the first processor wakes up or resumes, the processor sleep/awake logic 377 may again be activated upon detection of the monitor event. Again, the pipeline may be flushed to drain instructions from the pipeline so that the previously relinquished resources can be once again partitioned to accommodate the soon-to-be-awakened or recently-awakened first processor. [0067] Figure 9 is a flow diagram illustrating an embodiment of a process for releasing a lock and monitoring the lock using monitor-memory wait. As described with reference to Figure 8, according to one embodiment, monitor-memory wait (monitor-mwait) may be used to monitor a contended lock by monitoring the corresponding queue element or node (node), such as node N, and to put the processor seeking the contended lock to sleep until, for example, the contended lock becomes available. Using monitor-mwait with regard to the releasing of a lock, at decision block 902, whether the lock is contended is determined. If the lock is not contended, the lock may be released at processing block 904. However, if the lock is contended, the releasing of the lock may not occur until, for example, the processor (releasing processor) owning the lock releases the lock in response to one or more events, including one or more monitor events.
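The acquire flow of Figure 8 and the release flow of Figure 9 together resemble a queue-based (MCS-style) lock in which each waiter sleeps via monitor-mwait on its own node. The following C sketch is one illustrative way to combine the two flows; it is an assumption-laden reconstruction, not the patent's specified implementation. The struct layout, the C11 atomics, and the memory orderings are all choices made for this example; only the N.next/N.lock naming follows the notation used in this description, and the MONITOR/MWAIT intrinsics typically require privileged mode:

```c
/* Illustrative MCS-style queue lock where waiters sleep via
 * monitor-mwait on their own node; a sketch, not the patent's design. */
#include <pmmintrin.h>
#include <stdatomic.h>
#include <stddef.h>

struct qnode {
    _Atomic(struct qnode *) next;   /* N.next in the description  */
    atomic_int              lock;   /* N.lock: 0 = wait, 1 = go   */
};

struct qlock {
    _Atomic(struct qnode *) tail;   /* tail pointer for the queue */
};

void qlock_acquire(struct qlock *L, struct qnode *N)
{
    struct qnode *pred;

    atomic_store(&N->next, NULL);                   /* initialize node   */
    atomic_store(&N->lock, 0);
    pred = atomic_exchange(&L->tail, N);            /* link node to lock */
    if (pred == NULL)
        return;                                     /* lock uncontended  */

    atomic_store(&pred->next, N);                   /* join the queue    */
    while (atomic_load(&N->lock) == 0) {
        _mm_monitor((const void *)&N->lock, 0, 0);  /* arm the monitor   */
        if (atomic_load(&N->lock) != 0)             /* double-check      */
            break;
        _mm_mwait(0, 0);                            /* sleep until write */
    }
}

void qlock_release(struct qlock *L, struct qnode *N)
{
    struct qnode *succ = atomic_load(&N->next);

    if (succ == NULL) {
        /* No known successor: try to swing the tail back to empty. */
        struct qnode *expected = N;
        if (atomic_compare_exchange_strong(&L->tail, &expected, NULL))
            return;
        /* A successor is linking itself in; wait for N.next to be set. */
        while ((succ = atomic_load(&N->next)) == NULL)
            ;                                       /* brief spin        */
    }
    /* The store the successor's monitor is watching. */
    atomic_store(&succ->lock, 1);
}
```

In qlock_release(), the store to succ->lock plays the role of the "store to N.next->lock" wake-up write described below: it touches the cache line on which the sleeping processor's monitor was armed, generating the monitor event that ends its mwait.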
[0068] According to one embodiment, a monitor event may refer to the processor (sleeping processor) seeking the lock being the next (or first) in line to claim the contended lock. For example, the releasing processor may issue a store to N.next->lock to wake up the sleeping processor seeking the contended lock from sleep/mwait (If (N.next != 0) {Store to N.next->lock // wake up the sleeping processor}), as described in the acquire phase with respect to Figure 8. Stated differently, at decision block 906, whether the node's next pointer has reached (or circled back to) zero (0) is determined. If the next pointer has not reached zero, i.e., N.next != 0, the releasing processor may issue a store to N.next->lock indicating that the sleeping processor is next in line to own the lock, and the sleeping processor is awakened at processing block 910. If the next pointer has reached zero, no wake-up store may be issued at processing block 908. At processing block 912, the lock is released by the releasing processor. According to one embodiment, any store operations that occur after the MONITOR opcode may be fenced to ensure that stores may be processed and detected. According to one embodiment, some operations may need to occur as a result of activating the monitor before any subsequent instructions can be undertaken, or the monitor may operate in parallel with other operations until a monitor event occurs once it is activated by the MONITOR opcode. [0069] Figure 10 is a block diagram illustrating an embodiment of a system. According to one embodiment, as illustrated, the system includes a set of N hyperthreaded processors, processors 1005-1 through 1005-N. The hyperthreaded processors 1005-1 through 1005-N are coupled with a bus 1050. According to another embodiment, a single processor or a mix of hyperthreaded processors and single-threaded processors may be used. Furthermore, other known or otherwise available system arrangements may be used. For example, the processors 1005-1 through 1005-N may be connected in a point-to-point fashion, and parts such as the memory interface may be integrated into each of the processors 1005-1 through 1005-N. [0070] According to one embodiment, a memory interface 1015 coupled with the bus 1050 is coupled with a memory 1030 and a media interface 1020. The memory 1030 may include a multi-processing ready operating system 1035, instructions for a first thread 1040, and instructions for a second thread 1045. The instructions 1040 and 1045 may include an idle loop according to one embodiment. [0071] According to one embodiment, the appropriate software to perform various functions or embodiments may be provided in any of a variety of machine-readable mediums. According to one embodiment, the media interface 1020 may provide an interface to such software. [0072] According to one embodiment, the media interface 1020 may be an interface to a storage medium (e.g., a disk drive, an optical drive, a tape drive, a volatile memory, a non-volatile memory, or the like) or to a transmission medium (e.g., a network interface or other digital or analog communications interface). The media interface 1020 may read software routines from a medium (e.g., storage medium 1092 or transmission medium 1095). Machine-readable mediums are any mediums that may store, at least temporarily, information for reading by a machine interface. This may include signal transmissions (via wire, optics, or air as the medium) and/or physical storage media 1092 such as various types of disk and memory storage devices.
[0073] Figure 11 is a block diagram illustrating an embodiment of various design representations or formats for simulation, emulation, and fabrication of a design. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language which essentially may provide a computerized model of how the designed hardware is expected to perform. The hardware model 1110 may be stored in a storage medium 1100, such as a computer memory, so that the model may be simulated using simulation software 1120 that may apply a particular test suite 1130 to the hardware model 1110 to determine whether it is performing its intended function. According to one embodiment, the simulation software 1120 may not be recorded, captured, or contained in the medium. [0074] According to one embodiment, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Such a model may be similarly simulated, sometimes by dedicated hardware simulators that form the model using programmable logic. This type of simulation, taken a degree further, may be an emulation technique. According to one embodiment, re-configurable hardware may involve a machine-readable medium storing a model employing the disclosed techniques. [0075] Furthermore, according to one embodiment, most designs, at some stage, may reach a level of data representing the physical placement of various devices in the hardware model. Where conventional semiconductor fabrication techniques may be used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. This data representing the integrated circuit may embody the disclosed techniques in that the circuitry or logic in the data can be simulated or fabricated to perform these techniques. [0076] According to one embodiment, the data may be stored in any form of a computer-readable medium. An optical or electrical wave 1160 modulated or otherwise generated to transmit such information, a memory 1150, or a magnetic or optical storage 1140 such as a disc may represent the medium. The set of bits describing the design or the particular part of the design may represent an article that may be sold in and of itself or used by others for further design or fabrication. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative and not restrictive, and that the embodiments of the present invention are not to be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. |
Solid state lighting ("SSL") devices with improved contacts and associated methods of manufacturing are disclosed herein. In one embodiment, an SSL device includes a first semiconductor material, a second semiconductor material spaced apart from the first semiconductor material, and an active region between the first and second semiconductor materials. The SSL device also includes an insulative material on the first semiconductor material, the insulative material including a plurality of openings having a size of about 1 nm to about 20 µ??, and a conductive material having discrete portions in the individual openings. |
CLAIMS I/We claim: 1. A solid state lighting (SSL) device, comprising: a first semiconductor material; a second semiconductor material spaced apart from the first semiconductor material; an active region between the first and second semiconductor materials; and an electrical terminal on the first semiconductor material, the electrical terminal including: a plurality of discrete point contacts spaced apart from one another, the individual point contacts having a size of about 1 nm to about 20 µm; and an interconnect electrically connecting the plurality of discrete point contacts. 2. The SSL device of claim 1 wherein: the terminal is a first terminal on the first semiconductor material; the SSL device further includes a second terminal on the second semiconductor material; the second terminal is separated from the first terminal by the first semiconductor material, the active region, and the second semiconductor material; the first semiconductor material includes a P-type gallium nitride (GaN) material; the second semiconductor material includes an N-type GaN material; the active region includes at least one of a bulk indium gallium nitride (InGaN) material, an InGaN single quantum well ("SQW"), and GaN/InGaN multiple quantum wells ("MQWs"); the SSL device further includes a plurality of insulative pads on the first semiconductor material; the point contacts individually include a bump extending from the first semiconductor material; an adjacent pair of the bumps are separated from one another by one of the corresponding insulative pads; and the interconnect includes an elongated structure that electrically connects at least some of the plurality of point contacts. 3. The SSL device of claim 1 wherein: the SSL device further includes a plurality of insulative pads on the first semiconductor material; and an adjacent pair of the bumps are separated from one another by one of the corresponding insulative pads. 4. The SSL device of claim 1 wherein: the SSL device further includes a plurality of insulative pads on the first semiconductor material; the point contacts individually include a bump extending from the first semiconductor material toward the interconnect; and an adjacent pair of the bumps are separated from one another by one of the corresponding insulative pads. 5. The SSL device of claim 1 wherein: the SSL device further includes a plurality of insulative pads on the first semiconductor material; the point contacts individually include a bump extending from the first semiconductor material; an adjacent pair of the bumps are separated from one another by one of the corresponding insulative pads; and the interconnect includes an elongated structure that electrically connects at least some of the plurality of bumps. 6. The SSL device of claim 1 wherein: the SSL device further includes an insulating material on the first semiconductor material; the insulating material includes a plurality of apertures; the apertures individually contain one of the point contacts extending from the first semiconductor material; and the interconnect includes an elongated structure that electrically connects at least some of the plurality of point contacts. 7.
The SSL device of claim 1 wherein: the SSL device further includes an insulating material on the first semiconductor material; the insulating material includes a plurality of apertures; the point contacts individually include a bump extending from the first semiconductor material; the apertures individually contain one of the bumps; and the interconnect includes an elongated structure that electrically connects at least some of the plurality of bumps. 8. The SSL device of claim 1 wherein: the point contacts individually include a first conductive material; the interconnect includes a second conductive material; and the first and second conductive materials are generally similar and homogeneous. 9. The SSL device of claim 1 wherein: the point contacts individually include a first conductive material; the interconnect includes a second conductive material; and the first and second conductive materials are different from each other. 10. A solid state lighting (SSL) device, comprising: a first semiconductor material; a second semiconductor material spaced apart from the first semiconductor material; an active region between the first and second semiconductor materials; a first terminal on the first semiconductor material; a plurality of apertures extending from the first terminal into the second semiconductor material via the active region and the first semiconductor material, the apertures individually having a side wall; an isolation material having a first isolation portion on the first terminal and a second isolation portion on the side walls of the apertures; and a second terminal in the plurality of apertures. 11. The SSL device of claim 10 wherein the apertures have a size of about 1 nm to about 20 µm. 12. The SSL device of claim 10 wherein: the second terminal is in contact with the second semiconductor material; and the second isolation portion electrically isolates the second terminal from the active region, the first semiconductor material, and the first terminal. 13. The SSL device of claim 10 wherein the second terminal includes a first contact portion on the first isolation portion and a second contact portion in the aperture, the second contact portion being in contact with the second semiconductor material. 14. The SSL device of claim 10 wherein: the second terminal includes a first contact portion on the first isolation portion and a second contact portion in the aperture, the second contact portion being in contact with the second semiconductor material; the first isolation portion isolates the first contact portion from the first terminal; and the second isolation portion isolates the second contact portion from the active region and the first semiconductor material. 15. The SSL device of claim 10 wherein: the first semiconductor material includes a P-type gallium nitride ("GaN"); the second semiconductor material includes an N-type GaN; the active region includes at least one of a bulk indium gallium nitride (InGaN) material, an InGaN single quantum well ("SQW"), and GaN/InGaN multiple quantum wells ("MQWs"); the second terminal includes a first contact portion on the first isolation portion and a second contact portion in the aperture, the second contact portion being in contact with the second semiconductor material; the first isolation portion isolates the first contact portion from the first terminal; and the second isolation portion isolates the second contact portion from the active region and the first semiconductor material. 16.
A solid state lighting (SSL) device, comprising: a first semiconductor material; a second semiconductor material spaced apart from the first semiconductor material; an active region between the first and second semiconductor materials; an insulative material on the first semiconductor material, the insulative material including a plurality of openings having a size of about 1 nm to about 20 µm; and a conductive material having discrete portions in the individual openings. 17. The SSL device of claim 16 wherein the insulative material includes a plurality of pads separated from one another by one of the corresponding openings. 18. The SSL device of claim 16 wherein the insulative material includes a sheet-like structure blanketing the first semiconductor material, and wherein the openings individually expose a portion of the first semiconductor material. 19. The SSL device of claim 16 wherein: the conductive material is a first conductive material; the SSL device further includes a second conductive material on the second semiconductor material; the openings individually expose a portion of the first semiconductor material; and the discrete portions of the first conductive material are in contact with the first semiconductor material. 20. The SSL device of claim 16 wherein: the conductive material is a first conductive material; the SSL device further includes a second conductive material between the insulative material and the first semiconductor material; the openings of the insulative material extend from the second conductive material to the second semiconductor material through the first semiconductor material and the active region; and the discrete portions of the first conductive material are in contact with the second semiconductor material. 21. A method of forming a solid state lighting (SSL) device, comprising: forming an SSL structure having a first semiconductor material, a second semiconductor material spaced apart from the first semiconductor material, and an active region between the first and second semiconductor materials; forming a plurality of point contacts on the first semiconductor material or the second semiconductor material, the point contacts individually having a contact area; and selecting a size of at least one of the point contacts based on a target current spread to contact area ratio. 22. The method of claim 21 wherein selecting the size of at least one of the point contacts includes selecting a size of at least one of the point contacts such that two adjacent current spreads overlap with each other. 23. The method of claim 21 wherein selecting the size of at least one of the point contacts includes selecting a size of at least one of the point contacts such that two adjacent current spreads overlap with each other while maintaining a total contact area above a target threshold. 24. The method of claim 21, further comprising adjusting the target current spread to contact area ratio based on a target light extraction efficiency of the SSL device. 25. A device, comprising: a semiconductor material; an electrical terminal on the semiconductor material, the electrical terminal including: a plurality of discrete point contacts spaced apart from one another, the individual point contacts having a size of about 1 nm to about 20 µm; and an interconnect electrically connecting the plurality of discrete point contacts. 26.
The device of claim 25 wherein: the electrical terminal is a first terminal; the semiconductor material is a first semiconductor material; the first terminal is on the first semiconductor material; the device further includes a second semiconductor material and a second terminal on the second semiconductor material; the second terminal is separated from the first terminal by the first semiconductor material and the second semiconductor material; the first semiconductor material includes a P-type gallium nitride (GaN) material; the second semiconductor material includes an N-type GaN material; the device further includes a plurality of insulative pads on the first semiconductor material; the point contacts individually include a bump extending from the first semiconductor material; an adjacent pair of the bumps are separated from one another by one of the corresponding insulative pads; and the interconnect includes an elongated structure that electrically connects at least some of the plurality of point contacts. 27. The device of claim 25 wherein: the electrical terminal is a first terminal; the semiconductor material is a first semiconductor material; the first terminal is on the first semiconductor material; the device further includes a second semiconductor material and a second terminal on the second semiconductor material; the second terminal is separated from the first terminal by the first semiconductor material and the second semiconductor material; the device further includes a plurality of insulative pads on the first semiconductor material; the point contacts individually include a bump extending from the first semiconductor material; and an adjacent pair of the bumps are separated from one another by one of the corresponding insulative pads. 28. The device of claim 25 wherein: the device further includes a plurality of insulative pads on the semiconductor material; the point contacts individually include a bump extending from the semiconductor material; and an adjacent pair of the bumps are separated from one another by one of the corresponding insulative pads. |
SOLID STATE LIGHTING DEVICES WITH POINT CONTACTS AND ASSOCIATED METHODS OF MANUFACTURING TECHNICAL FIELD [0001] The present disclosure is related to solid state lighting ("SSL") devices with point contacts and associated methods of manufacturing. BACKGROUND [0002] Mobile phones, personal digital assistants ("PDAs"), digital cameras, MP3 players, and other portable electronic devices utilize SSL devices (e.g., light emitting diodes (LEDs)) for background illumination. SSL devices are also used for signage, indoor lighting, outdoor lighting, and other types of general illumination. Figures 1A and 1B are cross-sectional and plan views, respectively, of a conventional SSL device 10. As shown in Figures 1A and 1B, the SSL device 10 includes a substrate 12 carrying an LED structure 11 having N-type gallium nitride (GaN) 14, GaN/indium gallium nitride (InGaN) multiple quantum wells ("MQWs") 16, and P-type GaN 18. The SSL device 10 also includes a first terminal 20 in contact with the N-type GaN 14 and a second terminal 22 in contact with the P-type GaN 18. The first terminal 20 includes a plurality of contact fingers 21 (three are shown for illustration purposes) coupled to one another by a cross member 23. The second terminal 22 includes a sheet-like structure. [0003] In operation, a continuous or pulsed electrical voltage is applied between the first and second terminals 20 and 22. In response, an electrical current flows from the first terminal 20, through the N-type GaN 14, the GaN/InGaN MQWs 16, and the P-type GaN 18, to the second terminal 22. The GaN/InGaN MQWs 16 then convert a portion of the electrical energy into light. The generated light is extracted from the N-type GaN 14 of the SSL device 10 for illumination, signage, and/or other suitable purposes. [0004] The SSL device 10, however, may have low light extraction efficiencies. According to conventional techniques, the first and second terminals 20 and 22 typically include aluminum, copper, or other nontransparent conductive materials. As a result, light generated in areas directly between the first and second terminals 20 and 22 can be difficult to extract. On the other hand, the areas directly between the first and second terminals 20 and 22 produce the highest intensity of light in the SSL device 10. As a result, a large portion of the light generated in the SSL device 10 may not be extracted, which results in low light extraction efficiencies. Accordingly, several improvements in increasing light extraction efficiency in SSL devices may be desirable. BRIEF DESCRIPTION OF THE DRAWINGS [0005] Figure 1A is a schematic cross-sectional diagram of an SSL device in accordance with the prior art. [0006] Figure 1B is a schematic plan view of the SSL device in Figure 1A. [0007] Figures 2A-2C are schematic cross-sectional diagrams of a portion of an SSL device illustrating a current spread in the SSL device in accordance with embodiments of the technology. [0008] Figure 3A is a plan view of an SSL device with point contacts in accordance with embodiments of the technology. [0009] Figures 3B and 3C are cross-sectional views of a portion of the SSL device in Figure 3A. [0010] Figure 3D is a bottom view of a portion of the SSL device in Figure 3B. [0011] Figure 4A is a cross-sectional view of another SSL device with point contacts in accordance with embodiments of the technology. [0012] Figure 4B is a plan view of a portion of the SSL device in Figure 4A.
[0013] Figure 5A is a cross-sectional view of another SSL device with point contacts in accordance with embodiments of the technology. [0014] Figure 5B is a plan view of a portion of the SSL device in Figure 5A. DETAILED DESCRIPTION [0015] Various embodiments of SSL devices with point contacts and associated methods of manufacturing are described below. As used hereinafter, the term "SSL device" generally refers to devices with LEDs, organic light emitting diodes ("OLEDs"), laser diodes ("LDs"), polymer light emitting diodes ("PLEDs"), and/or other suitable sources of radiation other than electrical filaments, a plasma, or a gas. The term "light extraction efficiency" generally refers to a ratio of the light extracted from an SSL device to the total light generated in the SSL device. A person skilled in the relevant art will also understand that the technology may have additional embodiments, and that the technology may be practiced without several of the details of the embodiments described below with reference to Figures 2A-5B. [0016] Figures 2A-2C are schematic cross-sectional diagrams of a portion of an SSL device 100 illustrating current spread profiles in the SSL device in accordance with embodiments of the technology. As shown in Figures 2A-2C, the SSL device 100 can include a substrate material 102, a first terminal 120, a first semiconductor material 104, an active region 106, a second semiconductor material 108, and a second terminal 122 in series. In the illustrated embodiment, the first and second terminals 120 and 122 are arranged vertically relative to each other. In other embodiments, the first and second terminals 120 and 122 can also be arranged laterally relative to each other or can have other suitable contact configurations, as discussed in more detail below with reference to Figures 5A and 5B. In any of these embodiments, the SSL device 100 can optionally include a reflective material (e.g., a silver film), a carrier material (e.g., a ceramic substrate), an optical component (e.g., a collimator), and/or other suitable components. [0017] In certain embodiments, the substrate material 102 can include silicon (Si), at least a portion of which has the Si(1,1,1) crystal orientation. In other embodiments, the substrate material 102 can include silicon with other crystal orientations (e.g., Si(1,0,0)), AlGaN, GaN, silicon carbide (SiC), sapphire (Al2O3), zinc oxide (ZnO2), a combination of the foregoing materials, and/or other suitable substrate materials. [0018] The first and second semiconductor materials 104 and 108 can be configured as cladding components for the active region 106. In certain embodiments, the first semiconductor material 104 can include N-type GaN (e.g., doped with silicon (Si)), and the second semiconductor material 108 can include P-type GaN (e.g., doped with magnesium (Mg)). In other embodiments, the first semiconductor material 104 can include P-type GaN, and the second semiconductor material 108 can include N-type GaN. In further embodiments, the first and second semiconductor materials 104 and 108 can individually include at least one of gallium arsenide (GaAs), aluminum gallium arsenide (AlGaAs), gallium arsenide phosphide (GaAsP), gallium(III) phosphide (GaP), zinc selenide (ZnSe), boron nitride (BN), AlGaN, and/or other suitable semiconductor materials. [0019] The active region 106 can include a single quantum well ("SQW"), MQWs, and/or a bulk semiconductor material.
The term "bulk semiconductor material" generally refers to a single grain semiconductor material (e.g., InGaN) with a thickness greater than about 10 nanometers and up to about 500 nanometers. In certain embodiments, the active region 106 can include an InGaN SQW, GaN/InGaN MQWs, and/or an InGaN bulk material. In other embodiments, the active region 106 can include aluminum gallium indium phosphide (AlGalnP), aluminum gallium indium nitride (AlGalnN), and/or other suitable materials or configurations. [0020] In certain embodiments, the first semiconductor material 104, the active region 106, and the second semiconductor material 108 can be formed on the substrate material 102 via metal organic chemical vapor deposition ("MOCVD"), molecular beam epitaxy ("MBE"), liquid phase epitaxy ("LPE"), and/or hydride vapor phase epitaxy ("HVPE"). In other embodiments, at least one of the foregoing components may be formed via other suitable epitaxial growth techniques. [0021] The second terminal 122 can include a sheet-like structure constructed from copper (Cu), aluminum (Al), silver (Ag), gold (Au), platinum (Pt), and/or other suitable metals or metal alloys. Techniques for forming the second terminal 122 can include MOCVD, MBE, spray pyrolysis, pulsed laser deposition, sputtering, electroplating, and/or other suitable deposition techniques. [0022] The first terminal 120 can have a generally similar structure as the first terminal 20 shown in Figure IB. For example, the first terminal 120 can include a plurality of contact fingers 121 connected to one another by a cross member 123. The contact fingers 121 and/or the cross member 123 can individually include an elongated structure and/or other suitable structures. The contact fingers 121 and the cross member 123 can be constructed from copper (Cu), aluminum (Al), silver (Ag), gold (Au), platinum (Pt), and/or other suitable metals or metal alloys. In other embodiments, the contact fingers 121 and cross member 123 can be made from a transparent conductive oxide. Though three contact fingers 121 are shown for illustration purposes in Figures 2A-2C, in other embodiments, the SSL device 100 can include one, two, four, or any other suitable number of contact fingers. [0023] It has been recognized that light extraction efficiency in the SSL device 100 can be inversely related to an area of the first terminal 120. As shown in Figures 2A-2C, the SSL device 100 can have a current spread (identified as Ri, R2, and R3 in Figures 2A-2C, respectively) in the SSL device 100. As used hereinafter, the term "current spread" generally refers to an effective area in the SSL device 100 through which a current with an effective density flows between individual portions of the first terminal 120 and the second terminal 122. In Figures 2A-2C, all the contact fingers 121 have generally the same length for illustration purposes. As a result, the area of the first terminal 120 is represented by respective widths W (identified as Wi, W2, and W3 in Figures 2A-2C, respectively) of the contact fingers 121 for discussion purposes. One of ordinary skill in the art will appreciate that the following discussion is also applicable to the cross member 123 and/or other components of the first terminal 120. 
[0024] As shown in Figures 2A-2C, the contact fingers 121 have different widths as follows: W1 > W2 > W3. In operation, when an electrical voltage is applied between the first and second terminals 120 and 122, an electrical current flows between the first and second terminals 120 and 122. It is believed that a first portion of the current may flow generally vertically following the shortest paths between the first and second terminals 120 and 122. It is also believed that a second portion of the current may flow generally transversely in the first semiconductor material 104 before flowing vertically toward the second terminal 122. As a result, the current density tends to decrease along a direction away from the edges of the first terminal 120. Thus, the current spread can be generally greater than an area of the first terminal 120. [0025] As discussed above, light generated beneath the first terminal 120 can be difficult to extract. As a result, it is desirable that the ratio of the current spread to the width of the contact fingers 121 be large. As shown in Figure 2B, when the contact fingers 121 are narrowed (e.g., to less than about 0.5 mm), the ratio of the current spread R2 to the width of the contact fingers W2 tends to increase relative to the ratio of the current spread R1 to the width of the contact fingers W1 in Figure 2A. As a result, the ratio of the current spread to the area of the contact fingers 121 increases as the area decreases, as follows: R1/W1 < R2/W2 < R3/W3. Thus, more light may be generated in regions from which light may be easily extracted, improving light extraction efficiency. Accordingly, it is more advantageous to have a large number of small contacts than a small number of large contacts. [0026] Having a large number of small contacts can also improve the current density profile in the SSL device 100. As shown in Figure 2C, when the contact fingers 121" are narrowed below a threshold value (e.g., 0.1 mm), the adjacent current spreads R3 can overlap to form overlapped areas 133. The overlapped areas 133 associated with contact fingers 121" can thus have a higher current density than in conventional devices to yield a more uniform current density profile in the SSL device 100. [0027] As discussed above, light generated in areas underneath the contact fingers 121 may be difficult to extract, and as the area of the contact fingers 121 decreases, more light may be generated from areas offset from the contact fingers 121 where it can be readily extracted from the SSL device 100. As a result, it may be advantageous to select (e.g., reduce) the area of the contact fingers 121 (e.g., the width W) based on a target light extraction efficiency. However, the contact fingers 121 and the cross member 123 with small areas may have high electrical impedance and thus degrade the electrical performance of the SSL device 100. As a result, the areas of the first terminal 120 may not be reduced excessively. Rather, selecting the areas of the first terminal 120 is a balance between the degree of current spread and the electrical performance (e.g., impedance) of the first terminal 120.
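The following small C sketch makes the trade-off numeric, in the spirit of the contact-selection method recited in claim 21. It assumes a deliberately simplified model in which current spreads a fixed lateral distance beyond each edge of a contact of width W, so the per-contact current spread scales as W + 2*LS while the blocked (light-absorbing) area scales as W; the spreading length value and the example widths are illustrative assumptions, not values from this disclosure:

```c
/* Illustrative contact-sizing sketch under an assumed fixed lateral
 * spreading-length model; not a model specified by this disclosure. */
#include <stdio.h>

#define LS_UM 50.0  /* hypothetical lateral spreading length, micrometers */

/* Ratio of effective current spread to contact width for a contact of
 * width W, assuming spread = W + 2*LS. */
static double spread_to_area_ratio(double width_um)
{
    return (width_um + 2.0 * LS_UM) / width_um;
}

int main(void)
{
    const double widths_um[] = { 500.0, 100.0, 10.0 };  /* W1 > W2 > W3 */

    /* As the contact narrows, the current-spread-to-area ratio rises,
     * which is why many small point contacts can outperform a few wide
     * fingers for light extraction, provided the total contact area
     * (and hence impedance) stays within its target. */
    for (int i = 0; i < 3; ++i)
        printf("W = %6.1f um  ->  R/W = %5.2f\n",
               widths_um[i], spread_to_area_ratio(widths_um[i]));
    return 0;
}
```

Under this assumed model, the ratio grows roughly as 1 + 2*LS/W as W shrinks, which motivates the point-contact structures described next.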
Figure 3D is a bottom view of a portion of the SSL device 200 in Figure 3B. [0029] As shown in Figures 3A and 3B, the SSL device 200 can include a plurality of insulative pads 126 on the first semiconductor material 104. The contact fingers 121 and cross member 123 of the first terminal 120 cover at least a portion of the pads 126. The pads 126 may be constructed from silicon dioxide (SiO2), silicon nitride (SiN), and/or other dielectric materials via chemical vapor deposition ("CVD"), atomic layer deposition ("ALD"), spin coating, and/or other suitable deposition techniques. [0030] In Figure 3A, the pads 126 are illustrated as all having a generally similar size and a rectilinear shape. In other embodiments, the pads 126 can also have circular, oval, trapezoidal, and/or other suitable shapes. In further embodiments, at least one of the pads 126 can have a different shape, size, or other characteristics than the other pads 126. Even though sixteen pads 126 are illustrated in Figure 3A, in other embodiments, the SSL device 200 may include any suitable number of pads. [0031] As shown in Figure 3B, the pads 126 can individually include a first surface 126a in contact with a surface 104a of the first semiconductor material 104 and a second surface 126b opposite the first surface 126a. The first terminal 120 can individually include a first portion 120a on the second surfaces 126b of the pads 126 and a second portion 120b between adjacent pads 126. The second portion 120b includes a plurality of sections in contact with a portion of the surface 104a of the first semiconductor material 104. As a result, the second portion 120b forms an electrical connection with the first semiconductor material 104 while the first portion 120a forms an interconnect that electrically couples all sections of the second portion 120b. Though the first and second portions 120a and 120b are shown as having the same material of construction and are generally homogeneous, in other embodiments, the first and second portions 120a and 120b may include different materials. [0032] As shown in Figure 3C, the individual pads 126 separate the first portion 120a of the individual contact fingers 121 from the first semiconductor material 104. As a result, the first portion 120a of the individual contact fingers 121 is insulated from the first semiconductor material 104. In the illustrated embodiment, the individual contact fingers 121 have a smaller width than the corresponding pads 126. In other embodiments, the individual contact fingers 121 can have generally the same width as the corresponding pads 126. [0033] Figure 3D shows a plan view of the SSL device 200 at an interface between the pads 126 and the first semiconductor material 104. As shown in Figure 3D, the second portion 120b of the first terminal 120 can have a generally rectangular cross section and is arranged in an array. Sections of the second portion 120b can individually form a plurality of point contacts 127 as pillars, bumps, and/or other suitable structures on the first semiconductor material 104. Adjacent point contacts 127 are separated from one another by one of the corresponding pads 126. [0034] Several embodiments of the SSL device 200 can have high current spreads while maintaining an adequate total electrical contact area. In certain embodiments, the individual point contacts 127 can be sufficiently small (e.g., with a width less than about 0.1 mm) to induce large current spreads in the SSL device 200.
At the same time, the total area of the point contacts 127 may be maintained because the SSL device 200 may include a sufficient number of point contacts 127 based on a target contact area. [0035] Even though the pads 126 are discussed above with reference to Figures 3A-3D as discrete structures, in other embodiments, the pads 126 can be interconnected and generally conformal to the first semiconductor material 104. Figure 4A is a cross-sectional view of an SSL device 200 in accordance with additional embodiments of the technology. Figure 4B is a plan view of a portion of the SSL device 200 in Figure 4A. As shown in Figures 4A and 4B, the SSL device 200 includes an insulative material 125 generally blanketing the first semiconductor material 104. The insulative material 125 can include a plurality of vias 128 individually containing the second portion 120b of the first terminal 120. Referring to Figure 4B, the individual sections of the second portion 120b in the vias 128 accordingly define an array of discrete point contacts 127. The insulative material 125 may be constructed from silicon dioxide (SiO2), silicon nitride (SiN), and/or other dielectric materials via chemical vapor deposition ("CVD"), atomic layer deposition ("ALD"), spin coating, and/or other suitable deposition techniques. [0036] Figure 5A is a cross-sectional view of an SSL device 300 with point contacts in accordance with additional embodiments of the technology. Figure 5B is a plan view of a portion of the SSL device 300 in Figure 5A. As shown in Figure 5A, the SSL device 300 can include a plurality of openings 140 extending from a surface 120a of the first terminal 120 to the second semiconductor material 108. The SSL device 300 also includes an isolation material 130 on the surface 120a of the first terminal 120 and side walls 142 of the openings 140. [0037] The second terminal 122 includes a first portion 122a on the isolation material 130 and a second portion 122b in the openings 140. Parts of the second portion 122b in the individual openings 140 form the point contacts 127. As a result, the second portion 122b is in electrical connection with the first semiconductor material 104 while the first portion 122a interconnects the second portion 122b. [0038] In certain embodiments, the individual point contacts 127 can have a size generally similar to a thickness of the first semiconductor material 104 or the second semiconductor material 108. For example, in one embodiment, the individual point contacts 127 can have a size (e.g., a width, a length, a diameter, or a diagonal length) of about 2 μm to about 4 μm. In other embodiments, the individual point contacts 127 can have a size of about 1 nm to about 20 μm. In further embodiments, the individual point contacts 127 can have other suitable sizes. [0039] From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. In addition, many of the elements of one embodiment may be combined with other embodiments in addition to or in lieu of the elements of the other embodiments. Accordingly, the disclosure is not limited except as by the appended claims. |
Capacitors with a carbon-based electrode layer in contact with a ferroelectric insulator. The insulator may be a perovskite oxide. Low reactivity of the carbon-based electrode may improve stability of a ferroelectric capacitor. A carbon-based electrode layer may be predominantly carbon and have a low electrical resistivity. A carbon-based electrode layer may be the only layer of an electrode, or it may be a barrier between the insulator and another electrode layer. Both electrodes of a capacitor may include a carbon-based electrode layer, or a carbon-based electrode layer may be included in only one electrode. |
1. An integrated circuit (IC), comprising: a capacitor, comprising: a first electrode layer over a substrate; an insulator layer in direct contact with the first electrode layer, wherein the insulator layer comprises oxygen and one or more of Sr, Ba, Hf, or Bi; and a second electrode layer in direct contact with the insulator layer, wherein at least one of the first electrode layer or the second electrode layer comprises predominantly carbon; and one or more levels of interconnect metallization electrically coupled to the capacitor through the first and second electrode layers.
2. The IC of claim 1, wherein at least one of the first electrode layer or the second electrode layer comprises substantially pure carbon.
3. The IC of any one of claims 1-2, wherein the first or second electrode layer comprising predominantly carbon has a resistivity <100 mΩ-cm.
4. The IC of any one of claims 1-3, wherein the first or second electrode layer comprising predominantly carbon has a thickness of 5-10 nm.
5. The IC of any one of claims 1-4, wherein both the first and second electrode layers comprise predominantly carbon.
6. The IC of any one of claims 1-4, wherein only the second electrode layer comprises predominantly carbon.
7. The IC of claim 6, wherein the first electrode layer comprises a metal.
8. The IC of claim 7, wherein the metal is at least one of Ti, Ru, or Ir.
9. The IC of any one of claims 1-8, wherein the first or second electrode layer comprising predominantly carbon is between the insulator layer and a third electrode layer comprising a metal.
10. The IC of claim 9, wherein the first or second electrode layer comprising carbon has a thickness of 1-3 nm.
11. The IC of claim 10, wherein both the first and second electrode layers comprise predominantly carbon.
12. The IC of any one of claims 1-11, wherein the insulator layer comprises Sr and Ti, or Ba and Ti, or Bi and Sr.
13. The IC of claim 12, wherein the insulator layer is BaxSr1-xTiO3 and wherein X is less than 0.95 and greater than 0.12.
14. The IC of any one of claims 1-13, wherein the insulator layer has a thickness between 5 nm and 50 nm.
15. A system comprising: a power supply; a processor coupled to the power supply; and a memory coupled to the processor, wherein the processor or the memory comprises: a transistor, wherein the transistor comprises: a gate stack over a channel region of a semiconductor material; a source coupled to a first end of the channel region; and a drain coupled to a second end of the channel region; a capacitor, comprising: a first electrode layer over a substrate; an insulator layer on the first electrode layer, wherein the insulator layer comprises oxygen and one or more of Sr, Ba, Bi, or Hf; and a second electrode layer on the insulator layer, wherein at least one of the first electrode layer or the second electrode layer comprises carbon; and one or more levels of interconnect metallization electrically coupling the capacitor to the transistor. |
BACKGROUND
Many ferroelectric insulators have high relative permittivity and high dielectric strength making them attractive for capacitors in integrated circuit (IC) devices. Such capacitors may be in the form of metal-insulator-metal (MIM) capacitors or metal-insulator-semiconductor (MIS) capacitors, which may be employed within a field effect transistor. Relative to a dielectric insulator that is conventionally found in MIM and MIS capacitors of IC devices, the integration of such ferroelectric insulators into IC device fabrication presents new materials challenges. For example, materials and processes developed for dielectric capacitors may not be suitable for ferroelectric capacitors. Techniques and architectures that improve the performance of FE capacitors are therefore commercially advantageous.
BRIEF DESCRIPTION OF THE DRAWINGS
The material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. In the figures: FIG. 1A, 1B, and 1C are cross-sectional views of capacitors, in accordance with some embodiments; FIG. 2A is a cross-sectional view of a capacitor, in accordance with some embodiments; FIG. 2B is an isometric view of a capacitor, in accordance with some other embodiments; FIG. 3 is a flow diagram illustrating methods for forming capacitors, in accordance with some embodiments; FIG. 4, 5, 6, and 7 are cross-sectional views of a capacitor evolving as the methods illustrated in FIG. 3 are practiced in accordance with some embodiments; FIG. 8 is a cross-sectional view of an IC including coupling capacitors, in accordance with some embodiments; FIG. 9 is an isometric sectional view of a ferroelectric field effect transistor (FeFET), in accordance with some embodiments; FIG. 10 illustrates a mobile computing platform and a data server machine employing an IC that includes a capacitor, in accordance with some embodiments; and FIG. 11 is a functional block diagram of an electronic computing device, in accordance with some embodiments.
DETAILED DESCRIPTION
Embodiments are described with reference to the enclosed figures. While specific configurations and arrangements are depicted and discussed in detail, it should be understood that this is done for illustrative purposes only. Persons skilled in the relevant art will recognize that other configurations and arrangements are possible without departing from the spirit and scope of the description. It will be apparent to those skilled in the relevant art that techniques and/or arrangements described herein may be employed in a variety of other systems and applications other than what is described in detail herein. Reference is made in the following detailed description to the accompanying drawings, which form a part hereof and illustrate exemplary embodiments. Further, it is to be understood that other embodiments may be utilized and structural and/or logical changes may be made without departing from the scope of claimed subject matter. It should also be noted that directions and references, for example, up, down, top, bottom, and so on, may be used merely to facilitate the description of features in the drawings.
Therefore, the following detailed description is not to be taken in a limiting sense and the scope of claimed subject matter is defined solely by the appended claims and their equivalents. In the following description, numerous details are set forth. However, it will be apparent to one skilled in the art that embodiments may be practiced without these specific details. In some instances, well-known methods and devices are shown in block diagram form, rather than in detail, to avoid obscuring the embodiments. Reference throughout this specification to "an embodiment" or "one embodiment" or "some embodiments" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase "in an embodiment" or "in one embodiment" or "some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive. As used in the description and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The terms "coupled" and "connected," along with their derivatives, may be used herein to describe functional or structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical, optical, or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause and effect relationship). The terms "over," "under," "between," and "on" as used herein refer to a relative position of one component or material with respect to other components or materials where such physical relationships are noteworthy. For example, in the context of materials, one material or layer over or under another may be directly in contact or may have one or more intervening materials or layers. Moreover, one material between two materials or layers may be directly in contact with the two materials/layers or may have one or more intervening materials/layers. In contrast, a first material or layer "on" a second material or layer is in direct physical contact with that second material/layer. Similar distinctions are to be made in the context of component assemblies. As used throughout this description, and in the claims, a list of items joined by the term "at least one of" or "one or more of" can mean any combination of the listed terms.
For example, the phrase "at least one of A, B or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C.A MIM capacitor can be utilized in a variety of applications such as high power microprocessor units, radio frequency circuits and in other analog integrated circuit devices. A decoupling capacitor, for example, provides a bypass path for transient currents in an IC. Transient currents can ordinarily damage active electronic devices such as transistors. A decoupling capacitor can also provide power to an integrated circuit and keep the power supply voltage stable. A decoupling capacitor does this by absorbing excess electrical energy (charge) flowing through the circuit. It is therefore desirable for a decoupling capacitor to have a large capacitance, such as a capacitance above 8 microfarads/cm2, to control the excess electrical energy and stabilize power supply voltages. A large capacitance can be obtained when an insulator in a MIM capacitor has a high relative permittivity, or dielectric constant (e.g., above 20). Typical dielectric constants of known dielectric materials such as oxides of hafnium, aluminum or zirconium, for example are in the range of 25-35.Perovskite oxides are one class of ferroelectric material with BaTiO3, SrBi2Ta2O9, SrTiO3, BaSrTiO3 being some examples of Perovskite oxides having dielectric constants that are substantially greater than many other oxides of other metals such as hafnium or zirconium. But to fully utilize a Perovskite in a capacitor, it is important for the Perovskite insulator to be stable, particularly at temperatures experienced during fabrication or operation of an IC device.One source of instability in a MIM or MIS capacitor is attributable to charge leakage. Charge leakage (or leakage current) can be a limiting factor for such capacitors because the charge leakage leads to energy loss. Poor crystallinity and/or chemical stoichiometry in a Perovskite insulator can increase charge leakage. For example, oxygen deficiencies within a perovskite insulator can alter the electric field characteristics within a capacitor and affect the dielectric strength of the insulator.Leakage current of a given MIM or MIS capacitor architecture may be dependent on the composition of the electrode(s) directly adjacent to the insulator and/or their interface(s) with the insulator. For example, chemical reactions between the electrode and the insulator can lead to oxygen deficiencies within at least a portion of the insulator, which can then alter the leakage current within the insulator as these oxygen deficiencies act as electrically activated dopants. Hence, it is desirable to reduce reactivity of a ferroelectric insulator with the electrode(s). Electrode materials including substantially pure titanium or even predominantly titanium (e.g., TiN), for example, may react with a perovskite insulator including oxygen, altering the stoichiometry of the ferroelectric metal oxide(s) by forming TiOx or TiNOx at the electrode-insulator interface. This is particularly acute at high temperatures (e.g., > 400 °C), which may occur during IC processing.The inventors have found that an electrode comprising carbon, and more specifically an electrode that is predominantly carbon, can mitigate stability problems associated with oxygen deficiencies within a ferroelectric insulator of a capacitor. 
Although not bound by theory, advantages of a carbon electrode may be attributed to carbon's high temperature tolerance, and more specifically high free energy of oxidation (e.g., -200 kJ/mol). Carbon will therefore undergo little oxidation, and this reduced reactivity may improve ferroelectric capacitor stability and/or improve one or more capacitor metrics, such as capacitance and/or charge retention. Carbon's high work function (e.g., ∼4.7 eV) may also be advantageous in capacitor electrode applications. FIG. 1A, 1B, and 1C are cross-sectional views of capacitors, in accordance with some embodiments. Referring first to FIG. 1A, capacitor 101 includes an insulator layer 108 between a lower electrode layer 104 and an upper electrode layer 110. Capacitor 101 is over an IC substrate 102. Substrate 102 may vary depending on the application of capacitor 101 within an IC, and may have a number of material layers of any composition and/or microstructure. For example, substrate 102 may further include part of a workpiece substrate (e.g., a large format semiconductor wafer) that is to become an IC chip, and may further include at least one active device layer (e.g., further including transistors). Substrate 102 may also include one or more interconnect metallization levels interconnecting transistors into an integrated circuit. Substrate 102 may also include a seed layer in contact with electrode layer 104. A seed layer may have any material composition that promotes desirable microstructure within one or more of insulator layer 108 or electrode layers 104, 110. Tantalum is one exemplary seed layer material, but other refractory metals, such as tungsten, may also be suitable. Insulator layer 108 is a ferroelectric material of any thickness T0. In some exemplary embodiments, insulator layer thickness T0 is between 5 nm and 50 nm. In advantageous embodiments, insulator layer 108 is a perovskite oxide. In some exemplary perovskite oxide embodiments, majority constituents of insulator layer 108 include oxygen, and two or more of lead, zirconium, barium, bismuth, strontium or titanium. Other constituents may be present, but only in trace impurity levels (e.g., <1e18 atoms/cm3). In a first embodiment, insulator layer 108 is substantially oxygen, strontium and titanium (SrTiOx). As one example, insulator layer 108 is a solid solution of SrTiO3. A ferroelectric SrTiO3 insulator layer 108 may advantageously comprise polycrystalline material having a particular texture. In a second embodiment, the insulator layer 108 is substantially oxygen, barium and titanium (BaTiOx). As one example, insulator layer 108 is a solid solution of BaTiO3. A ferroelectric BaTiO3 insulator layer 108 may also advantageously comprise polycrystalline material having a particular texture. In a third embodiment, insulator layer 108 is substantially oxygen, Sr, Ba and Ti (e.g., BaxSr1-xTiO3). In some examples, X is less than 0.95 and greater than 0.05. In a fourth embodiment, insulator layer 108 is substantially oxygen, Pb, Zr and Ti (e.g., Pb(Zr,Ti)O3). In another embodiment, the insulator layer 108 is substantially oxygen, bismuth, strontium and tantalum (SrBiTaOx). Insulator layer 108 may also be other ferroelectric metal oxides that are not perovskites. For example, some high-k metal oxide dielectric materials comprising predominantly oxygen and one or more metals can become ferroelectric if their crystal texture is of a particular phase.
Trace impurities may be added to promote such a ferroelectric crystal phase, with silicon being one example of a ferroelectric phase promoting/stabilizing dopant in HfOx insulator embodiments. These non-perovskite ferroelectrics can also display stability problems potentially attributable to oxygen deficiencies associated with electrode reactivity. Lower electrode layer 104 and upper electrode layer 110 are each in direct contact with insulator 108. In accordance with embodiments herein, at least one of electrode layers 104 and 110 is predominantly carbon (i.e., more than 50 at. % carbon). Such a carbonaceous electrode layer is advantageously substantially carbon (i.e., more than 95 at. % carbon), and may have a purity of as high as 99 at. %. In exemplary embodiments, the carbonaceous electrode layer is graphitic carbon. Graphitic carbon can be more specifically characterized as sp2 carbon, which is to be distinguished from sp3 carbon, also known as diamond-like carbon (DLC). While DLC is generally a dielectric with high electrical resistivity, the graphitic carbon in accordance with embodiments herein advantageously has an electrical resistivity less than 300 mΩ-cm, more advantageously less than 150 mΩ-cm, and most advantageously less than 100 mΩ-cm. The inventors have found that such a carbon electrode layer 104, and/or carbon electrode layer 110, can improve stability of insulator layer 108 while offering high electrode conductivity at reasonable film thicknesses. In some embodiments, both lower electrode layer 104 and upper electrode layer 110 are predominantly carbon, and are advantageously both substantially pure carbon. The introduction of oxygen vacancies within insulator layer 108 may be minimized with carbon-based electrode layers in contact with both interfaces of insulator layer 108. However, depending on the composition of insulator layer 108, it may be advantageous for a lower electrode layer to not only function as one capacitor terminal, but also to function as a crystalline template that improves the crystallinity of insulator layer 108. For example, where insulator layer 108 is ferroelectric HfOx, a carbon-based lower electrode is perfectly adequate, as templating is not critical to achieving ferroelectric properties in HfOx. However, in other embodiments where insulator layer 108 is a perovskite, certain metals may have crystallinity that is advantageous as a template for the formation of the perovskite insulator. For such embodiments, upper electrode layer 110 may be the only carbon-based electrode while lower electrode layer 104 is a non-carbon based material that offers the advantageous templating properties. Even where both electrodes are not carbon-based, the advantages described above for carbon-based electrodes may still be leveraged in single-sided embodiments because the interface area between insulator layer 108 and non-carbon electrode materials is reduced, for example by half for a basic parallel plate architecture. For some embodiments where only upper electrode 110 is carbon-based, lower electrode layer 104 comprises a metal, such as one or more of titanium, ruthenium or iridium. These metals may have FCC or hexagonal crystallinity with lattice parameters that are a good match with a perovskite insulator layer 108. Noting that a metal electrode layer 104 can react with oxygen from insulator layer 108, oxygen may be present within electrode layer 104, for example within grain boundaries of the metal.
As such, the oxygen content of a metallic electrode layer 104 may be significantly higher (e.g., by at least an order of magnitude) than within a carbon-based electrode layer 110. The thickness of a carbon-based electrode in accordance with embodiments herein may vary from a barrier/spacer layer of minimal thickness to the full thickness of the electrode. In the example illustrated in FIG. 1A, electrode layers 104 and 110 are the only electrode layers present. For the one or more of electrode layers 104, 110 that are carbon, the thickness T1 of the carbon is at least 5 nm (e.g., 5-10 nm), and may be more. FIG. 1B is a cross-sectional illustration of a capacitor 150, in accordance with some further embodiments. Reference numbers introduced above in the context of FIG. 1A are retained in FIG. 1B for features sharing the same attributes as those introduced in FIG. 1A. Capacitor 150 again includes insulator layer 108 between electrode layers 104 and 110. However, capacitor 150 includes a multilayer electrode stack 120 in which electrode layer 110 is between insulator layer 108 and an electrode layer 115. For such embodiments, electrode layer 110 is carbon-based while electrode layer 115 has a second chemical composition that may, for example, react with insulator layer 108 in the absence of intervening electrode layer 110. In some examples where electrode layer 110 is substantially pure carbon, electrode layer 115 is predominantly a metal, such as one or more of Ti, W, Ru, or Ir. Relative to capacitor 101, the inclusion of electrode layer 115 within multilayer electrode stack 120 may advantageously improve conductivity and/or improve structural resilience of capacitor 150. Within electrode stack 120, electrode layer 110 functions as barrier or spacer at the interface of insulator 108 so that electrode layer 115 does not as readily react with insulator 108. As a barrier, a carbon-based electrode layer 110 may have a lower thickness than for embodiments where the carbon-based material is the only layer of an electrode. For example, electrode layer 110 may be substantially pure carbon with a layer thickness T2 that is less than 5 nm (e.g., 1-3 nm). As for capacitor 101, both electrode layers 104 and 110 may comprise carbon, in which case capacitor 150 may include a thick (e.g., T1) carbon electrode layer 104 and a thin (e.g., T2) carbon electrode layer 110. FIG. 1C is a cross-sectional illustration of a capacitor 190, in accordance with some further embodiments where both electrode layer 104 and electrode layer 110 are substantially pure carbon. Reference numbers introduced above in the context of FIG. 1A and 1B are retained in FIG. 1C for features sharing the same attributes as those introduced in FIG. 1A and/or FIG. 1B. In capacitor 190, both electrode layers 104 and 110 are barrier layers within respective multi-layer electrode stacks 105 and 120. For such embodiments, each of electrode layer 103 and electrode layer 115 may have chemical compositions distinct from the carbon compositions of electrode layers 104, 110. In some examples, each of electrode layer 103 and electrode layer 115 is substantially metal, such as one or more of Ti, W, Ru, or Ir, for example. Within electrode stack 120, electrode layer 110 again functions as barrier, or spacer, at the interface of insulator 108 so that electrode layer 115 does not as readily react with insulator 108.
Within electrode stack 105, electrode layer 104 also functions as barrier, or spacer, at the interface of insulator 108 so that electrode layer 103 does not as readily react with insulator 108. As barriers, carbon-based electrode layers 104 and 110 may each have minimal thickness. For example, both of electrode layers 104 and 110 may be substantially pure carbon with layer thickness T2 that is less than 5 nm (e.g., 1-3 nm). The various material attributes of the layers in capacitors 101, 150 and 190 are also applicable to capacitors having more complex form factors. For example, FIG. 2A illustrates a cross-sectional view through a non-planar capacitor structure 201 including an electrode layer 104 that has been patterned into a fin or pillar. Insulator 108 is adjacent to, and in direct contact with, a sidewall 104A of electrode layer 104. Electrode layer 110 is similarly adjacent to, and in direct contact with, a sidewall 108A of insulator layer 108. In this configuration, electrode layers 104 and 110 may have any of the same attributes described above for capacitors 101, 150, 190. For example, one or more of electrode layers 104 or 110 may be substantially carbon, and any of the structural variations described above in the context of capacitors 101, 150, 190 may be similarly applied to capacitor structure 201. FIG. 2B illustrates an isometric view of cylindrical capacitor structure 202 typical of a trench or via capacitor often found in memory ICs. This structure is similar to that illustrated in FIG. 2A, where electrode layer 104 is clad by insulator layer 108, and insulator layer 108 is clad by electrode layer 110. One or more of electrode layers 104 or 110 may be substantially carbon and any of the structural variations described above in the context of capacitors 101, 150, 190 may be similarly applied to capacitor structure 202. FIG. 3 is a flow diagram illustrating methods 301 for forming a capacitor including a carbon electrode layer, in accordance with some embodiments. Methods 301 may be practiced, for example, to fabricate any of the capacitors 101, 150, 190 or 201-202. FIGS. 4-7 are cross-sectional views of a capacitor evolving as the methods 301 are practiced, in accordance with some specific embodiments. Methods 301 begin at input 305 where a workpiece including one or more material layers of a monolithic IC is received. In some embodiments, the workpiece is a large format (e.g., 300-450 mm) wafer and includes at least a capacitor electrode material layer on a working surface of the wafer. In the example illustrated in FIG. 4, a substrate 102 includes a conductive interconnect 400 over a substrate 401. In the example, conductive interconnect 400 is embedded within a dielectric 403. Dielectric 403 may be silicon dioxide, silicon nitride, silicon carbide, or a low-k dielectric such as carbon doped silicon oxide. Interconnect 400 includes a barrier layer 400A between dielectric 403 and fill metal 400B. Barrier layer 400A may include tantalum, tantalum nitride or ruthenium, for example. Fill metal 400B may be cobalt, copper, tungsten, or ruthenium, for example. As further illustrated in FIG. 4, substrate 401 includes a lower electrode material layer 404, which is to become the first layer of a thin film capacitor. Electrode material layer 404 may either be a metal or may be carbon-based.
In the illustrated example, dielectric layer 403 has a surface that is substantially co-planar with a surface of conductive interconnect 400 so that lower electrode material layer 404, when blanket deposited, is substantially planar. The blanket deposition may be performed with a physical vapor deposition (PVD) process, for example. Returning to FIG. 3, methods 301 continue at block 310 where a ferroelectric insulator material layer is blanket deposited in direct contact with the lower electrode material layer. Any deposition technique known to be suitable for the insulator may be practiced at block 310, but in some exemplary embodiments ferroelectric insulator material is deposited with a PVD process or plasma enhanced chemical vapor deposition (PECVD). The PVD process may be performed with the workpiece at an elevated temperature, for example at a temperature of 350-400 °C, or more, which may promote a particular crystal texture and/or dominant phase within the insulator material. At block 320, a graphitic carbon material layer is blanket deposited in direct contact with the insulator material layer. Although any deposition technique known to be suitable for such carbon films may be practiced at block 320, in some exemplary embodiments the carbon layer is deposited with a PVD process in which a target of pyrolytic graphite is sputtered. The carbon sputtering process may again be performed with the workpiece at an elevated temperature, for example at a temperature of 150-200 °C, or more. FIG. 5 illustrates an example where insulator material layer 508 has been blanket deposited directly on lower electrode material layer 404. Insulator material layer 508 may have any of the chemical compositions described above in the context of insulator layer 108 (FIG. 1A-1C). As further illustrated in FIG. 5, a carbon upper electrode material layer 510 has been blanket deposited directly on insulator material layer 508. Hence, at least one of the electrode material layers 510 and 404 is carbon-based, and both may be carbon-based for embodiments where electrode material layer 404 has been deposited in substantially the same manner as electrode material layer 510. Methods 301 (FIG. 3) continue at block 330 where the capacitor material layers are patterned with any subtractive process(es) suitable for various material layer compositions. Following capacitor patterning, any remaining interconnect levels of the IC may be completed at output 340. For example, the upper electrode of the capacitor may be connected to other circuit nodes with an upper-level metallization. In the example shown in FIG. 6, a mask 614 is formed on upper electrode material layer 510. Mask 614 defines a polygon area and position of the capacitor, for example relative to interconnect 400. Mask 614 may be formed with any lithographic process(es) as embodiments are not limited in this respect. FIG. 7 illustrates capacitor 101 following the patterning of the capacitor material layer stack. The capacitor material layer stack may be patterned with one or more plasma etch processes. The plasma etch process defines sidewalls into the various material layers 510, 508 and 404 to form electrode layer 110, insulator layer 108, and electrode layer 104, respectively. FIG. 7 further illustrates an example where an upper-level interconnect 720 has been fabricated in contact with the electrode layer 110. Interconnect 720 includes an adhesion layer 720A (e.g., tantalum, tantalum nitride or ruthenium) in contact with electrode layer 110.
A fill metal 720B (e.g., cobalt, tungsten, or copper) has been deposited on adhesion layer 720A. A dielectric material 716 encapsulates capacitor 101. FIG. 8 is a cross-sectional view of an IC structure 800 including a vertically interdigitated coupling capacitor 801, in accordance with some embodiments. Structure 800 illustrates a portion of a monolithic IC that comprises FEOL device circuitry 880 fabricated over and/or on a single crystalline substrate 80. In this example, FEOL device circuitry includes a plurality of MOSFETs 881 that employ a monocrystalline semiconductor material 871 for at least a channel region of each transistor. In other embodiments, FEOL device circuitry includes other types of transistors (e.g., bipolar junction transistor, etc.), or other active devices employing one or more semiconductor materials (e.g., diodes, lasers, etc.). FETs 881 include a gate terminal 870 separated from a semiconductor material 871 by a gate dielectric. The channel region of semiconductor material 871 separates semiconductor terminals (source semiconductor and drain semiconductor). Any materials known to be suitable for FETs may be present in FEOL FETs 881. FETs 881 may be planar or non-planar devices. In some advantageous embodiments, FETs 881 are finFETs, nanoribbon FETs, or nanowire FETs. FETs 881 may include one or more semiconductor materials. FEOL device circuitry may further include one or more levels of interconnect metallization 807 electrically insulated by dielectric materials 808. Interconnect metallization 807 may be any metal(s) suitable for FEOL and/or BEOL IC interconnection. Interconnect metallization 807 may be, for example, an alloy of predominantly copper, an alloy of predominantly tungsten, or ruthenium, etc. Dielectric materials 808 may be any dielectric materials known to be suitable for electrical isolation of monolithic ICs. In some embodiments, dielectric materials 808 comprise silicon, and at least one of oxygen and nitrogen (e.g., SiO, SiN, or SiON). FEOL device circuitry is electrically connected to electrodes of coupling capacitor 801. Coupling capacitor 801 includes an upper electrode layer 110A electrically coupled to one circuit node by interconnect 805. A lower electrode layer 104B is also electrically coupled by interconnect 805 to the same circuit node. These upper and lower electrode layers 110A, 104B are therefore operable as one terminal of coupling capacitor 801. Interconnect 806 is coupled to a second circuit node and in direct contact with both a lower electrode layer 104A and an upper electrode layer 110B. These upper and lower electrode layers 110B, 104A are therefore operable as one terminal of coupling capacitor 801. An insulator layer 108A is between the electrode layers 104A and 110A. Similarly, an insulator layer 108B is between the electrode layers 104B and 110B. Although only two (A and B) capacitor stack iterations are illustrated for coupling capacitor 801, any number of iterations are possible in such a vertically interdigitated capacitor as a means of increasing capacitance of a coupling capacitor. In exemplary embodiments, at least one of electrode layers 104A, 104B, 110A or 110B is predominantly carbon (i.e., a carbon electrode layer), for example substantially as described elsewhere herein. In some embodiments, both electrode layers 110A and 110B are predominantly carbon. In some further embodiments, at least one of electrode layers 104A, 104B, 110A or 110B is a metal.
For example, electrode layer 104A may be a metal with templating properties (e.g., Ru, Ir, Ti). For such embodiments, vertically interdigitated coupling capacitor 801 may further include one or more seed layers 832. Seed layer 832 may comprise tantalum, for example, and provides nucleation sites for metal crystals of electrode layer 104A (e.g., ruthenium or iridium). Although thickness may vary with implementation, in some examples, seed layer 832 may have a thickness between 1 nm and 10 nm. As noted above, in addition to MIM capacitors, an IC may also include MIS capacitors that include a ferroelectric insulator layer. The carbon-based electrodes described above are also applicable to such MIS capacitors. For example, a ferroelectric insulator may be integrated into a field effect transistor (FET), which is referred to as a ferroelectric FET (FeFET). An electrode layer of predominantly carbon in direct contact with a ferroelectric insulator of a FeFET may be advantageous for substantially the same reasons described above for MIM capacitors. FIG. 9 is an isometric sectional view of a transistor structure 901. As shown, transistor structure 901 includes a plurality of semiconductor fins 904. Each fin 904 has a longitudinal length in a first dimension (e.g., x) over a surface of the substrate, and a transverse width in a second dimension (e.g., y). Each fin 904 extends at a height (e.g., z-dimension) from a plane of a substrate material 905. In this example, fins 904 have been patterned into a partial thickness of substrate material 905. In some embodiments, fins 904 comprise silicon, and may be predominantly (i.e., more than 50%) silicon. In specific examples where substrate material 905 is substantially monocrystalline silicon, fins 904 are also substantially monocrystalline silicon. In other embodiments, fins 904 are a metal oxide or metal chalcogenide semiconductor material. In some examples the metal oxide semiconductor includes oxygen and at least one of Mg, Cu, Zn, Sn, Ti, In, or Ga, with one specific example being InGaO3(ZnO)5, often referred to as IGZO. Some exemplary semiconducting metal chalcogenide embodiments include InSx or InSex, WSx or WSex, MoSx or MoSex, GaSx or GaSex, ZnSx or ZnSex, IGZSx or IGZSex. A gate stack is over a channel region of fins 904. The gate stack includes electrode layer 110 in direct contact with insulator layer 108. Insulator layer 108 is again a ferroelectric and may have any of the properties described above for MIM embodiments. Similarly, electrode layer 110 is at least predominantly carbon, and may have any of the other attributes described above. As further illustrated in FIG. 9, source and drain regions 950 are on opposite sides of the gate stack, and electrically coupled to opposite ends of the channel region. Source and drain regions 950 may comprise any P-type or N-type semiconductor materials, such as, but not limited to, silicon doped with boron, phosphorus, or arsenic impurities. Capacitor structures and the methods of forming such structures described herein may be integrated into a wide variety of ICs and computing systems. FIG. 10 illustrates a system in which a mobile computing platform 1005 and/or a data server machine 1006 includes an IC 1001 including capacitors having at least one carbon-based electrode, for example in accordance with some embodiments described elsewhere herein.
The server machine 1006 may be any commercial server, for example including any number of high-performance computing platforms within a rack and networked together for electronic data processing, which in the exemplary embodiment includes a monolithic IC 1001. The mobile computing platform 1005 may be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission, or the like. For example, the mobile computing platform 1005 may be any of a tablet, a smart phone, a laptop computer, etc., and may include a display screen (e.g., a capacitive, inductive, resistive, or optical touchscreen), a chip-level integrated system 1010, and a battery 1015. Whether disposed within the integrated system 1010 illustrated in the expanded view 1050, or as a stand-alone packaged chip within the server machine 1006, IC 1001 may include memory circuitry (e.g., RAM) and/or logic circuitry (e.g., a microprocessor, a multi-core microprocessor, graphics processor, or the like). At least one of these circuitries includes one or more capacitors having at least one carbon-based electrode, for example in accordance with some embodiments described elsewhere herein. IC 1001 may be further coupled to a board or package substrate 1060 that further hosts one or more additional ICs, such as power management IC 1030 and radio frequency IC 1025. RFIC 1025 may have an output coupled to an antenna (not shown) to implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. FIG. 11 is a functional block diagram of an electronic computing device 1100, in accordance with some embodiments. Device 1100 includes a motherboard 1101 hosting a number of components, such as, but not limited to, a processor 1104 (e.g., an applications processor). Processor 1104 may be physically and/or electrically coupled to motherboard 1101. In some examples, processor 1104 is part of a monolithic IC structure including capacitors having at least one carbon-based electrode, for example in accordance with some embodiments described elsewhere herein. In general, the term "processor" or "microprocessor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be further stored in registers and/or memory. In various examples, one or more communication chips 1106 may also be physically and/or electrically coupled to the motherboard 1101. In further implementations, communication chips 1106 may be part of processor 1104. Depending on its applications, computing device 1100 may include other components that may or may not be physically and electrically coupled to motherboard 1101.
These other components include, but are not limited to, volatile memory (e.g., DRAM 1132), non-volatile memory (e.g., ROM 1135), flash memory (e.g., NAND or NOR), magnetic memory (MRAM 1130), a graphics processor 1122, a digital signal processor, a crypto processor, a chipset 1112, an antenna 1125, touchscreen display 1115, touchscreen controller 1165, battery 1116, audio codec, video codec, power amplifier 1121, global positioning system (GPS) device 1140, compass 1145, accelerometer, gyroscope, speaker 1120, camera 1141, and mass storage device (such as a hard disk drive, solid-state drive (SSD), compact disk (CD), digital versatile disk (DVD), and so forth), or the like. Communication chips 1106 may enable wireless communications for the transfer of data to and from the computing device 1100. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. Communication chips 1106 may implement any of a number of wireless standards or protocols, including, but not limited to, those described elsewhere herein. As discussed, computing device 1100 may include a plurality of communication chips 1106. For example, a first communication chip may be dedicated to shorter-range wireless communications, such as Wi-Fi and Bluetooth, and a second communication chip may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, and others. While certain features set forth herein have been described with reference to various implementations, the description is not intended to be construed in a limiting sense. Hence, various modifications of the implementations described herein, as well as other implementations, which are apparent to persons skilled in the art to which the present disclosure pertains are deemed to lie within the spirit and scope of the present disclosure. It will be recognized that this disclosure is not limited to the embodiments so described, but can be practiced with modification and alteration without departing from the scope of the appended claims. For example, the above embodiments may include specific combinations of features as further provided below. In first examples, an integrated circuit (IC) comprises a capacitor. The capacitor comprises a first electrode layer over a substrate, and an insulator layer in direct contact with the first electrode layer. The insulator layer comprises oxygen and one or more of Sr, Ba, Hf, or Bi. The capacitor comprises a second electrode layer in direct contact with the insulator layer. At least one of the first electrode layer or the second electrode layer comprises predominantly carbon.
The IC comprises one or more levels of interconnect metallization electrically coupled to the capacitor through the first and second electrode layers. In second examples, for any of the first examples, at least one of the first electrode layer or the second electrode layer comprises substantially pure carbon. In third examples, for any of the first through second examples, the first or second electrode layer comprising predominantly carbon has a resistivity <100 mΩ-cm. In fourth examples, for any of the first through third examples, the first or second electrode layer comprising predominantly carbon has a thickness of 5-10 nm. In fifth examples, for any of the first through fourth examples, both the first and second electrode layers comprise predominantly carbon. In sixth examples, for any of the first examples, only the second electrode layer comprises predominantly carbon. In seventh examples, for any of the sixth examples, the first electrode layer comprises a metal. In eighth examples, for any of the seventh examples, the metal is at least one of Ti, Ru, or Ir. In ninth examples, for any of the first through eighth examples, the first or second electrode layer comprising predominantly carbon is between the insulator layer and a third electrode layer comprising a metal. In tenth examples, for any of the ninth examples, the first or second electrode layer comprising carbon has a thickness of 1-3 nm. In eleventh examples, for any of the tenth examples, both the first and second electrode layers comprise predominantly carbon. In twelfth examples, for any of the first through eleventh examples, the insulator layer comprises Sr and Ti, or Ba and Ti, or Sr and Bi. In thirteenth examples, for any of the twelfth examples, the insulator layer is BaxSr1-xTiO3 and wherein X is less than 0.95 and greater than 0.05. In fourteenth examples, for any of the first through thirteenth examples, the insulator layer has a thickness between 5 nm and 50 nm. In fifteenth examples, a system comprises a power supply, a processor coupled to the power supply, and a memory coupled to the processor. The processor or the memory comprises a transistor. The transistor comprises a gate stack over a channel region of a semiconductor material, a source coupled to a first end of the channel region, and a drain coupled to a second end of the channel region. The processor or the memory further comprises a capacitor over the transistor. The capacitor comprises a first electrode layer over a substrate and an insulator layer on the first electrode layer. The insulator layer comprises oxygen and one or more of Sr, Bi, or Ba. The capacitor comprises a second electrode layer on the insulator layer. At least one of the first electrode layer or the second electrode layer comprises carbon. The processor or the memory further comprises one or more levels of interconnect metallization electrically coupling the capacitor to the transistor. In sixteenth examples, the system further comprises a battery coupled to the power supply. In seventeenth examples, a method of fabricating an integrated circuit (IC) comprises receiving a substrate comprising a first electrode layer and depositing an insulator layer on the first electrode layer. The insulator layer comprises oxygen and one or more of Sr, Ba, Hf, or Bi.
The method comprises depositing a second electrode layer on the insulator layer, wherein depositing the second electrode layer further comprises depositing a layer of carbon, and forming one or more levels of interconnect metallization electrically coupled to the first and second electrodes. In eighteenth examples, for any of the seventeenth examples, depositing the layer of carbon further comprises sputtering a target of pyrolytic graphite. In nineteenth examples, for any of the eighteenth examples, the sputtering further comprises heating the substrate to at least 200 °C. In twentieth examples, the first electrode layer also comprises predominantly carbon. However, the above embodiments are not limited in this regard and, in various implementations, the above embodiments may include the undertaking of only a subset of such features, undertaking a different order of such features, undertaking a different combination of such features, and/or undertaking additional features than those features explicitly listed. |
System for application priority based on device operating mode. A method is provided for allocating a top visible resource on a device. The method includes receiving a request requesting allocation of the top visible resource to a requesting application, and determining that the top visible resource is allocated to an owning application. The method also includes associating owner information with requester information to form an arbitration request. The method also includes arbitrating the arbitration request to produce an arbitration decision that indicates that the top visible resource is to be allocated to the requesting application if the owner information indicates that the owning application is privileged and an identifier that identifies the requesting application is contained in a relinquish list associated with the owner information. |
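The arbitration rule in this abstract can be expressed compactly in code. The following Python sketch is purely illustrative and not part of the specification; the names AppInfo, ArbitrationRequest, is_privileged, and relinquish_list are hypothetical, and the handling of a non-privileged owner is an assumption noted in the comments.

from dataclasses import dataclass, field

@dataclass
class AppInfo:
    app_id: str
    is_privileged: bool = False
    # IDs of applications this owner is willing to relinquish the resource to.
    relinquish_list: set = field(default_factory=set)

@dataclass
class ArbitrationRequest:
    owner: AppInfo      # owner information for the current resource holder
    requester: AppInfo  # requester information for the requesting application

def arbitrate(request: ArbitrationRequest) -> str:
    """Return the app_id to which the top visible resource should be allocated."""
    if request.owner.is_privileged:
        # Privileged owner: the requester wins only if the owner's relinquish
        # list names the requester's identifier.
        if request.requester.app_id in request.owner.relinquish_list:
            return request.requester.app_id
        return request.owner.app_id
    # The non-privileged case is not specified by the abstract; a real arbiter
    # would also weigh device state, operating mode, and user preferences.
    return request.requester.app_id

# Example: a privileged dialer relinquishes the display only to an alarm app.
owner = AppInfo("dialer", is_privileged=True, relinquish_list={"alarm"})
print(arbitrate(ArbitrationRequest(owner, AppInfo("alarm"))))    # -> alarm
print(arbitrate(ArbitrationRequest(owner, AppInfo("browser"))))  # -> dialer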
1. A method for operating an application priority system to allocate a top visible resource on a device, the method comprising: receiving a request to allocate the top visible resource to a requesting application; determining that the top visible resource is allocated to an owning application; associating owner information with requester information to form an arbitration request, wherein the owner information includes information about the owning application and the requester information includes information about the requesting application; arbitrating the arbitration request to generate an arbitration decision, the arbitration decision indicating that, if the owner information indicates that the owning application is privileged and an identifier identifying the requesting application is included in a relinquish list associated with the owner information, the top visible resource is to be allocated to the requesting application; and allocating the top visible resource based on the arbitration decision.
2. The method of claim 1, wherein said step of arbitrating comprises arbitrating said arbitration request to generate an arbitration decision, said arbitration decision indicating that, if said owner information indicates that said owning application is privileged and said identifier identifying the requesting application is not included in the relinquish list associated with the owner information, the top visible resource is to be allocated to the owning application.
3. The method of claim 1, wherein said step of arbitrating comprises arbitrating said arbitration request to generate an arbitration decision, said arbitration decision indicating that the top visible resource is to be allocated to one of the owning application and the requesting application based on at least one parameter included in said owner information.
4. The method of claim 1, wherein said step of arbitrating comprises arbitrating said arbitration request to generate an arbitration decision based on any information selected from a group of information items, said group of information items including said owner information, the requester information, device status information, device operating mode information, user preference information, and third-party preference information.
5. The method of claim 1, wherein said step of arbitrating is performed by a resource arbiter, and wherein said method comprises downloading said resource arbiter to said device.
6. The method of claim 1, wherein said device is a wireless device.
7. An apparatus for operating an application priority system to dynamically allocate a top visible resource on a device, the apparatus comprising: a resource manager comprising: logic for receiving a request to allocate the top visible resource to a requesting application; logic for determining that the top visible resource is allocated to an owning application; and logic for associating owner information with requester information to form an arbitration request, wherein the owner information includes information about the owning application and the requester information includes information about the requesting application; a resource arbiter operative to arbitrate the arbitration request to generate an arbitration decision, the arbitration decision indicating that, if the owner information indicates that the owning application is privileged and an identifier identifying the requesting application is included in a relinquish list associated with the owner information, the top visible resource is to be allocated to the requesting application; and logic for allocating the top visible resource based on the arbitration decision.
8. The apparatus of claim 7, wherein said resource arbiter is operative to arbitrate said arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to the owning application if the owner information indicates that the owning application is privileged and the identifier identifying the requesting application is not contained in the relinquish list associated with the owner information.

9. The apparatus of claim 7, wherein said resource arbiter is operative to arbitrate said arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to one of the owning application and the requesting application based on at least one parameter included in the owner information.

10. The apparatus of claim 7, wherein said resource arbiter is operative to arbitrate said arbitration request to produce an arbitration decision based on any information selected from a group of information items comprising the owner information, the requester information, device state information, device operating mode information, user preference information, and third-party preference information.

11. The apparatus of claim 7, further comprising logic for downloading said resource arbiter to said device.

12. The apparatus of claim 7, wherein said device is a wireless device.

13. An apparatus for operating an application priority system to allocate a top visible resource on a device, the apparatus comprising:
means for receiving a request to allocate the top visible resource to a requesting application;
means for determining that the top visible resource is allocated to an owning application;
means for associating owner information with requester information to form an arbitration request, wherein the owner information includes information about the owning application and the requester information includes information about the requesting application;
means for arbitrating the arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to the requesting application if the owner information indicates that the owning application is privileged and an identifier identifying the requesting application is contained in a relinquish list associated with the owner information; and
means for allocating the top visible resource based on the arbitration decision.

14. The apparatus of claim 13, wherein said means for arbitrating comprises means for arbitrating said arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to the owning application if the owner information indicates that the owning application is privileged and the identifier identifying the requesting application is not contained in the relinquish list associated with the owner information.

15. The apparatus of claim 13, wherein said means for arbitrating comprises means for arbitrating said arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to one of the owning application and the requesting application based on at least one parameter included in the owner information.
16. The apparatus of claim 13, wherein said means for arbitrating comprises means for arbitrating said arbitration request to produce an arbitration decision based on any information selected from a group of information items comprising the owner information, the requester information, device state information, device operating mode information, user preference information, and third-party preference information.

17. The apparatus of claim 13, wherein said means for arbitrating comprises a resource arbiter, and wherein said apparatus comprises means for downloading said resource arbiter to said apparatus.

18. The apparatus of claim 13, wherein said device is a wireless device.

19. A computer readable medium comprising instructions that, when executed by a processor in a device, provide an application priority system for allocating a top visible resource on the device, the computer readable medium comprising:
instructions for receiving a request to allocate the top visible resource to a requesting application;
instructions for determining that the top visible resource is allocated to an owning application;
instructions for associating owner information with requester information to form an arbitration request, wherein the owner information includes information about the owning application and the requester information includes information about the requesting application;
instructions for arbitrating the arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to the requesting application if the owner information indicates that the owning application is privileged and an identifier identifying the requesting application is contained in a relinquish list associated with the owner information; and
instructions for allocating the top visible resource based on the arbitration decision.

20. The computer readable medium of claim 19, wherein the instructions for arbitrating comprise instructions for arbitrating the arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to the owning application if the owner information indicates that the owning application is privileged and the identifier identifying the requesting application is not contained in the relinquish list associated with the owner information.

21. The computer readable medium of claim 19, wherein said instructions for arbitrating comprise instructions for arbitrating said arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to one of the owning application and the requesting application based on at least one parameter included in the owner information.

22. The computer readable medium of claim 19, wherein the instructions for arbitrating comprise instructions for arbitrating the arbitration request to produce an arbitration decision based on any information selected from a group of information items comprising the owner information, the requester information, device state information, device operating mode information, user preference information, and third-party preference information.

23. The computer readable medium of claim 19, wherein the instructions for arbitrating are executed by a resource arbiter, and wherein the computer readable medium comprises instructions for downloading the resource arbiter to the device.

24. The computer readable medium of claim 19, wherein said device is a wireless device. |
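The arbitration rule recited in claims 1 and 2 reduces to a small predicate. The following minimal C sketch uses hypothetical types and names, since the claims prescribe no concrete data structures; for a non-privileged owner it simply permits the request, which the description below notes is the simplest conforming policy.

#include <stdbool.h>
#include <stddef.h>

typedef unsigned int AppId;   /* hypothetical application identifier */

typedef struct {
    AppId        id;               /* identifier of the owning application  */
    bool         privileged;       /* privilege state of the owning app     */
    const AppId *relinquishList;   /* relinquish list supplied by the owner */
    size_t       relinquishCount;  /* number of entries in the list         */
} OwnerInfo;

/* Arbitration rule of claims 1 and 2: a privileged owner retains the top
 * visible resource unless the requester appears in its relinquish list. */
static AppId arbitrate(const OwnerInfo *owner, AppId requester)
{
    if (owner->privileged) {
        for (size_t i = 0; i < owner->relinquishCount; i++) {
            if (owner->relinquishList[i] == requester)
                return requester;   /* claim 1: allocate to the requester */
        }
        return owner->id;           /* claim 2: owner keeps the resource  */
    }
    /* Non-privileged owner: subject to further arbitration (claims 3-4);
     * the simplest policy just permits the request. */
    return requester;
}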
System for applying priority based on device operating mode

Technical Field

The present invention relates generally to the operation of devices and, more particularly, to a system for applying priority based on device operating modes.

Background

Technological advances have led to the development and use of a large number of data networks. These networks include public data networks (such as the Internet) and specialized networks (such as wireless telecommunications networks). Users of these networks have access to a wide variety of available information and services. For example, wireless device owners can now download a wide variety of applications for execution on their devices.

Resource allocation has become increasingly important as the number of downloadable applications has grown. Device resources include displays, keyboards, sound processors, modems, storage devices, communication channels, and other types of resources. Because each device has a limited amount of resources, the way in which those resources are allocated to competing applications determines how the device operates. For example, a wireless telephone may be in a voice call, in a data call, running an application, or receiving an SMS message, and the device needs to know how to allocate its resources to the various applications under these operating conditions.

A very important resource for most devices is the "top visible" resource (TVR) (sometimes referred to as "scoped"). The top visible resource typically includes the display of the device and the input device (i.e., keyboard) of the device, which allows an application to indicate to the user that it is the current focus of the device. For example, an application that is allocated the top visible resource can interact with the device user via the display and the keyboard.

Effective arbitration between competing applications executing on the device is required to determine which application should have access to the top visible resource. For example, the "user experience" provided by the device is determined by how applications are allocated the top visible resource.

In current systems, the top visible resource is allocated to applications according to static rules. For example, in a telephone handset device, application startup is not allowed while a phone call is in progress. In general, application access to the top visible resource is controlled by hard-coded allocation rules that do not take into account the dynamic environment in which modern devices operate.

Therefore, there is a need for a dynamic application priority system that operates to determine which application on a device is allocated the top visible resource based on the operating environment of the device. The system should also provide a mechanism that allows third parties (e.g., network operators) to have input regarding how the top visible resource on the device is allocated, such that the user experience provided by the device can be dynamically controlled.

Summary of the Invention

In one or more embodiments, an application priority system is provided that operates to dynamically allocate the top visible resource of a device. In one embodiment, a method of operating an application priority system to allocate a top visible resource on a device is provided. The method includes receiving a request to allocate the top visible resource to a requesting application, and determining that the top visible resource is allocated to an owning application.
The method also includes associating owner information with requester information to form an arbitration request, wherein the owner information includes information about the owning application and the requester information includes information about the requesting application. The method also includes arbitrating the arbitration request to produce an arbitration decision indicating that the top visible resource is to be allocated to the requesting application if the owner information indicates that the owning application is privileged and an identifier identifying the requesting application is contained in a relinquish list associated with the owner information. The method also includes allocating the top visible resource based on the arbitration decision.

In one embodiment, an apparatus is provided that operates an application priority system to dynamically allocate a top visible resource on a device. The apparatus includes a resource manager operative to allocate the top visible resource to an owning application and to receive a request from a requesting application to allocate the top visible resource. The apparatus also includes arbitration logic that operates to allocate the top visible resource to the owning application when the owning application is privileged and the requesting application is not identified in the relinquish list. The arbitration logic is also operative to arbitrate the allocation of the top visible resource between the owning application and the requesting application when the owning application is privileged and the requesting application is identified in the relinquish list. The arbitration logic is also operative to arbitrate the allocation of the top visible resource between the owning application and the requesting application when the owning application is not privileged.

In one embodiment, an apparatus is provided that operates an application priority system to dynamically allocate a top visible resource on a device. The apparatus includes means for allocating the top visible resource to an owning application, and means for receiving a request to allocate the top visible resource from a requesting application. The apparatus also includes means for allocating the top visible resource to the owning application when the owning application is privileged and the requesting application is not identified in the relinquish list. The apparatus also includes means for arbitrating the allocation of the top visible resource between the owning application and the requesting application when the owning application is privileged and the requesting application is identified in the relinquish list. The apparatus also includes means for arbitrating the allocation of the top visible resource between the owning application and the requesting application when the owning application is not privileged.

In one embodiment, a computer readable medium is provided that includes instructions that, when executed by a processor in a device, provide an application priority system to dynamically allocate a top visible resource on the device. The computer readable medium includes instructions for allocating the top visible resource to an owning application, and instructions for receiving a request to allocate the top visible resource from a requesting application. The computer readable medium also includes instructions for allocating the top visible resource to the owning application when the owning application is privileged and the requesting application is not identified in the relinquish list.
The computer readable medium further includes instructions for arbitrating the allocation of the top visible resource between the owning application and the requesting application when the owning application is privileged and the requesting application is identified in the relinquish list. The computer readable medium also includes instructions for arbitrating the allocation of the top visible resource between the owning application and the requesting application when the owning application is not privileged.

Other aspects, advantages, and features of the invention will become apparent from the description and the appended claims.

DRAWINGS

The above aspects and attendant advantages of the embodiments described herein will be more readily understood by reference to the following detailed description.

FIG. 1 shows an embodiment of a dynamic application priority system that operates to allocate a top visible resource on a device;

FIG. 2 shows a functional diagram of one embodiment of an application priority system for allocating a top visible resource in a device (e.g., the device shown in FIG. 1);

FIG. 3 shows an embodiment of an apparatus including an embodiment of an application priority system;

FIG. 4 shows an embodiment of a method of providing an embodiment of an application priority system for use in a device;

FIG. 5 shows an embodiment of a method of operating a resource arbiter to provide an embodiment of an application priority system;

FIG. 6 shows one embodiment of a resource control structure suitable for use with one or more embodiments of an application priority system; and

FIG. 7 shows an example of how the top visible resource in a device can be allocated between two applications in accordance with one or more embodiments of an application priority system.

Detailed Description

The following detailed description describes one or more embodiments of an application priority system that operates to dynamically allocate a top visible resource in a device. In one embodiment, an application requests allocation of the top visible resource by transmitting an allocation request to a resource manager. The allocation request is combined with one or more parameters describing the state of the top visible resource and its current owner to form an arbitration request. An arbiter processes the arbitration request in accordance with arbitration rules to produce an arbitration decision indicating how the resource will be allocated. The arbitration decision is then used to allocate the top visible resource. The system is suitable for use with any type of wired or wireless device, including but not limited to a desktop computer, notebook computer, wireless telephone, pager, PDA, email device, tablet computer, or any other type of wired or wireless device.

In one or more embodiments, the application priority system interacts with a runtime environment executing on the device that simplifies operation of the device, for example, by providing generalized calls for device-specific resources. One such runtime environment is the Binary Runtime Environment for Wireless(TM) (BREW(R)) software platform developed by QUALCOMM, Inc. of San Diego, California. In the following description, it will be assumed that one embodiment of the application priority system is implemented on a wireless device that is executing a runtime environment such as the BREW software platform.
However, one or more embodiments of the application priority system are suitable for use with other types of runtime environments to dynamically allocate the top visible resource on wired and wireless devices. Moreover, the term "top visible resource" is used herein to describe any type of hardware and/or software resource on a device that is used to represent the device's current focus, including but not limited to a combination of a device display and a keyboard.

FIG. 1 shows one embodiment of a dynamic application priority system 100 that operates to allocate a top visible resource on a device. System 100 includes a wireless terminal 102 that communicates with a data network 104 via a wireless communication channel 106. Data network 104 includes any type of data network, which may include, but is not limited to, a wired, wireless, private, or public data network, or any combination thereof.

System 100 also includes a server 108 coupled to network 104 via a communication channel 110 to provide services to devices in communication with network 104. For example, wireless terminal 102 can be a wireless telephone, and server 108 can be part of a nationwide telecommunications network that provides telecommunications services to the telephone. Communication channel 110 can be any type of wired or wireless communication channel.

Device 102 includes applications 116 that are executed on the device to provide features and functionality desired by the device user. For example, an application 116 can be downloaded from server 108 to device 102, as shown by path 120. During execution, an application 116 attempts to gain access to the top visible resource 118 of device 102, which in one embodiment includes the device display and keyboard.

In one embodiment, applications 116 each have one or more group IDs that indicate the rights and privileges of the application. For example, a group ID identifies whether the application is privileged (P) or not privileged (nP) with respect to the top visible resource. In one embodiment, both privileged and non-privileged applications can acquire the top visible resource; however, a privileged application is allowed to specify the selected applications or groups to which control of the top visible resource may be relinquished.

In one embodiment, device 102 includes an arbiter 112 and arbitration rules 114. For example, the arbiter 112 and arbitration rules 114 can be downloaded from server 108 to device 102, as shown by path 120. The arbiter 112 operates to arbitrate requests for the top visible resource 118 from competing applications executing on device 102. The arbiter 112 decides an arbitration request based on information about the requesting application and the application that currently owns the resource. In one embodiment, the privilege states of the owning and requesting applications and the arbitration rules 114 are used to produce an arbitration decision indicating how the top visible resource 118 is to be allocated to the competing applications. Thus, by associating particular privilege states with selected applications and downloading a specific set of arbitration rules 114 from server 108 to device 102, server 108 is able to control how the top visible resource 118 on the device is allocated to applications 116. This allows server 108 to control the user experience provided by device 102.

In one or more embodiments, server 108 and terminal 102 can be any type of device, and their associated connections to data network 104 can be wireless, wired, or any combination thereof.
Thus, embodiments of the dynamic application priority system operate to allow a server to control how the top visible resource on a device is allocated using virtually any network configuration with multiple servers and terminals.

FIG. 2 shows a functional diagram of one embodiment of an application priority system 200 for allocating a top visible resource in a device (e.g., device 102 shown in FIG. 1). System 200 includes a top visible resource manager 202, a top visible resource state 204, a resource arbiter 206, and arbitration rules 208. Also shown are the top visible resource 210 of the device and the applications (1-4) executing on the device, shown generally at 212.

The top visible resource manager 202 includes hardware logic, software logic, or any combination thereof, and operates to manage the top visible resource 210. The top visible resource 210 can include any type of device resource (e.g., display and keyboard) that is used to indicate to the user which application has the device's focus. For example, the application that has the device's focus is the application that is currently interacting with the user.

The top visible resource state 204 includes hardware, software, or any combination thereof. In one embodiment, the top visible resource state 204 includes information about the top visible resource 210 and/or information about the current resource owner (owner information). For example, the application that is currently allocated the top visible resource 210 is considered the resource owner, current owner, or owning application. In one embodiment, the top visible resource state 204 includes information about the current owner, including the current owner's identifier (ID), one or more group IDs, privilege state, the reason for acquiring the resource, a relinquish list, and/or any other information about the current owner or the top visible resource 210. In one embodiment, the relinquish list identifies the applications or groups (i.e., privilege classes) to which the current owner is willing to release the top visible resource 210. In one embodiment, this list is controlling during the arbitration process performed by resource arbiter 206. In another embodiment, the relinquish list is considered only a recommendation to the resource arbiter 206 as to how the arbitration should be decided. The top visible resource manager 202 operates to save, update, change, add, delete, or otherwise process the information included in the resource state 204.

Resource arbiter 206 includes hardware, software, or any combination thereof, and operates to arbitrate access to the top visible resource 210 using the arbitration rules 208. For example, in one embodiment, resource arbiter 206 can be a program module, and arbitration rules 208 can be parameters stored in memory that are retrieved by resource arbiter 206 and processed to dynamically allocate the top visible resource 210. In one or more embodiments, top visible resource manager 202 and resource arbiter 206 can be implemented as downloadable extensions of the runtime environment executing on the device; for example, as downloadable BREW extensions.

During operation of one embodiment of the dynamic application priority system 200, an application submits an allocation request to the top visible resource manager 202 to gain access to the top visible resource 210. If the top visible resource 210 is available, the top visible resource manager 202 allocates the top visible resource 210 to the requesting application.
If the top visible resource 210 is currently owned by another application, the top visible resource manager 202 responds by assembling an arbitration request that includes information about the requesting application (requester information) and information about the current owner of the resource (owner information). The information about the requesting application is derived from the allocation request, and the information about the current owner of the resource is derived from the top visible resource state 204. The arbitration request is sent to the resource arbiter 206, as shown at 214. The resource arbiter 206 uses the arbitration rules 208 to process the arbitration request and produce an arbitration decision, and sends the arbitration decision back to the top visible resource manager 202, as shown at 216. The top visible resource manager 202 then operates to allocate the top visible resource 210 to the appropriate application in accordance with the arbitration decision.

In one embodiment, the applications 212 have group IDs that determine whether a particular application is privileged with respect to the top visible resource 210. For example, a group ID is associated with a set of rights that apply to all applications having that group ID. One of the rights defines the privilege state of the application relative to the top visible resource 210. When an application first gains access to the top visible resource 210, the application (via the allocation request) provides information to the top visible resource manager 202, including its application ID, one or more group IDs, and the reason for which access to the top visible resource 210 is requested. In one embodiment, the group ID is not passed because it can be inferred from the application ID. In one embodiment, the reason for requesting access to the top visible resource 210 is selected from one of several enumerated types. However, the reasons are not limited to an enumerated list, but may also be related to a group ID or other custom reasons.

If it is determined from the application's group ID that the application is privileged, then the application can restrict which applications may take the top visible resource 210 from it. For example, the application may specify a relinquish list that identifies the applications to which the top visible resource 210 may be relinquished. For example, an application in the relinquish list may be identified by one or more of its group IDs.

When another application requests access to the top visible resource 210, the top visible resource manager 202 generates an arbitration request containing information about the current resource owner (owner information) and information about the requesting application (requester information). As part of the arbitration request, the privilege states of the owner and the requester of the top visible resource 210 are passed to the resource arbiter 206, along with the respective reasons for which the top visible resource 210 is sought and any relinquish list. The information passed to the resource arbiter 206 may also include other parameters or criteria; for example, it may include user preferences, the current device operating mode, operator preferences, or any other type of information that may be used to arbitrate the request. A hypothetical sketch of this information bundle follows.
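The field names dwReason, nClsCount and pClsList in the sketch below follow the exemplary ConfirmAcquire code given later in this document; everything else, including the typedefs themselves, is an assumption for illustration and not the actual BREW definition.

typedef unsigned long uint32;     /* assumed 32-bit unsigned type        */
typedef uint32 AEECLSID;          /* assumed BREW-style class identifier */

typedef struct {
    AEECLSID  cls;        /* class identifier of the application            */
    uint32    dwReason;   /* reason the TVR is sought or was acquired       */
    int       nClsCount;  /* -1: relinquish to all; 0: to none; else length */
    AEECLSID *pClsList;   /* relinquish list of class/group IDs or reasons  */
} ResCtlInfoSketch;

typedef struct {
    ResCtlInfoSketch owner;      /* owner information, from the resource state */
    ResCtlInfoSketch requester;  /* requester information, from the request    */
    /* further criteria mentioned above could ride along here: user
       preferences, current device operating mode, operator preferences, ... */
} ArbitrationRequestSketch;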
Resource arbiter 206 then uses this information to determine how the top visible resource 210 will be allocated.

In one embodiment, the current owner of the top visible resource 210 can dynamically change its application priority relative to the top visible resource 210. For example, after initially acquiring the top visible resource 210 with access by other applications restricted, the application may change its relinquish list and thereby allow other applications to gain access to the top visible resource 210. Thus, the system provides flexibility by allowing an application that owns the top visible resource 210 to release the top visible resource 210 or to make the top visible resource 210 more accessible to other applications.

In another embodiment, the system provides a callback mechanism that allows an application to register a callback function. The callback function allows the system to notify the application when there is a change in the state of the top visible resource 210. For example, the callback function can be used to notify the application when the top visible resource 210 is idle or when the top visible resource 210 becomes busy because it has been allocated to another application. Accordingly, system 200 operates to provide a dynamic application priority system that allocates access to the top visible resource 210 of the device to a particular application.

To further accommodate changing operating environments, the resource arbiter 206 and arbitration rules 208 can be downloaded from a network entity to the device, thus allowing a third party to have input regarding how the top visible resource 210 on the device is allocated. For example, in one embodiment, the device is a wireless telephone, and the resource arbiter 206 and arbitration rules 208 are downloaded to the device from a network server that is part of a nationwide telecommunications carrier network. In this way, the telecommunications carrier is provided with input on how the top visible resource 210 is allocated between applications on the device, and the telecommunications carrier is thus able to control the user experience provided by the device.

FIG. 3 shows one embodiment of an apparatus 300 that includes an embodiment of an application priority system. Apparatus 300 includes processing logic 302, memory 304, display logic 306, keyboard logic 308, and I/O interface 310, all of which are coupled to internal data bus 312. For the sake of clarity, it will be assumed that the top visible resource 332 on device 300 includes display logic 306 and keyboard logic 308. It should be noted that one or more embodiments of the application priority system are suitable for use with other devices having different configurations, and that it is possible to use different functional elements to define the top visible resource 332.

Processing logic 302 includes a CPU, a processor, a gate array, discrete logic, or other hardware or software, or any combination thereof. Accordingly, processing logic 302 typically includes logic for executing machine-readable instructions to perform the functions described herein. For example, instructions can be loaded into device 300 from a computer readable medium, such as a floppy disk, CDROM, flash memory, or other computer readable medium that interfaces with device 300 via interface 310. In another embodiment, instructions may be downloaded to device 300 from a network resource, such as a network server or any other type of network resource, via the interface 310.
The instructions, when executed by processing logic 302, provide one or more embodiments of an application priority system as described herein.

Memory 304 includes any type of RAM, ROM, hard disk, floppy disk, flash memory, or any other type of memory device. Display logic 306 includes logic for controlling a display device, such as a CRT, LCD, or any other type of display device. Keyboard logic 308 includes logic for controlling a user input device to receive user input, such as a keyboard, pen, joystick, or any other type of user input device. I/O interface 310 includes hardware and/or software, or any combination thereof, to allow device 300 to interface with an external device or system. For example, I/O interface 310 includes logic for interfacing to an external storage system, such as a disk drive or other memory device. Interface 310 also includes logic for interfacing to an external system, such as a local computer system. Additionally, the interface also includes logic for interfacing with a data network to allow communication with remote computers and servers.

During operation of the device, program instructions executed by processing logic 302 activate a runtime environment 314. For example, the runtime environment can be a BREW runtime environment. The program instructions executed by processing logic 302 also activate a top visible resource manager 318. The top visible resource manager 318 operates to control access to the top visible resource 332, thereby allowing an application to control the display resource 306 and the keyboard resource 308. Thus, the top visible resource manager 318 operates to control access to the top visible resource 332 (display 306 and keyboard 308), allowing an application to interact with the device user.

The top visible resource manager 318 receives requests to access the top visible resource 332 from applications (320, 322, 324, 326) running on device 300. Applications (320, 322, 324, 326) may be any type of application suitable for execution on device 300. For example, an application can be a multimedia application, a calendar application, an email application, a voice processing application, or any other type of application that provides useful features and/or functionality when executed on device 300. To facilitate the allocation of the top visible resource 332, the top visible resource manager 318 maintains the top visible resource state 328 in memory 304. Device state 334 identifies the current operating mode of the device; for example, the device operating mode may be idle, running an application, receiving a message, processing a voice call, playing a game, or any other type of device operating mode.

When the applications (320, 322, 324, 326) execute on device 300, they submit requests to the top visible resource manager 318 to access the top visible resource 332. In the event that the top visible resource 332 is not currently allocated, the top visible resource 332 can simply be allocated to the requesting application. However, if the top visible resource 332 is currently allocated to an application, then any request from another application to access the top visible resource 332 needs to be arbitrated to determine which application the top visible resource 332 will be allocated to.

In one or more embodiments, the application priority system operates to arbitrate the allocation of the top visible resource 332 among the applications executing on the device.
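One of the inputs available to this arbitration is device state 334, described above. The enumeration below is a hypothetical sketch of how those operating modes might be encoded; the patent defines no concrete type.

/* Hypothetical encoding of the operating modes reported by device state 334. */
typedef enum {
    DEV_STATE_IDLE,              /* device is idle                  */
    DEV_STATE_RUNNING_APP,       /* an application is running       */
    DEV_STATE_RECEIVING_MESSAGE, /* a message is being received     */
    DEV_STATE_VOICE_CALL,        /* a voice call is being processed */
    DEV_STATE_PLAYING_GAME       /* a game is being played          */
} DeviceState;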
For example, if the top visible resource 332 is currently assigned to an application, the top visible resource manager 318 submits an arbitration request to the resource arbiter 316. The request contains information about the requesting application (requester information) and information about the current owner of the top visible resource 332 (owner information). For example, the information about the current owner of the top visible resource 332 is held in the top visible resource state 328.

In one embodiment, resource arbiter 316 processes the arbitration request based on arbitration rules 330 stored in memory 304. For example, in one embodiment, the arbitration rules 330 are downloaded from a web server to device 300, so that the web server can provide input on how resource requests in device 300 are arbitrated. The arbitration request is processed by resource arbiter 316 to produce an arbitration decision, which is returned to the top visible resource manager 318. The top visible resource manager 318 then allocates the resource based on the arbitration decision.

It should be noted that the description of the application priority system shown in apparatus 300 is merely illustrative of one embodiment, and that other configurations are possible to provide the functionality described herein. For example, the functional elements of device 300 may be combined, reconfigured, altered, added, or deleted within the scope of the described embodiments.

Resource arbiter

In one or more embodiments, the resource arbiter 316 operates as a central decision maker to determine whether the top visible resource 332 can be handed over to the requesting application (or object). In one embodiment, resource arbiter 316 is installed on the device during manufacture. In another embodiment, the resource arbiter 316 can be customized by the web server and constructed as a downloadable module that can be updated or replaced as needed. For example, in an embodiment where the device is a wireless telephone, the resource arbiter 316 can be customized and downloaded to the phone from a web server operated by the OEM/operator. Preferably, a single resource arbiter 316 is used to arbitrate requests for the top visible resource 332 on the device.

In one embodiment, the top visible resource manager 318 passes a variety of information to the resource arbiter 316, and this information is used to produce an arbitration decision. In one embodiment, the information passed to the resource arbiter 316 includes information about the requesting application (requester information) and information about the current owner of the top visible resource 332 (owner information). However, in other embodiments, additional types of information are passed to the resource arbiter 316, including device state information 334, user preference information, third-party preference information, and any other type of information suitable for use by the resource arbiter in producing an arbitration decision.

Additionally, in one embodiment, the resource arbiter 316 is extensible, such that the arbitration process can be modified to use different items of information during different time periods or operating conditions. A brief outline of the requester and owner information that may be passed to the resource arbiter 316 to produce an arbitration decision is given below; however, the information that may be passed to the resource arbiter is not limited to the list shown.
A. Resource owner information
1. Owner class identifier (CLSID) and instance pointer
2. Reason for acquiring the TVR
3. Relinquish control information
a. Relinquish identifier list
b. List count (-1 = all, 0 = none, otherwise the count)

B. Requester information
1. Requester class identifier (CLSID) and instance pointer
2. Reason for acquiring the TVR
3. Relinquish control information
a. Relinquish identifier list
b. List count (-1 = all, 0 = none, otherwise the count)

FIG. 4 shows one embodiment of a method 400 of providing one embodiment of an application priority system for use in a device. For the sake of clarity, the operation of method 400 will be described with reference to apparatus 300 shown in FIG. 3. For example, method 400 demonstrates how, in one embodiment, the top visible resource 332 (display 306 and keyboard 308) is dynamically allocated to one of the applications 320, 322, 324, and 326.

At block 402, a first application sends a resource allocation request related to the top visible resource 332 to the top visible resource manager 318. For example, application 320 sends a resource allocation request to the top visible resource manager 318 requesting allocation of the top visible resource 332. The allocation request contains information about application 320; for example, the allocation request includes requester information as described above.

At block 404, the top visible resource manager 318 allocates the top visible resource 332 to the first application. For example, the top visible resource manager 318 allocates the top visible resource to application 320. In addition, the top visible resource manager 318 updates the owner information described above using the requester information provided in the allocation request. The resource owner information is then stored in the top visible resource state 328.

At block 406, a second application sends a resource allocation request related to the top visible resource 332 to the top visible resource manager 318. For example, application 322 sends a resource allocation request to the top visible resource manager 318 requesting allocation of the top visible resource 332. The allocation request contains information about application 322; for example, the allocation request includes requester information as described above.

At block 408, the top visible resource manager 318 sends an arbitration request to the resource arbiter 316. The arbitration request contains the resource owner information from the top visible resource state 328 and the requester information from the allocation request. Thus, the arbitration request provides the resource arbiter 316 with information regarding the current owner and the current requester of the top visible resource 332.

At block 410, the resource arbiter 316 produces an arbitration decision indicating which application should be allocated the top visible resource 332. For example, resource arbiter 316 produces an arbitration decision and transmits the decision to the top visible resource manager 318. The resource arbiter 316 produces the arbitration decision based on the arbitration rules 330 stored in memory 304. In one embodiment, the resource arbiter 316 and the arbitration rules 330 are downloaded from a third party (e.g., an OEM/operator), which allows for updates and also provides a mechanism for third parties to determine how the top visible resource 332 on the device is allocated. A sketch of this manager-arbiter hand-off follows.
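The following C sketch condenses blocks 402-410: a free resource is granted outright; otherwise the stored owner information and the incoming requester information are combined into an arbitration request and submitted for a decision. It reuses the hypothetical ResCtlInfoSketch and ArbitrationRequestSketch types sketched earlier; arbitrateRequest stands in for resource arbiter 316 and is likewise hypothetical.

typedef struct {
    int              owned;   /* nonzero when the TVR is currently allocated */
    ResCtlInfoSketch owner;   /* owner information, per resource state 328   */
} TvrStateSketch;

/* Provided elsewhere; stands in for resource arbiter 316 (block 410). */
extern ResCtlInfoSketch arbitrateRequest(const ArbitrationRequestSketch *req);

static AEECLSID handleAllocationRequest(TvrStateSketch *state,
                                        ResCtlInfoSketch requester)
{
    if (!state->owned) {                  /* blocks 402/404: TVR is free       */
        state->owner = requester;         /* requester info becomes owner info */
        state->owned = 1;
        return state->owner.cls;
    }
    ArbitrationRequestSketch req = { state->owner, requester };  /* block 408 */
    state->owner = arbitrateRequest(&req);  /* block 410: apply the decision  */
    return state->owner.cls;
}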
A more detailed description of how the resource arbiter 316 produces arbitration decisions is provided in another part of this document.

At block 412, the top visible resource manager 318 allocates the top visible resource 332 based on the arbitration decision. For example, the top visible resource manager 318 allocates the top visible resource 332 to the first application 320 or the second application 322 based on the arbitration decision. The top visible resource manager 318 also updates the top visible resource state 328 with any new resource owner information.

Thus, method 400 operates to provide one embodiment of a dynamic application priority system for use in a device. It should be noted that method 400 is merely illustrative of one embodiment, and that it is possible to reconfigure, change, combine, add, or delete method steps within the scope of the described embodiments. For example, it is possible for an application to register a callback function with the top visible resource manager 318 so that the state and/or availability of the top visible resource 332 can be provided to the application as needed. The application priority system can therefore provide additional auxiliary functions, and these auxiliary functions are within the scope of the described embodiments.

FIG. 5 shows one embodiment of a method 500 of operating a resource arbiter to provide an embodiment of an application priority system. For the sake of clarity, the operation of method 500 will be described with reference to apparatus 300 shown in FIG. 3. Thus, in one embodiment, method 500 is implemented by the resource arbiter 316 shown in FIG. 3.

At block 502, an arbitration request is received at resource arbiter 316. For example, the top visible resource manager 318 submits an arbitration request to the resource arbiter 316. The arbitration request includes information about the current owner of the top visible resource 332 (owner information) and information about the application requesting access to the top visible resource 332 (requester information).

At block 504, a test is performed on the relinquish list provided by the current owner of the top visible resource 332 to determine to which applications the current owner is willing to relinquish the top visible resource 332. The relinquish list is part of the current owner information provided in the arbitration request. If the relinquish list specifies that any application may obtain the top visible resource 332, then the method proceeds to block 510. If the relinquish list specifies that no application, or only particular applications, may gain control of the top visible resource 332, then the method proceeds to block 506.

At block 506, a test is performed to determine whether the requesting application is one of the applications identified in the relinquish list. For example, the relinquish list specifies group IDs or application IDs that can be used to identify the selected applications. If an identifier of the requesting application is specified in the relinquish list, then the method proceeds to block 510. If no identifier of the requesting application is specified in the relinquish list, then the method proceeds to block 508.

At block 508, an arbitration decision is made to maintain the current owner of the top visible resource 332. Because the current owner is privileged and the requesting application is not on the relinquish list, the requesting application is denied its request to allocate the top visible resource 332.
The method then proceeds to block 512, where the arbitration decision is returned to the top visible resource manager 318.

At block 510, the arbitration request from the requesting application is arbitrated based on selected criteria. For example, in one embodiment, the request is arbitrated based on the arbitration rules 330. In fact, any criteria can be used to determine which application will be allocated the top visible resource 332. For example, arbitration may be based on the reason each application wants to acquire the top visible resource 332, the operating mode of the device, user preferences, operator (third-party) preferences, or any other criteria.

Implementation example

An example implementation of one embodiment of an application priority system that operates to allocate the top visible resource in a device is described below. In one embodiment, the system includes a top visible resource manager that provides a means of controlling resource access for applications (objects), including BREW applications. The top visible resource manager also coordinates and manages the acquisition and release of the top visible resource by objects, and also operates to notify registered objects when the state of the top visible resource changes.

In one embodiment, the OEM or operator constructs a set of arbitration rules that are used to determine whether the current application can be paused or placed in background mode in order to launch a new application, given the current state of the device. For example, if the device is a wireless telephone that is in a voice call, the OEM can define an arbitration rule to prevent another application from gaining access to the top visible resource and thereby interrupting the call.

FIG. 6 illustrates one embodiment of a resource control structure 600 suitable for use with one or more embodiments of an application priority system. For each top visible resource 602 being managed, there is a resource interface 604 that controls the object, an IResourceCtl interface 606 that controls access, and a top visible resource manager 608. In addition, a resource arbiter 610 is provided to arbitrate access to the top visible resource 602.

When an instance of the resource interface 604 is created, it contains an IResourceCtl instance 612. The IResourceCtl instance 612 interacts with the top visible resource manager 608 to acquire and release the underlying top visible resource 602. It should be noted that even if one application has control over the top visible resource 602, another application can gain control of the same top visible resource 602 at any time, based on the arbitration rules in effect.

FIG. 7 shows a diagram 700 illustrating an allocation example that describes how the top visible resource in a device is allocated between two applications in accordance with one or more embodiments of the dynamic arbitration system. For example, diagram 700 shows interactions between various device entities including resource arbiter 702, top visible resource manager 704, application A 706, resource instance A 708, application B 710, and resource instance B 712.

At the beginning of the allocation example, application A 706 issues to resource instance A 708 a resource request 714 that seeks access to the top visible device resource managed by top visible resource manager 704. The resource request is forwarded from resource instance A 708 to top visible resource manager 704, as shown at 716.
It will be assumed that, at this point in time, the top visible resource is not allocated, so the top visible resource manager 704 allocates the top visible resource to application A 706 and issues a "success" indicator that flows back to application A 706, as shown at 718 and 720. At this point, application A 706 has acquired the top visible resource. In addition, application A 706 registers a callback function with resource instance A 708 to receive information regarding any state changes of the top visible resource, as shown at 722.

Application B 710 then issues a resource request 724 to resource instance B 712 seeking the top visible resource managed by top visible resource manager 704. The resource request is forwarded from resource instance B 712 to top visible resource manager 704, as shown at 726. The request from application B 710 causes top visible resource manager 704 to request arbitration from resource arbiter 702, as shown at 728. Resource arbiter 702 processes the arbitration request in accordance with the embodiments described herein. The resource arbiter 702 provides an arbitration result indicating that the top visible resource is successfully allocated to application B 710, as shown at 730, 732, and 734. Therefore, at this point, application B 710 has acquired the top visible resource. Because application A 706 registered for state change notification (at 722), application A 706 is alerted via callback function 736 that the state of the top visible resource has changed. In response to the callback, application A 706 issues a "get status" command 738, which returns a notification that the top visible resource has been allocated to another application and is now busy.

Custom resource arbiter

The resource arbiter is the central decision maker that determines whether the top visible resource can be handed over to a requesting object. The resource arbiter module can be customized by the OEM/operator and can be constructed as a module that is downloadable using its class identifier (CLSID). In one embodiment, there is a single resource arbiter (IResArbiter) implementation for the top visible resource. In one embodiment, the resource arbiter method IResArbiter_ConfirmAcquire is passed the resource owner's information and the requester information described above to produce an arbitration decision.

If the current owner has specified a relinquish CLSID list and the requester is identified in the list of application IDs or group IDs specified in the relinquish list, or if the owner allows any ID (as in the case of an owner without privileges), then the resource arbiter can decide to transfer ownership based on the rest of the information provided (the simplest implementation permits the request). If the requester is not identified in the relinquish CLSID list, the resource arbiter rejects the request.
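Concretely, the owner steers this decision entirely through its relinquish list and list count. The following fragment shows three hypothetical owner configurations, using the AEEResCtlInfo fields exercised by the exemplary implementation that follows; AEECLSID_MEDIAPLAYER is an invented class identifier.

/* Hypothetical owner configurations driving the switch statement below. */
static void configureOwners(AEEResCtlInfo *openOwner,
                            AEEResCtlInfo *guardedOwner,
                            AEEResCtlInfo *choosyOwner)
{
    static AEECLSID allowList[] = { AEECLSID_MEDIAPLAYER };  /* invented CLSID */

    openOwner->nClsCount = -1;        /* relinquish to anyone                */
    guardedOwner->nClsCount = 0;      /* relinquish to no one                */
    choosyOwner->pClsList  = allowList;
    choosyOwner->nClsCount = 1;       /* relinquish only to the listed class */
}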
The following is an exemplary implementation of the ConfirmAcquire method suitable for a resource arbiter used in a device that executes a BREW runtime environment.

int OEMResArbiter_ConfirmAcquire(IResArbiter *po, AEECLSID clsReq,
                                 AEEResCtlInfo *pOwner, AEEResCtlInfo *pRequester)
{
    CResArbiter *pMe = (CResArbiter *)po;
    int status = EITEMBUSY;
    int i;

    //
    // First look at the class list to see if the owner will allow this
    //
    switch (pOwner->nClsCount) {
    case -1:   // allow anyone to access the resource
        status = SUCCESS;
        break;
    case 0:    // allow no one to access the resource
        status = EITEMBUSY;
        break;
    default:   // examine the access (relinquish) list
        for (i = 0; i < pOwner->nClsCount; i++) {
            uint32 privId = pOwner->pClsList[i];
            if (privId < QVERSION) {
                // is the reason acceptable?
                if (privId == pRequester->dwReason) {
                    status = SUCCESS;
                    break;
                }
            } else {
                // does the requester class id match, or does it have group privileges?
                if (ISHELL_CheckPrivLevel(pMe->m_pIShell, privId, TRUE)) {
                    status = SUCCESS;
                    break;
                }
            }
        }
        break;
    }

    // At this point, the OEM can choose to accept the access-list result
    // and/or add additional decision algorithms, such as checking the
    // current access reason or permitting specific requester CLSIDs
    // regardless of the owner's access list, etc.
    // If the current application responds to EVT_BUSY,
    // then BREW sets dwReason to RESCTL_REASON_BUSY.
    if (pOwner->dwReason == RESCTL_REASON_BUSY && clsReq == AEECLSID_TOPVISIBLECTL)
        status = EITEMBUSY;

    return status;
}

Thus, although one or more embodiments of an application priority system for use in a device have been illustrated and described herein, it will be appreciated that various changes may be made to the described embodiments without departing from their spirit or essential characteristics. Accordingly, the disclosure and the description of the invention are intended to be illustrative and not restrictive. |
By providing a conductive capping layer (106) for metal-based interconnect lines, enhanced performance with respect to electromigration may be achieved. Moreover, a corresponding manufacturing technique is provided in which via openings (110) may be reliably etched into the capping layer (106) without exposing the underlying metal (105b), such as copper-based material, thereby also providing enhanced electromigration performance, especially at the transitions between copper lines and vias. |
CLAIMS

WHAT IS CLAIMED:

1. A method, comprising:
forming a first opening (110) in a dielectric layer (108) formed above a metal region, said metal region comprising a metal-containing portion (105b) and a conductive capping layer (106), said capping layer (106) covering said metal-containing portion (105b) so as to form at least one interface with said dielectric layer (108);
etching through said first opening (110) into said capping layer (106) while maintaining said metal-containing portion (105b) covered by said conductive capping layer (106); and
filling said first opening (110) at least with a barrier material (114) and a metal-containing material.

2. The method of claim 1, wherein said metal comprises copper.

3. The method of claim 1, further comprising forming said metal region by:
forming a second opening in a dielectric layer (102);
forming a conductive barrier layer (104) at a bottom and sidewalls of said second opening;
filling said second opening with a metal to form said metal-containing portion (105b); and
forming said capping layer (106) on said metal-containing portion (105b).

4. The method of claim 3, wherein filling said second opening comprises recessing said metal to form said metal-containing portion (105b).

5. The method of claim 4, wherein recessing said metal comprises depositing said metal in excess to overfill said second opening and removing excess material by at least one of chemical mechanical polishing and an electrochemical removal process.

6. The method of claim 1, wherein forming said capping layer (106) comprises depositing said capping layer by an electrochemical deposition process.

7. The method of claim 6, wherein forming said capping layer comprises forming a catalyst material at least on said metal-containing portion (105b) for initiating said electrochemical deposition process.

8. The method of claim 7, further comprising removing excess material of said capping layer (106) by at least one of chemical mechanical polishing and an electrochemical removal process.

9. A semiconductor device, comprising:
a metal region formed in a first dielectric layer (102);
a dielectric layer (108) formed above said first dielectric layer (102) and said metal region;
a conductive capping layer (106) formed on said metal region (105b) and forming an interface with said dielectric layer (108); and
a via (110) formed in said dielectric layer (108) and filled with a conductive material, said via terminating in said conductive capping layer.

10. The semiconductor device of claim 9, wherein said conductive capping layer (106) is comprised of at least one compound of the following compounds: cobalt, tungsten and phosphorous (CoWP); cobalt, tungsten and boron (CoWB); nickel, molybdenum and boron (NiMoB); and nickel, molybdenum and phosphorous (NiMoP). |
TECHNIQUE FOR FORMING A COPPER-BASED METALLIZATION LAYER INCLUDING A CONDUCTIVE CAPPING LAYER

BACKGROUND OF THE INVENTION

1. TECHNICAL FIELD

Generally, the present invention relates to the formation of microstructures, such as advanced integrated circuits, and, more particularly, to the formation of conductive structures, such as copper-based metallization layers, and techniques to reduce their electromigration during operating and stress conditions.

2. BACKGROUND ART

In the fabrication of modern microstructures, such as integrated circuits, there is a continuous drive to steadily reduce the feature sizes of microstructure elements, thereby enhancing the functionality of these structures. For instance, in modern integrated circuits, minimum feature sizes, such as the channel length of field effect transistors, have reached the deep sub-micron range, thereby increasing performance of these circuits in terms of speed and/or power consumption. As the size of individual circuit elements is reduced with every new circuit generation, thereby improving, for example, the switching speed of the transistor elements, the available floor space for interconnect lines electrically connecting the individual circuit elements is also decreased. Consequently, the dimensions of these interconnect lines are also reduced to compensate for a reduced amount of available floor space and for an increased number of circuit elements provided per unit die area, as typically the number of interconnections required increases more rapidly than the number of circuit elements. Thus, a plurality of stacked "wiring" layers, also referred to as metallization layers, are usually provided, wherein individual metal lines of one metallization layer are connected to individual metal lines of an overlying or underlying metallization layer by so-called vias. Despite the provision of a plurality of metallization layers, reduced dimensions of the interconnect lines are necessary to comply with the enormous complexity of, for instance, modern CPUs, memory chips, ASICs (application specific ICs) and the like. The reduced cross-sectional area of the interconnect structures, possibly in combination with an increase of the static power consumption of extremely scaled transistor elements, may result in considerable current densities in the metal lines.

Advanced integrated circuits, including transistor elements having a critical dimension of 0.13 µm and even less, may, therefore, require significantly increased current densities of up to several kA per cm² in the individual interconnect structures, despite the provision of a relatively large number of metallization layers, owing to the significant number of circuit elements per unit area. Operating the interconnect structures at elevated current densities, however, may entail a plurality of problems related to stress-induced line degradation, which may finally lead to a premature failure of the integrated circuit. One prominent phenomenon in this respect is the current-induced material transport in metal lines and vias, also referred to as "electromigration," which may lead to the formation of voids within and hillocks next to the metal interconnect, thereby resulting in reduced performance and reliability or complete failure of the device.
For instance, aluminum lines embedded into silicon dioxide and/or silicon nitride are frequently used as metal for metallization layers, wherein, as explained above, advanced integrated circuits having critical dimensions of 0.18 µm or less may require significantly reduced cross-sectional areas of the metal lines and, thus, increased current densities, which may render aluminum less attractive for the formation of metallization layers. Consequently, aluminum is being replaced by copper and copper alloys, a material with significantly lower resistivity and improved resistance to electromigration even at considerably higher current densities compared to aluminum. The introduction of copper into the fabrication of microstructures and integrated circuits comes along with a plurality of severe problems residing in copper's characteristic to readily diffuse in silicon dioxide and a plurality of low-k dielectric materials. To provide the necessary adhesion and to avoid the undesired diffusion of copper atoms into sensitive device regions, it is, therefore, usually necessary to provide a barrier layer between the copper and the dielectric material in which the copper-based interconnect structures are embedded. Although silicon nitride is a dielectric material that effectively prevents the diffusion of copper atoms, selecting silicon nitride as an interlayer dielectric material is less than desirable, since silicon nitride exhibits a moderately high permittivity, thereby increasing the parasitic capacitances of neighboring copper lines, which may result in non-tolerable signal propagation delays. Hence, a thin conductive barrier layer that also imparts the required mechanical stability to the copper is formed to separate the bulk copper from the surrounding dielectric material, and only a thin silicon nitride, silicon carbide or silicon carbon nitride layer in the form of a capping layer is frequently used in copper-based metallization layers. Currently, tantalum, titanium, tungsten and their compounds with nitrogen and silicon and the like are preferred candidates for a conductive barrier layer, wherein the barrier layer may comprise two or more sub-layers of different composition so as to meet the requirements in terms of diffusion suppressing and adhesion properties. Another characteristic of copper significantly distinguishing it from aluminum is the fact that copper may not be readily deposited in larger amounts by chemical and physical vapor deposition techniques, in addition to the fact that copper may not be efficiently patterned by anisotropic dry etch processes, thereby requiring a process strategy that is commonly referred to as the damascene or inlaid technique. In the damascene process, first a dielectric layer is formed which is then patterned to include trenches and/or vias which are subsequently filled with copper, wherein, as previously noted, prior to filling in the copper, a conductive barrier layer is formed on sidewalls of the trenches and vias. The deposition of the bulk copper material into the trenches and vias is usually accomplished by wet chemical deposition processes, such as electroplating and electroless plating, thereby requiring the reliable filling of vias with an aspect ratio of 5 and more with a diameter of 0.3 µm or even less in combination with trenches having a width ranging from 0.1 µm to several µm. Electrochemical deposition processes for copper are well established in the field of electronic circuit board fabrication. 
However, the void-free filling of high aspect ratio vias is an extremely complex and challenging task, wherein the characteristics of the finally obtained copper-based interconnect structure significantly depend on process parameters, materials and geometry of the structure of interest. Since the geometry of interconnect structures is substantially determined by the design requirements and may not, therefore, be significantly altered for a given microstructure, it is of great importance to estimate and control the impact of materials, such as conductive and non-conductive barrier layers, of the copper microstructure and their mutual interaction on the characteristics of the interconnect structure to ensure both high yield and the required product reliability. In particular, it is important to identify, monitor and reduce degradation and failure mechanisms in interconnect structures for various configurations to maintain device reliability for every new device generation or technology node. Accordingly, a great deal of effort has been invested in investigating the degradation of copper interconnects, especially in combination with low-k dielectric materials having a relative permittivity of 3.1 or even less, in order to find new materials and process strategies for forming copper-based lines and vias with a low overall permittivity. Although the exact mechanism of electromigration in copper lines is still not quite fully understood, it turns out that voids positioned in and on sidewalls and especially at interfaces to neighboring materials may have a significant impact on the finally achieved performance and reliability of the interconnects. One failure mechanism, which is believed to significantly contribute to a premature device failure, is the electromigration-induced material transport, particularly along an interface formed between the copper and a dielectric capping layer acting as an etch stop layer during the formation of vias in the interlayer dielectric. Frequently used materials are, for example, silicon nitride and silicon carbon nitride, which exhibit a moderately high etch selectivity to typically employed interlayer dielectrics, such as a plurality of low-k dielectric materials, and also suppress the diffusion of copper onto the interlayer dielectric. Recent research results seem to indicate, however, that the interface formed between the copper and the etch stop layer is a major diffusion path for material transport during operation of the metal interconnect. In view of the above-described problems, there exists a need for a technique that allows reduction of electromigration in copper-based interconnect structures without unduly increasing production costs and affecting the electrical conductivity of the metal interconnect.
DISCLOSURE OF INVENTION
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an exhaustive overview of the invention. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. 
Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later. Generally, the present invention is directed to a technique that enables the formation of metal regions and metal lines, in particular embodiments copper-based metal lines, in metallization layers, which may, in some embodiments, include low-k dielectric materials, wherein the confinement of the metal line in the dielectric material is enhanced by providing a conductive capping layer, such as a layer comprising cobalt, tungsten and phosphorous (CoWP), a layer comprising cobalt, tungsten and boron (CoWB), a layer comprising nickel, molybdenum and boron (NiMoB) or a layer comprising nickel, molybdenum and phosphorous (NiMoP), at some interface portions between the dielectric material and the metal. In the following, a conductive capping layer may be understood as a layer including at least one metal as a major component. For example, the materials as specified above may represent suitable materials for forming a conductive capping layer. Moreover, any contacts to the metal line or metal region may be formed such that they terminate within the conductive capping layer, thereby reducing the risk of metal exposure, in particular copper exposure, during the manufacturing process for forming metallization layers in highly advanced semiconductor devices. Consequently, an enhancement with respect to stress-induced material transport phenomena in the metallization layer may be achieved due to the superior characteristics of the conductive capping layer. According to one illustrative embodiment of the present invention, a method comprises forming a first opening in a dielectric layer stack formed above a metal region, which comprises a metal-containing portion and a conductive capping layer, wherein the conductive capping layer covers the metal-containing portion to form at least one interface with the dielectric layer stack. Moreover, the method comprises etching through the first opening into the conductive capping layer while maintaining the metal-containing portion covered. Finally, the method comprises filling the first opening at least with a barrier material and a copper-containing metal. According to another illustrative embodiment of the present invention, a semiconductor device comprises a metal-containing region formed in a first dielectric layer and a dielectric layer stack formed above the first dielectric layer and the metal-containing region. The semiconductor device further comprises a conductive capping layer formed on the metal-containing region so as to form an interface with the dielectric layer stack. 
Furthermore, the semiconductor device comprises a via formed in the dielectric layer stack and filled with a conductive material comprising a metal, wherein the via terminates in the conductive capping layer.
BRIEF DESCRIPTION OF DRAWINGS
The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which: Figures Ia-Ii schematically show cross-sectional views of a semiconductor device during various manufacturing stages for forming a copper-based metal region having enhanced electromigration performance in accordance with illustrative embodiments of the present invention; and Figure 2 schematically shows a cross-sectional view of a semiconductor device during the formation of a via terminating in a conductive capping layer in accordance with further illustrative embodiments of the present invention. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
MODE(S) FOR CARRYING OUT THE INVENTION
Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. The present invention will now be described with reference to the attached figures. Various structures, systems and devices are schematically depicted in the drawings for purposes of explanation only and so as to not obscure the present invention with details that are well known to those skilled in the art. Nevertheless, the attached drawings are included to describe and explain illustrative examples of the present invention. The words and phrases used herein should be understood and interpreted to have a meaning consistent with the understanding of those words and phrases by those skilled in the relevant art. No special definition of a term or phrase, i.e., a definition that is different from the ordinary and customary meaning as understood by those skilled in the art, is intended to be implied by consistent usage of the term or phrase herein. 
To the extent that a term or phrase is intended to have a special meaning, i.e., a meaning other than that understood by skilled artisans, such a special definition will be expressly set forth in the specification in a definitional manner that directly and unequivocally provides the special definition for the term or phrase. The present invention is based on the concept that in metal lines and regions, and particularly in copper-based metal lines and regions, an enhanced performance with respect to electromigration or other stress-induced metal migration phenomena may be achieved by providing a "reinforced" interface between the metal material and the dielectric compared to conventional interfaces formed by dielectric materials, such as silicon nitride, silicon carbide, nitrogen enriched silicon carbide, and the like. For example, certain materials may result in an interface to the adjacent copper, which may significantly increase the resistance against electromigration effects, thereby extending the operational margin of devices and/or enhancing the reliability of the corresponding metallization layers. According to the present invention, a conductive capping layer that may be comprised of one or more of the materials specified above may be provided such that especially failure-prone locations in metallization layers, for instance, the transition areas between vias and metal lines, may be significantly reinforced in that the via may not extend through the conductive capping layer but reliably terminates therein, thereby ensuring a strong interface with the underlying metal, in particular embodiments the copper or copper alloy, which may not even be exposed during the entire fabrication process of the via. For this purpose, appropriately designed etch regimes may be used that allow enhanced etch control during the formation of respective via openings, wherein, in some embodiments, an etch step for opening an etch stop layer provided in the dielectric layer stack accommodating the via opening is designed so as to remove a major portion of the etch stop layer in a highly controlled fashion. Consequently, the conductive capping layer may be provided with a moderately low thickness, while nevertheless ensuring the desired superior characteristics with respect to electromigration. With reference to Figures Ia-Ii and 2, further illustrative embodiments of the present invention will now be described in more detail. Figure Ia schematically illustrates a cross-sectional view of a semiconductor device 100 during a moderately advanced manufacturing stage. The semiconductor device 100 comprises a substrate 101, which may represent any substrate that is appropriate for the formation of circuit elements thereon. For instance, the substrate 101 may be a bulk semiconductor substrate, an insulating substrate having formed thereon a semiconductor layer, such as a crystalline silicon region, a silicon/germanium region, or any other III-V semiconductor compound, or II-VI compound, and the like. Typically, the substrate 101 may represent a carrier having formed thereon a large number of circuit elements, such as transistors, capacitors and the like, as are required for advanced integrated circuits. These circuit elements may be electrically connected in accordance with a specific circuit design by means of one or more metallization layers, wherein, for convenience, the formation of a single metallization layer including a single metal line or metal region will be described herein. 
It may, however, be readily appreciated that the concept of enhancing the electromigration or stress-induced material migration behavior by using a conductive capping layer comprised of one or more of the above-identified materials may be applied to any complex device configuration including a plurality of metallization layers and a plurality of interconnect lines and vias. In illustrative embodiments, the metal regions or lines may be copper-based metal lines and regions, which may, in particular embodiments, be formed in a low-k dielectric material. Moreover, although the present invention is particularly advantageous for extremely scaled semiconductor devices, since here, as previously discussed, moderately high current densities are usually encountered during the operation of the device, the present invention may also be readily applicable and advantageous for moderately scaled devices, due to a significantly enhanced reliability and lifetime that may be obtained by further reducing stress-induced metal migration phenomena, such as electromigration. The semiconductor device 100 may comprise a dielectric layer 102, which may represent the dielectric material of a metallization layer, or any other interlayer dielectric material and the like. In highly advanced semiconductor devices, the dielectric layer 102 may comprise a low-k dielectric material so as to reduce the parasitic capacitance between neighboring metal lines. In this respect, a low-k dielectric material is to be understood as a dielectric having a relative permittivity that is less than approximately 3.0 and hence exhibits a significantly smaller permittivity than, for instance, well-established "conventional" dielectrics, such as silicon dioxide, silicon nitride and the like. A trench 103 is formed in the dielectric layer 102 and may be filled with a conductive material comprising a barrier layer 104 and a metal 105, which in particular embodiments may be a copper-containing metal, which may be provided in excess so as to reliably fill the trench 103. A typical process flow for forming the semiconductor device 100 as shown in Figure Ia may comprise the following processes. After any well-established process techniques for forming any circuit elements and microstructural elements in and on the substrate 101, the dielectric layer 102 may be formed, which may comprise two or more sub-layers, depending on device requirements. For example, the dielectric layer 102 may be formed on the basis of well-established plasma enhanced chemical vapor deposition (PECVD) techniques, when comprising silicon dioxide, silicon nitride and the like. However, other deposition techniques may be used, such as spin-on techniques for any low-k polymer materials and the like. Thereafter, an appropriately designed photolithography process may be performed to provide an appropriate resist mask (not shown), which may be used to pattern the trench 103 on the basis of well-established anisotropic etch techniques. Next, the barrier layer 104 may be formed by any appropriate deposition technique, such as sputter deposition, chemical vapor deposition, atomic layer deposition and the like. 
For instance, the barrier layer 104 may be comprised of conductive materials, such as tantalum, tantalum nitride, titanium, titanium nitride, tungsten, tungsten nitride, or any other appropriate material, wherein, in some embodiments, two or more different material compositions and layers may be provided, as is required for achieving the desired adhesion and diffusion blocking characteristics. In one illustrative embodiment, the barrier layer 104 is comprised of one or more of CoWP, CoWB, NiMoB and NiMoP, at least as an uppermost layer, if the barrier layer 104 is provided in the form of a layer stack. For example, the barrier layer 104 may be deposited on the basis of an electrochemical deposition process so as to form a conductive capping layer, wherein an appropriate catalyst material may be deposited prior to the actual formation of the barrier layer 104. For instance, palladium may act as a catalyst material for initiating the deposition of the conductive capping layer in an electroless plating process, wherein, after an initial deposition of the material, such as CoWP, the subsequent deposition process is autocatalyzed by the previously deposited material. In other embodiments, a first barrier layer may be deposited, which may comprise an appropriate catalyst material, such as palladium, for instance by sputter deposition and the like, and subsequently an electrochemical deposition of the conductive capping layer may follow. After the deposition of the barrier layer 104, in some embodiments, a copper seed layer may be deposited by any appropriate deposition technique, such as sputter deposition, electroless deposition and the like, if a copper-based material is to be filled in on the basis of well-established electroplating techniques. In other embodiments, the provision of a seed layer may not be required. Corresponding recipes for forming a seed layer are well-established in the art. Thereafter, the metal material 105, for example in the form of a copper-containing metal, may be deposited on the basis of well-established techniques, such as electroplating, electroless plating and the like, wherein typically a certain amount of excess material is provided to ensure a reliable filling of the trench 103. Figure Ib schematically shows the semiconductor device 100 in a further advanced manufacturing stage. In the embodiment shown, the excess material of the metal layer 105 and the barrier layer 104 is removed to provide a substantially planarized surface topology, which is indicated as 105A. The removal of excess material of the layer 105 and the barrier layer 104 may be accomplished by chemical mechanical polishing (CMP) and/or electrochemical polishing on the basis of well-established recipes. For example, the layer 105 as shown in Figure Ia may be treated by CMP so as to provide a substantially planarized surface topology 105A, and subsequently an electrochemical etch process may be performed for removing the residual excess material and to form a recess in the trench 103. In other embodiments, the chemical mechanical polishing process resulting in the planarized surface topology 105A may be continued and may be performed with a specific over-polish time so as to form a desired recess in the trench 103. For this purpose, process parameters and the CMP tool configuration may be selected such that a corresponding "dishing" effect is achieved. 
For example, the down force and/or the relative speed between polishing pad and substrate, and/or the configuration of the slurry and polishing pad may be appropriately selected to result in a substantially uniform recessing of the trench 103. Figure Ic schematically shows the semiconductor device 100 after the completion of the above-described process sequence. Hence, the device 100 comprises the trench 103 filled with a metal portion, which is now indicated as 105B, and also comprises a recess 105R. Moreover, depending on the process strategy, the barrier layer 104 may still be in place with a more or less reduced thickness on horizontal portions, depending on the preceding processes for forming the recess 105R. In other embodiments, in the previous removal process, the barrier layer 104 may be removed from horizontal portions by CMP or any other removal techniques, such as selective etching and the like. In one illustrative embodiment (not shown), the barrier layer 104 may be substantially maintained and may comprise a catalyst material, such as palladium, to enable a subsequent electrochemical deposition of a conductive material, such as CoWP, CoWB, NiMoP or NiMoB. In other embodiments, as previously explained, the barrier layer 104 may be comprised, at least partially, of one or more of CoWP, CoWB, NiMoP and NiMoB, and hence an autocatalytic deposition of this material may be obtained. In this case, a layer of these materials may also be grown within the recess 105R, since a lateral growth of the material may also occur. In still other embodiments, a corresponding catalyst material may be deposited prior to the subsequent electrochemical deposition of the conductive capping material, wherein, in some embodiments, the catalyst material may be provided in a highly selective manner, for instance by selectively depositing the catalyst material on the metal-based material 105 in an electroless plating process. In this case, the conductive capping material may be substantially deposited within the recess 105R only. In still other embodiments, an appropriate catalyst material may have been included during the deposition of the metal-based material, at least at a certain deposition phase, so that at least a surface portion of the metal-based portion 105B may include the catalyst material. Consequently, also in this case, a highly selective deposition of the conductive capping layer material may be achieved in the subsequent electrochemical deposition process. For example, in one illustrative embodiment, copper-based metal may have been deposited as the metal 105 in an electrochemical deposition process, in which an appropriate catalyst material may be added to the plating solution, permanently or temporarily at a final phase, so that at least a central portion of the copper-based portion 105B may comprise the catalyst material, which may then also serve as a "growth center" for a further capping layer material deposition. Figure Id schematically shows the semiconductor device 100 after the completion of the electrochemical deposition process for selectively forming, in one illustrative embodiment, a conductive capping layer 106 comprised of one or more of CoWP, CoWB, NiMoP and NiMoB, thereby filling the recess 105R. Consequently, the metal-containing portion 105B forms an interface 105C with the conductive capping layer 106, thereby significantly enhancing the characteristics of the interface 105C with respect to its electromigration behavior. 
Thereafter, any excess material of the layer 106, if provided, may be removed and the surface topography of the device 100 may be planarized on the basis of well-established techniques, such as chemical mechanical polishing, electrochemical etching, and the like, if necessary. Figure Ie schematically shows the semiconductor device 100 after the completion of the above-described process sequence and with an etch stop layer 107 formed on the dielectric layer 102 and the layer 106. The etch stop layer 107, which may represent a first portion of a dielectric layer stack still to be formed, may be comprised of any appropriate material, such as silicon nitride, silicon carbide, nitrogen enriched silicon carbide, and the like. The layer 107 may be formed on the basis of well-established process techniques, such as PECVD and the like. Thereafter, a further dielectric material may be deposited on the etch stop layer 107 in accordance with device requirements. In illustrative embodiments, for example, in highly advanced semiconductor devices, a low-k dielectric material, such as SiCOH, or polymer materials and the like, may be formed above the etch stop layer 107 in any appropriate configuration. For instance, two or more different dielectric materials, partly in the form of a low-k material and partly in the form of "conventional" dielectrics, such as fluorine-doped silicon dioxide and the like, may be used. It should be appreciated that the dielectric layer to be formed on the etch stop layer 107 and its configuration may also depend on the manufacturing strategy used. For example, in a so-called dual damascene technique, the dielectric layer to be formed on the etch stop layer 107 may be designed such that it accommodates metal lines and vias, wherein the corresponding via openings and trench openings may be formed in a specified sequence, wherein the vias may be formed first and subsequently the trenches may be formed, while in other strategies, the trenches may be formed first and subsequently the vias may be fabricated. In still other strategies, so-called single damascene techniques, the dielectric layer to be formed on the etch stop layer 107 may be designed to receive corresponding vias and subsequently a further dielectric layer may be formed in which corresponding trenches are to be patterned. Without intending to restrict the present invention to any specific manufacturing strategy unless set forth in the appended claims, the following refers to a so-called via-first-trench-last approach, wherein it is to be appreciated that any other sequence may be used as well. Figure If schematically shows the device 100 in a further advanced manufacturing stage, wherein the device 100 comprises a dielectric layer stack 109 including the etch stop layer 107 and a further dielectric layer 108, which, as previously discussed, may be comprised of two or more individual dielectric layers. Moreover, a resist mask 111 is formed above the dielectric layer stack 109 and a via opening 110 is formed in the dielectric layer 108 and extends into the etch stop layer 107. The dielectric layer 108 may have been formed in accordance with the process techniques described above and the resist mask 111 may be formed on the basis of well-established photolithography techniques. Thereafter, an anisotropic etch process 112 may be performed on the basis of well-known etch recipes to etch through the dielectric layer 108, wherein the etch process may stop on and in the etch stop layer 107. 
For instance, well-known recipes including fluorine and carbon or fluorine, carbon and hydrogen compounds may be used, wherein, in some illustrative embodiments, the etch process 112 may be stopped upon reaching the etch stop layer 107 or after removal of only a minor portion thereof, as is indicated by a residual thickness 107R of the etch stop layer 107. Hence, in some illustrative embodiments, the etch process 112 may be performed such that only a minor amount of approximately 0-30% of the initial layer thickness of the etch stop layer is removed. A corresponding controlled end of the etch process 112 may be accomplished on the basis of endpoint detection, which optically detects specific volatile components in the etch ambient, when the material of the etch stop layer 107 is increasingly removed. It should be appreciated that, in these embodiments, pronounced etching of the etch stop layer 107, as may be performed on the basis of conventional etch recipes, which may also be used in other illustrative embodiments, may be avoided to reduce etch non-uniformities, since a further highly controllable etch step designed to remove the resist mask 111 and adjust a thickness of the residual material of the etch stop layer 107 in a highly controlled manner may be performed afterwards, as will be described with reference to Figure Ig. Thus, in these embodiments, the etch process 112 may be stopped on the basis of process requirements with respect to the process 112, without necessitating any extended over-etch times provided in other techniques as a compromise between reliable material removal of the layer 108, etch stop layer reduction and avoiding damage of the underlying material, as is typically the case in conventional strategies for forming copper-based metallization layers without the capping layer 106. In other embodiments, enhanced process control during the formation of the via opening 110 and the subsequent reduction of the thickness 107R may not be considered necessary, and thus conventional process strategies may be used. During the etch process 112, any volatile by-products may form fluorine-containing polymers, which may deposit on process chamber surfaces of the respective etch tool and the back side of the substrate 101, whereas this polymer material may not substantially deposit on the resist mask 111 due to the on-going particle bombardment caused by the plasma-based etch process 112. Consequently, in one illustrative embodiment, a source of fluorine is available for a subsequent highly controlled etch process to reduce the thickness 107R of the etch stop layer 107 and also remove the resist mask 111. Figure Ig schematically shows the semiconductor device 100 during a subsequent etch process 113 designed to reduce the thickness of the etch stop layer 107 to a specified target value in a highly controllable manner. In one particular embodiment, the etch process 113 is designed to remove the resist mask 111, wherein an intermediate stage is shown in which a substantial portion of the resist mask is already removed, while a remaining portion 111A is still present. Thus, in one particular embodiment, the substrate 101 may be kept in the same process chamber as previously used for the etch process 112 so that exposed chamber surfaces may have formed thereon the fluorine-containing polymer material previously deposited. Moreover, the etch process 113 may comprise a plasma ambient on the basis of oxygen, which is typically used for resist ashing. 
During the etch process 113, the polymer material deposited is also attacked and dissolved, thereby liberating fluorine which then enters the plasma ambient of the process 113 and is now available for the removal of material of the etch stop layer 107. In other illustrative embodiments, the fluorine may be supplied by an external source so as to establish the desired etch ambient for removing the resist mask 111 and etching the etch stop layer 107. Consequently, during the removal of the resist mask 111, the residual thickness 107R (Figure If) may also be reduced in a highly controllable manner such that a high across-substrate uniformity of the etch process 113, and thus of a target thickness 107T, may be achieved. Since the etching of the etch stop layer 107 is highly uniform, the conductive capping layer 106 may be provided with a reduced thickness, because the risk of etching through the capping layer 106 is reduced in the final etch process that opens the etch stop layer by removing the target thickness 107T and etching into the capping layer 106. It should be appreciated that, in other illustrative embodiments, the etch process 113 for removing the resist mask 111 and etching into the etch stop layer 107 and into the capping layer 106 may comprise separate steps. Next, according to the via-first-trench-last approach, a further lithography and etch sequence may be performed on the basis of well-established recipes to form a trench in an upper portion of the dielectric layer stack 109. Finally, the etch stop layer 107 may be opened, wherein, as explained above, in some embodiments, the highly uniform and reduced target thickness 107T may provide enhanced etch control so that the etch stop layer material may be reliably removed and it may be etched into the capping layer 106 without exposing the underlying metal portion 105B. Figure Ih schematically shows the semiconductor device 100 after completion of the etch process 113 and the above-described sequence for forming a trench above the via opening 110 and opening the etch stop layer 107. The device 100 now comprises the via opening 110 extending into the capping layer 106, wherein, however, the remaining thickness 106B is provided to avoid exposure of the underlying metal-containing portion 105B. For example, the thickness 106B may range from approximately 5-30 nm, thereby keeping the resulting via resistivity at a moderately low level. Moreover, a trench 116 is formed to connect to the via opening 110. Furthermore, a barrier layer 114 is formed on exposed surfaces of the trench 116 and the via opening 110, wherein the barrier layer 114 may be comprised of any appropriate material as is also explained with reference to the barrier layer 104. The barrier layer 114 may be formed by any appropriate deposition technique, such as CVD, PVD, electrochemical deposition, atomic layer deposition and the like. In one illustrative embodiment, the barrier layer 114 may be formed by a sputter deposition process 115, wherein a preceding sputter clean process, which is usually performed prior to depositing the barrier material on a copper-based metal region, due to the increased tendency of copper to form oxidized portions, may not be necessary or may be performed with reduced intensity due to the provision of the capping layer 106, thereby reducing the risk for undue material erosion of the exposed capping layer 106. 
Moreover, in some illustrative embodiments, after the deposition of the barrier layer 114, an appropriately designed re-sputtering process may be performed to substantially completely remove the material of the barrier layer 114 from a bottom 110B of the via opening 110. Consequently, the thickness 106B may then substantially determine the resulting contact resistance from the via 110 to the metal-containing portion 105B, since any contribution of the barrier layer 114 may be significantly reduced. In other embodiments, the barrier layer 114 may also be provided on the bottom 110B in accordance with established via formation techniques. Thereafter, an appropriate copper seed layer may be formed in embodiments in which a copper-based material is to be formed within the via. Subsequently, the trench 116 and the via opening 110 may be filled with a metal, such as a copper-based material, on the basis of well-established deposition recipes, such as electrochemical deposition techniques. After the deposition of the metal material, a similar process sequence may be performed as previously described with reference to Figures Ia-Ie, which describe the formation of the metal-based portion 105B including the capping layer 106. Figure Ii schematically shows the semiconductor device 100 after the completion of the above-specified process sequence. Hence, the semiconductor device 100 comprises a via 117 and a metal line 118 formed in an upper portion 118U of the dielectric layer 108. Moreover, in one embodiment, a capping layer 119 comprised of one or more of the materials as are specified above for the layer 106 may be formed on the metal line 118, thereby forming an interface 118C having an enhanced resistance against electromigration. As a result, the semiconductor device 100 comprises an enhanced interconnect structure, which may include copper-based metals that may in advanced applications be formed within low-k dielectric materials, wherein a significantly enhanced performance with respect to electromigration or other stress-induced material migration effects may be achieved due to the presence of one or more capping layers 119 and 106, wherein any via terminates within the layer 106 without exposing the underlying metal. In the embodiments described with reference to Figures Ia-Ii, the capping layers 119 and 106 are formed within recesses in the underlying metal portion. However, other techniques may be used, as will be described with reference to Figure 2, for exemplary embodiments of the present invention. Figure 2 schematically shows a semiconductor device 200 comprising a substrate 201 and a dielectric layer 202 formed thereabove, which may include a metal region 205B, such as a copper-based region, separated from the dielectric layer material 202 by an appropriate barrier layer 204. Regarding the characteristics of the various components 201, 202, 205B and 204, reference is made to the corresponding components as previously described with reference to Figures Ia-Id. Moreover, the semiconductor device 200 comprises a conductive capping layer 206 comprised of one or more of the materials as specified above for the layers 106 and 119, which is formed above the metal region 205B and the dielectric layer 202. 
Moreover, in some illustrative embodiments, an etch stop layer 207 may be provided, followed by a dielectric layer 208, in which may be formed a via opening 210. In one illustrative embodiment, the capping layer 206 may be formed in a substantially self-aligned manner by providing a catalyst material at least on top of the metal region 205B or a portion thereof, depending on the process strategy, as indicated by 205C, wherein the catalyst material 205C may be provided during the deposition of the copper-based material for forming the metal region 205B, as is also previously explained, or wherein the catalyst material 205C may be deposited in a selective manner, for instance by electroless selective deposition, after a process sequence as previously explained with reference to Figures Ia-Id. Consequently, any processes for recessing the copper region 205B may be omitted and the capping layer 206 may "grow" in a self-aligned fashion, thereby significantly reducing process complexity. Subsequently, the etch stop layer 207 may be formed according to well-established process recipes and the subsequent processing for forming the dielectric layer 208 and etching the via opening 210 may be performed in a similar fashion as previously described with reference to the components 108 and 110. Thereafter, the further processing may be performed as is previously described. As a result, the present invention provides an enhanced technique for the formation of metallization layers, in particular embodiments copper-based metallization layers, in which enhanced electromigration performance may be achieved, wherein particularly failure-prone portions, such as transition regions between vias and copper-based metal lines, may receive a highly efficient conductive capping layer comprised of materials, such as CoWP, CoWB, NiMoP and NiMoB, which may be reliably maintained throughout the entire manufacturing process. A thickness of the capping layer may be selected in accordance with device requirements, wherein, in some particular embodiments, a highly efficient etch strategy may be used, which may provide a precise opening of the etch stop layer and etching into the capping layer without exposing the underlying copper-based metal. Hence, the required layer thickness of the capping layer with respect to process margins may be selected moderately thin so as to not unduly affect the electrical resistance of the corresponding via. The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. For example, the process steps set forth above may be performed in a different order. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
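As a rough, added illustration (not part of the disclosure above) of why a thin capping-layer remnant keeps the via resistance low: treating the remaining thickness 106B as a series film under the via gives R = ρ·t/A. The resistivity below is an assumed, CoWP-like value, and the via diameter is taken from the 0.3 µm figure cited in the background; both are for illustration only.

```python
import math

# Rough estimate of the series resistance added by the capping-layer
# remnant under a via, R = rho * t / A. The resistivity is an assumed,
# CoWP-like value chosen for illustration; it is not from the disclosure.

def capping_resistance_ohms(resistivity_ohm_m, thickness_m, via_diameter_m):
    """Series resistance of a capping-layer remnant under a circular via."""
    area = math.pi * (via_diameter_m / 2.0) ** 2
    return resistivity_ohm_m * thickness_m / area

rho_cowp = 1.0e-6  # ~100 uOhm*cm expressed in Ohm*m (assumption)
for t_nm in (5, 15, 30):  # the 5-30 nm range given for thickness 106B
    r = capping_resistance_ohms(rho_cowp, t_nm * 1e-9, 0.3e-6)
    print(f"remnant {t_nm:2d} nm under a 0.3 um via -> ~{r:.2f} Ohm")
```

Under these assumptions the remnant adds well under an ohm per via, which is consistent with the statement that a 5-30 nm thickness keeps the via resistivity at a moderately low level.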
A semiconductor wafer with reduced misalignment errors at its periphery and a method for producing such a semiconductor wafer are described. The wafer includes one or more global alignment sites, having global alignment marks, on its periphery. Some patterning is located on the global alignment sites but does not cover the global alignment marks. The patterning on the global alignment sites reduces the amount of non-correctable misalignment errors experienced by the wafer. A buffer zone is provided around the global alignment marks to inhibit patterning over the marks. |
What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A semiconductor wafer comprising: a substrate; one or more mask patterns overlying said substrate; and one or more global alignment sites overlying said substrate, each said site including a global alignment mark and a partial mask pattern overlying a portion of said site and not overlying said alignment mark. 2. The semiconductor wafer of claim 1, further comprising a buffer zone between each said global alignment mark and said partial mask patterns partially overlying each said site. 3. The semiconductor wafer of claim 1, comprising four global alignment sites. 4. The semiconductor wafer of claim 3, wherein each said global alignment site is spaced from an adjacent said site by approximately ninety degrees. 5. The semiconductor wafer of claim 1, wherein said partial mask patterns decrease thermally-induced distortions at the global alignment sites. 6. A semiconductor wafer comprising: a substrate; one or more mask patterns overlying said substrate; and one or more global alignment sites overlying said substrate, each said site including a global alignment mark and a partial mask pattern overlying a portion of said site and not overlying said alignment mark, wherein said partial mask patterns reduce the amount of nonpatterned area within each said global alignment site and decrease thermally-induced distortions at the global alignment sites. 7. The semiconductor wafer of claim 6, further comprising a buffer zone between each said global alignment mark and said partial mask patterns partially overlying each said site. 8. The semiconductor wafer of claim 6, comprising four global alignment sites. 9. The semiconductor wafer of claim 8, wherein each said global alignment site is spaced from an adjacent said site by approximately ninety degrees. |
FIELD OF THE INVENTION
The present invention generally relates to semiconductor wafer fabrication, and more particularly to a semiconductor wafer having a decreased degree of misalignment errors and a method for decreasing the degree of misalignment errors.
BACKGROUND
For more than a decade, rapid thermal process (RTP) reactors have been utilized in the processing of semiconductor wafers. RTP reactors have a process cycle which takes considerably less time than conventional reactors. For example, while conventional reactors may require forty to ninety minutes to perform a particular processing function on wafers, RTP reactors need only two to fifteen minutes to accomplish the same processing function. A problem associated with RTP reactors is that high temperature gradients are created across the wafers-in-process, leading to thermal stress that causes plastic deformation of the wafers-in-process, particularly in unpatterned and unprocessed areas at the edges of the wafers-in-process. Plastic deformation in turn may cause photolithography pattern misregistration because alignment marks for lithographic pattern registration are typically provided at the edges of wafers. If these alignment marks are distorted due to wafer distortion, misalignment of the photolithography step from one wafer layer to another may occur, causing device failure as device features are misaligned from one wafer layer relative to another. For example, a stepper mechanism prints patterns on a photoresist layer of a wafer-in-process in sequence, moving a predetermined distance from one area of the wafer-in-process to another for each printing operation. The stepper continues this process until an entire layer of die patterns has been printed across the surface of the substrate. The stepper uses global alignment marks, also called combis, to ascertain its position above the wafer-in-process to determine where each die pattern is to be printed on a layer of photoresist. If the wafer-in-process has distortions in the combi sites (the unpatterned and unfabricated areas containing the combis, which are typically at the unpatterned wafer periphery), the printing of the photoresist may be misaligned from where actual printing should occur. Thus, since the global alignment marks have moved due to wafer distortion, the stepper may print the next layer of photoresist misaligned relative to the previous layer, creating fabrication misregistrations between the layers. Wafer distortions occurring at the periphery of wafers-in-process where the alignment marks are located are difficult to correct using conventional methods due to the random nature of such distortions. Specifically, with reference to FIGS. 1-4, the misalignments found at the periphery of a wafer due to distortion often do not conform, either in magnitude or phase, to the misalignments which may occur at the wafer's center. FIG. 1 illustrates raw grid data from the wafer's center, while FIG. 2 shows non-correctable grid data from the wafer's center. FIGS. 3 and 4 respectively illustrate the raw and non-correctable grid data from the wafer's periphery. It should be noted that while the misalignments in the wafer's center can be virtually completely corrected in the stepper device, a majority of the misalignments were retained along the wafer's periphery where the alignment marks are located. 
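To make the correctable/non-correctable distinction concrete, a stepper's grid correction can be thought of as a linear fit over the measured mark displacements; whatever the fit cannot absorb is the non-correctable residual. The sketch below is an illustration added here, not part of the original disclosure, and it assumes the common linear correction terms (translation, magnification, rotation):

```python
import numpy as np

# A stepper can compensate linear grid terms -- translation (tx, ty),
# magnification (mx, my) and rotation (r) -- so we fit
#   dx = tx + mx*x - r*y
#   dy = ty + r*x + my*y
# by least squares; what the fit cannot absorb is the non-correctable error.

def non_correctable(xy, dxy):
    """Residual mark displacements after removing the correctable terms.

    xy:  (n, 2) array of mark positions; dxy: (n, 2) measured displacements.
    """
    x, y = xy[:, 0], xy[:, 1]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    a_x = np.column_stack([ones, zeros, x, zeros, -y])  # rows modeling dx
    a_y = np.column_stack([zeros, ones, zeros, y, x])   # rows modeling dy
    A = np.vstack([a_x, a_y])
    b = np.concatenate([dxy[:, 0], dxy[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = b - A @ coeffs
    n = len(x)
    return np.column_stack([residual[:n], residual[n:]])

# A pure rotation of the grid is fully absorbed by the fit (residual ~ 0);
# random periphery distortion is not, and remains as non-correctable error.
```

This is why center misalignments, which tend to follow the linear model, can be almost fully corrected, while the random distortions at the periphery largely survive the correction.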
The retained misalignments as they relate to the global alignment marks will lead to a misregistration with the next patterning layer when the stepper uses the alignment marks for pattern printing. Referring to FIG. 5, a patterned wafer 10 is shown with patterned portions 14 and non-patterned portions 13. Some of the nonpatterned portions 13 serve as global alignment mark sites, also called combi sites, 12. As illustrated, four combi sites 12 are positioned about the periphery of the wafer 10, each separated from adjacent sites 12 by generally ninety degrees and offset from x- and y-axes. FIG. 6 shows a patterned wafer 20 having patterned portions 24 and non-patterned portions 23. As with wafer 10, some of the non-patterned portions 23 serve as combi sites 22. The four illustrated combi sites 22 are located on the x- or y-axes. Both wafers 10 and 20 show conventional patterning and locations of combi sites 12, 22 on the periphery of the wafers. Each of the wafers 10, 20 experiences thermal stress-induced misalignments at the unpatterned combi sites which may make it difficult for a lithographic patterning device, such as a stepper, to correctly pattern a photoresist layer. Accordingly, a technique is needed to lessen peripheral distortions at combi sites due to thermally-induced stresses to thereby diminish registration errors in semiconductor fabrication processes.
SUMMARY
The present invention provides a semiconductor wafer that includes a substrate, one or more mask patterns located on the substrate, and one or more global alignment sites, each of the sites including a mask pattern partially overlying the site and not overlying a global alignment mark. The present invention also provides a method for diminishing misalignments on a periphery of semiconductor wafers. The method includes the steps of determining the locations of global alignment marks on a wafer, determining the optimal size of partial fields to minimize nonpatterned areas adjacent to the global alignment marks, printing the partial fields at each masking layer during exposure of a photoresist material, and developing the photoresist material and processing the wafer at each mask layer. The foregoing and other advantages and features of the invention will be more readily understood from the following detailed description of preferred embodiments, which is provided in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a representation of grid misalignments in the center of a conventionally fabricated wafer. FIG. 2 is a representation of the wafer of FIG. 1 after correction of the grid misalignments with a stepper device. FIG. 3 is a representation of grid misalignments on the edge of a conventionally fabricated wafer. FIG. 4 is a representation of the wafer of FIG. 3 after correction of the grid misalignments with a stepper device. FIG. 5 is a representation of a patterned wafer with conventionally placed global positioning marks. FIG. 6 is a representation of a patterned wafer with global positioning marks placed on Cartesian coordinate axes. FIG. 7 is a representation of a patterned wafer constructed in accordance with another embodiment of the present invention. FIG. 8 is a representation of a patterned wafer constructed in accordance with an embodiment of the present invention. FIG. 9 is a graph showing grid non-correctable errors along the x-axis and the y-axis for conventionally fabricated wafers and for wafers constructed in accordance with an embodiment of the present invention. FIG. 
10 is a flow diagram of the method for minimizing non-correctable misalignments experienced near a wafer's periphery in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The invention, exemplary embodiments of which are described herein with reference to the figures, relates to printing patterning and fabricating portions of a die structure near global alignment marks to reduce the amount of unpatterned and unfabricated area around the marks and thereby reduce the effects of thermally-induced stress on the wafer in the peripheral areas of the wafer, including around the global alignment marks. As noted above, numerous patterning and associated fabrication levels are generally provided on any given wafer. Several of the wafer levels are alignment critical, meaning that accurate registration must exist between lower levels and upper levels in order to maintain adequate die yield. For modern DRAM device manufacturing, for example, some of the alignment critical levels are at the capacitor level, the field isolation level, the gate stack level, and the conductive plug formation level. With reference to FIGS. 7-9, the effects of RTP were evaluated by examining an alignment critical level and by examining the registration between two alignment critical levels. Specifically, the capacitor level and the field isolation level were examined. The effects on the registration of these two levels relative to one another were quantified by looking at combi displacement and combi residual. The effects of RTP on overlay appear to be directly dependent on the amount of unpatterned area onto which the combis are placed. The larger this area is, the stronger the effects are and the greater the misalignment becomes across the wafer. As a consequence of this effect, heat-induced wafer deformation increases with increasingly larger unpatterned areas, and the largest periphery misalignments tend to aggregate around combi locations. The terms "wafer" and "substrate" as used herein are to be understood as including silicon, silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. Furthermore, when reference is made to a "wafer" or "substrate" in the foregoing and following descriptions, previous process steps may have been utilized to form regions or junctions in the base semiconductor structure or foundation. In addition, the semiconductor need not be silicon-based, but could be based on silicon-germanium, germanium, or gallium arsenide. FIG. 7 illustrates a patterned wafer 100 which includes first portions 104 and second portions 103 on a substrate 114. The first portions 104 are characterized as being mask patterns, whereas the second portions 103 are either non-patterned or are partially mask patterned as described below. Some of the second portions 103 serve as combi sites 102. The combi sites 102 are generally located on the periphery 101 of the wafer 100. Along the periphery 101, any patterning is electrically non-functional, but provides a certain mechanical property which lessens thermally-induced misalignments. The combi sites 102 each include a combi 110. While two combi sites 102 are shown in FIG. 7, more than two combi sites may be located on the wafer 100, each being offset from x- and y-axes of a Cartesian coordinate system. 
If four combi sites 102 are positioned on the wafer 100, each may be separated from adjacent sites 102 by about ninety degrees. To alleviate to some extent the problem of misalignment of the combis 110 due to thermal stresses, partial mask patterning 106 is added to the combi sites 102. Generally, a stepper (not shown) is utilized to place rectangularly configured mask patterning 104 down on a photoresist layer over the wafer 100. The stepper can be programmed to put down only a portion of the amount of patterning which theoretically could be output, thereby allowing it to put down the mask patterning 106 in the combi sites 102 without mask patterning over the combis 110. While it is important to minimize the amount of non-patterned area at the periphery 101 of the wafer 100, the combis 110 themselves are not mask patterned over. An imaginary buffer 112 surrounds each combi 110, and the stepper puts down the mask patterning 106 outside of the buffers 112 to prevent any of the patterning 106 from extending over the combis 110. FIG. 8 illustrates a wafer 200 having combi sites 202 located along either the x- or y-axis of the Cartesian coordinate system along or near the wafer's periphery 201. Although FIGS. 7 and 8 show wafers 100, 200 with combi sites 102, 202 located either offset from a Cartesian coordinate system or along the Cartesian coordinate system, it is to be understood that the invention is not so limited. The combi sites 102, 202 may be located anywhere along the periphery of the wafers 100, 200. The wafer 200 includes first portions 204 and second portions 203 on a substrate 214. The first portions 204 include fill patterning, while the second portions 203 are wholly non-patterned or partially mask patterned. Some of the second portions 203 include the combi sites 202. Each combi site 202 has a combi 210, which is surrounded by an imaginary buffer 212. A stepper (not shown), which places mask patterning in the first portions 204, can be programmed to place smaller rectangularly-shaped mask patterning 206 in the combi sites 202 to reduce the amount of non-patterned area. The mask patterning 206 is put down outside of the buffers 212 to prevent mask patterning 206 from being placed over the combis 210. FIG. 9 illustrates the effect on grid non-correctable errors caused by placing partial mask patterning 106, 206 in combi sites 102, 202. For standard combi sites, such as sites 12 or 22 on, respectively, wafers 10 or 20, the non-correctable errors found are 0.0115 µm in the direction of the x-axis and 0.0078 µm in the direction of the y-axis. In comparison, the non-correctable errors found for combi sites 102, 202 are 0.0086 µm along the x-axis and 0.0073 µm along the y-axis. A test was conducted of various combi designs to ascertain whether certain designs would result in an increased die yield, especially around a wafer's periphery. The different combi designs tested included a standard combi and a standard combi with partial field overlay. The yield of dies from the standard combi with partial field overlay was forty dies greater than the yield from the standard combi. Specifically, the average yield of dies from the standard combi with partial field overlay was 466, with 426 dies on average yielded from the standard combi. Further, the increase in die yield occurred at the wafers' peripheries. With specific reference to FIG. 10, a method will next be described for minimizing the deleterious effects of thermally-induced wafer misalignments affecting the positioning of combis. 
At step 400, a determination is made of the locations of the global alignment marks. As noted above, generally the global alignment marks or combis 110, 210 are located near a wafer's periphery and may be equally spaced from adjacent combis 110, 210. Next, at step 402, the optimal size of partial field mask patterns is determined. Taken into consideration is the optimal size of a rectangularly-shaped mask pattern that does not impinge on the area bounded by the buffer zones 112, 212. At step 404, the partial field mask patterns are printed at each masking layer during exposure of a photoresist material. Finally, at step 406, the photoresist material exposed during step 404 is developed. While the invention has been described in detail in connection with preferred embodiments known at the time, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims. |
A method and apparatus are provided for fault notification based on a severity level. The method comprises detecting a fault associated with a processing tool that is adapted to process one or more workpieces, determining a fault severity level of the detected fault, and selecting at least one user to notify of the fault based on the severity level of the fault. |
What is claimed is: 1. A system, comprising: a processing tool adapted to process a semiconductor device; and a fault notification module adapted to: detect a fault associated with the processing tool; select at least one user to notify of the fault based on a fault severity level; and transmit information related to the fault to the at least one selected user; and an advanced process control framework coupled between the processing tool and the fault notification module. |
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to a semiconductor fabrication process, and, more particularly, to providing notification of faults detected in the semiconductor fabrication process based on a severity level associated with the faults.
2. Description of the Related Art
There is a constant drive within the semiconductor industry to increase the quality, reliability and throughput of integrated circuit devices, e.g., microprocessors, memory devices, and the like. This drive is fueled by consumer demands for higher quality computers and electronic devices that operate more reliably. These demands have resulted in continual improvements in the manufacture of semiconductor devices, e.g., transistors, as well as in the manufacture of integrated circuit devices incorporating such transistors. Additionally, reducing the defects in the manufacture of the components of a typical transistor also lowers the overall cost per transistor as well as the cost of integrated circuit devices incorporating such transistors.
During the fabrication process, various events may take place that affect the performance of the devices being fabricated. That is, variations in the fabrication process steps result in device performance variations. Factors, such as feature critical dimensions, doping levels, contact resistance, particle contamination, etc., all may potentially affect the end performance of the device. Various tools in the processing line are controlled, in accordance with performance models, to reduce processing variation. Commonly controlled tools include photolithography steppers, polishing tools, etching tools, and deposition tools. Pre-processing and/or post-processing metrology data is supplied to process controllers for the tools. Operating recipe parameters, such as processing time, are calculated by the process controllers based on the performance model and the metrology information to attempt to achieve post-processing results as close to a target value as possible. Reducing variation in this manner leads to increased throughput, reduced cost, higher device performance, etc., all of which equate to increased profitability.
Semiconductor manufacturing processes, which have become more reliable and robust over the past few years, may include a plurality of processing tools that cooperate with each other to process semiconductor devices, such as microprocessors, memory devices, ASICs, etc. To verify that the processing tools are operating within acceptable parameters, it has become increasingly desirable to monitor the operating conditions of such processing tools.
Today's semiconductor manufacturing processes may include an intricate network of multiple processing tools for manufacturing semiconductor devices. While the benefits of linking multiple processing tools are inherently obvious, there can, however, be some drawbacks, particularly from the standpoint of troubleshooting problems or faults, and then timely notifying the appropriate personnel so that corrective action may be taken. Failing to notify the appropriate personnel of the detected faults in a timely manner may naturally delay any potential corrective measures that can be taken to address the problem.
Because of these delays, the operation of the semiconductor manufacturing process may be adversely affected, thereby resulting in a potential increase in costs for the manufacturer and consumer. The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.
SUMMARY OF THE INVENTION
In one embodiment of the present invention, a method is provided for fault notification based on a severity level. The method comprises detecting a fault associated with a processing tool that is adapted to process one or more workpieces, determining a fault severity level of the detected fault, and selecting at least one user to notify of the fault based on the severity level of the fault.
In another embodiment of the present invention, an apparatus is provided for fault notification based on a severity level. The apparatus comprises a control unit communicatively coupled to a storage unit. The storage unit is adapted to store a first contact and a second contact. The control unit is adapted to receive information related to faults from a processing tool, the faults having at least one of a first fault severity level and a second fault severity level associated therewith, and to transmit the information related to the faults of the first fault severity level to the first contact and the information related to the faults of the second fault severity level to the second contact.
In a further embodiment of the present invention, an article comprising one or more machine-readable storage media containing instructions is provided for fault notification based on a severity level. The one or more instructions, when executed, enable the processor to detect a fault associated with a processing tool that is adapted to process one or more semiconductor devices, determine a fault severity level of the detected fault, and select at least one user to notify of the fault based on the severity level of the fault. The instructions further enable the processor to transmit information related to the fault to the at least one selected user.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:
FIG. 1 illustrates a manufacturing system, including an APC framework, in accordance with one embodiment of the present invention;
FIG. 2 illustrates a block diagram of the APC framework of FIG. 1, in accordance with one embodiment of the present invention;
FIG. 3 illustrates a flow diagram of a method that may be implemented in the manufacturing system of FIG. 1 to transmit fault-related information based on a fault severity level associated with a fault, in accordance with one embodiment of the present invention;
FIG. 4 illustrates a flow diagram of a method of assigning a fault severity level to a detected fault in the manufacturing system of FIG. 1, in accordance with one embodiment of the present invention;
FIG. 5 illustrates an exemplary fault database for storing detected faults and the associated fault severity levels in the manufacturing system of FIG. 1, in accordance with one embodiment of the present invention;
FIG. 6 illustrates an exemplary database identifying one or more recipients to be notified of detected faults in the manufacturing system of FIG. 1, in accordance with one embodiment of the present invention;
FIG.
7 illustrates a flow diagram of a method for modifying the fault severity level of faults that are detected in the manufacturing system of FIG. 1, in accordance with one embodiment of the present invention; and
FIG. 8 illustrates exemplary fault notification displayed on a communications device that is adapted to receive information relating to faults that are detected in the manufacturing system of FIG. 1, in accordance with one embodiment of the present invention.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.
Turning now to the drawings, and specifically referring to FIG. 1, a block diagram of a manufacturing system 100 for a semiconductor fabrication process is illustrated in accordance with one embodiment of the present invention. Although the invention is described as it may be implemented in a semiconductor fabrication facility, the invention is not so limited and may be applied to other manufacturing environments. The techniques described herein may be applied to a variety of workpieces including, but not limited to, microprocessors, memory devices, digital signal processors, application specific integrated circuits (ASICs), or other similar devices. The techniques may also be applied to workpieces other than semiconductor devices.
The system 100 includes a plurality of processing tools 105(1-n). In the illustrated embodiment, the processing tools 105(1-n) are coupled to respective equipment interfaces (EI) 110 (shown as EI 110(1-n) in FIG. 1). Each of the equipment interfaces 110 retrieves various operational data from its respective processing tool 105, and communicates this data to an Advanced Process Control (APC) framework 120 to determine whether the processing tool 105 is experiencing a faulty operation. Each equipment interface 110 may further receive control signals from the APC framework 120 that may be used to control the respective processing tool 105. For example, a control signal from the APC framework 120 may be used to shut down the first processing tool 105(1) if the operational data that was sent by the first equipment interface 110(1) was deemed faulty by the APC framework 120.
As utilized herein, the term "operational data" may include metrology data. Exemplary processing tools 105(1-n) for a semiconductor device fabrication environment include photolithography steppers, etch tools, deposition tools, polishing tools, rapid thermal processing tools, test-equipment tools, implantation tools, etc. In one embodiment, the processing tool 105 may be a multi-chambered processing tool, where, for example, each chamber may represent a "processing tool" for the purposes of this discussion. A processing tool 105, in one embodiment, may be a metrology tool that provides metrology data through its associated equipment interface 110 based on the lot of wafers received by the processing tool 105. A metrology tool may measure a variety of parameters related to the wafers that have been processed by the processing tool 105, parameters such as critical dimensions, layer-to-layer overlay, film thickness, and the like.
In one embodiment, the processing tools 105(1-n) may be downstream to each other. That is, the second processing tool 105(2) may be downstream to the first processing tool 105(1), the third processing tool 105(3) may be downstream to the second processing tool 105(2), and so forth. As such, a workpiece that is processed by the first processing tool 105(1) may, for example, be provided to the second processing tool 105(2), which may further process the workpiece before it is processed by the next processing tool 105(3). This process may continue until the last processing tool 105(n) has completed processing the workpiece.
The processing tools 105(1-n) of the system 100, in one embodiment, may perform various processing steps to create a packaged semiconductor device. For example, the processing tools 105(1-n) may be used for manufacturing the raw semiconductor material, slicing the semiconductor crystal ingot into individual wafers, fabricating (e.g., etching, doping, ion implanting) the wafers, and testing and packaging the completed semiconductor devices. The number of processing tools 105(1-n) employed in the system 100 may be implementation specific, and thus may vary from one embodiment to another depending on the particular processing steps desired.
Generally, each processing tool 105 performs selected processing steps in accordance with the recipe defined for the workpiece to be processed in the processing tool 105. In one embodiment, the processing tool 105 may have more than one recipe associated therewith. For example, a processing tool 105 may perform selected processing steps on one workpiece according to a first recipe, and other processing steps on another workpiece according to a second recipe. Several discrete processes may be involved in semiconductor manufacturing, and, as such, multiple manufacturing processing tools 105(1-n) may be utilized to process the workpiece before arriving at the final product.
The APC framework 120 may be any one of a variety of arrangements that facilitates communications to and from the processing tools 105(1-n). In one embodiment, the APC framework 120 may include a control unit 121 that manages the communications to and from the APC framework 120. The control unit 121 may also control the overall operations of one or more of the processing tools 105(1-n).
The processing tools 105(1-n) may include one or more internal sensors (not shown) for measuring operational data, which may then be transmitted through the associated EI 110 of the processing tools 105(1-n).
In addition to internal sensors, the processing tools 105 may also be coupled to respective external sensors 115(1-n). The sensors 115(1-n) measure additional operational data that may or may not be ascertained by the associated processing tool 105 itself. For example, the sensor 115 may be used to determine a temperature range or other environmental or ambient data near or around the associated processing tool 105. In alternative embodiments, the sensor 115 may be used to sense various other operational parameters associated with the processing tool 105, and, thus, need not be limited to the aforementioned examples. It should be appreciated that, in one embodiment, some or all of the features of the sensors 115(1-n) may be integrated within the processing tools 105(1-n) themselves.
The sensor 115 may be embodied as a simple data acquisition program, such as a C++ standalone program acquiring data from a thermocouple wire, for example. Alternatively, the sensor 115 may be embodied as a full-fledged LABVIEW application, acquiring data through multiple transducers (not shown). It will further be appreciated that the sensor 115 need not be used at all, and the APC framework 120 may rely upon the operational data forwarded from the processing tool 105. If used, in one embodiment, the sensor 115 forwards the additional operational data to the APC framework 120 for analysis.
The system 100 includes a fault detection and notification unit 122 having a control unit 123 and a storage unit 124. The fault detection and notification unit 122, which in the illustrated embodiment includes a fault detection and classification module 125 and a fault notification module 127, receives the operational data associated with the processing tools 105, processes the data to determine if a fault occurred in the manufacturing system 100, determines a severity level of the fault, if detected, and notifies the appropriate personnel upon detecting the fault based on the determined fault severity level. For the purposes of this discussion, the operational data that is associated with the processing tool 105 may be received through the EI 110 or the sensor 115 or any other desirable source.
The fault detection and classification module 125 stores fault classification related information in a fault database 126, an example of which is provided later in FIG. 5. As described in more detail below, in one embodiment, the fault detection and notification unit 122 may increase the determined fault severity level of detected faults in response to an occurrence of selected conditions and then notify the appropriate personnel based on the new fault severity level.
The fault detection and notification unit 122, which is coupled to the APC framework 120, receives the operational data of the processing tool 105 via the APC framework 120. In one embodiment, the operational data provided to the fault detection and notification unit 122 via the APC framework 120 may include a date and time stamp that may be utilized by the fault detection and classification module 125 to determine at least an approximate (if not substantially the exact) time and date the fault occurred in the processing tool 105. Prior to sending the operational data to the fault detection and notification unit 122, the APC framework 120 may, in one embodiment, translate the operational data to a format that is recognizable by the fault detection and notification unit 122.
In an alternative embodiment, the fault detection and notification unit 122 may be integrated into the APC framework 120, and, as such, the translation of the operational data to a format that is recognizable by the fault detection and notification unit 122 may not be necessary.
In accordance with one embodiment, the fault detection and classification module 125 includes a commercially available software package, such as ModelWare, for example, that provides fault detection analysis of the processing tools 105. It will be appreciated, however, that other types of commercially available fault detection software may also be used in lieu thereof without departing from the spirit and scope of the present invention.
As mentioned, the fault detection and notification unit 122, in the illustrated embodiment, includes the fault notification module 127. The fault notification module 127, based on the faults detected by the fault detection and classification module 125, notifies one or more recipients identified in a database 128 based on the determined severity level of the faults, as described in more detail below. The database 128, in one embodiment, may be any one of a variety of available databases, including, but not limited to, commercially available databases from Microsoft(R) and IBM(R). Alternatively, the database 128 may be a compilation of information that is stored in a data file and is accessible by the fault notification module 127.
The fault detection and classification module 125 of the fault detection and notification unit 122, in one embodiment, compares the received operational data from the APC framework 120 to fault model data. The fault model data includes operational data of other similar-type tools, where it was previously known that such tools had operated within acceptable operational limits. The types of faults that may be detected by the fault detection and classification module 125 include processing and/or operational faults in fabrication. Examples of processing faults, in the context of semiconductor manufacturing, may include, but are not necessarily limited to, non-optimal preheating of the chamber, catastrophic failure where a broken wafer is detected, abnormal nitrogen (N2) flow rate, temperature overshoots at the top of a ramp, tube temperature measurement drifts, excessive pressures, etc. Examples of operational faults detected by the fault detection and classification module 125 may include, in the context of semiconductor manufacturing, interrupted/resumed processing, or improper wafer sleuth prior to Rapid Thermal Anneal (RTA), etc. Thus, what constitutes a "fault" may vary depending upon the type of workpieces processed and the nature of the processing operation performed in the processing tool 105. Furthermore, as utilized herein, the term "fault" may include any undesirable condition associated with the processing tool 105, and may include errors or alerts that are generated or received as a result of an occurrence of the undesirable condition.
The fault detection and notification unit 122, in the illustrated embodiment, includes an interface 135 that provides the communications interface to a data network 140. The data network 140 may be a packet-switched data network, such as a data network according to the Internet Protocol (IP). Examples of the data network 140 include local area networks (LANs), wide area networks (WANs), intranets, and the Internet.
One version of IP is described in Request for Comments (RFC) 791, entitled "Internet Protocol," dated September 1981. Other versions of IP, such as IPv6, or other connectionless, packet-switched standards may also be utilized in further embodiments. A version of IPv6 is described in RFC 2460, entitled "Internet Protocol, Version 6 (IPv6) Specification," dated December 1998. The data network 140 may also include other types of packet-based data networks in further embodiments. Examples of such other packet-based data networks include Asynchronous Transfer Mode (ATM) and Frame Relay networks. As utilized herein, a "data network" may refer to one or more communications networks, channels, links, or paths, and systems or devices (such as routers) used to route data over such networks, channels, links, or paths.
Associated with the interface 135 may be a network protocol stack (not shown), with one example being a UDP/IP (User Datagram Protocol/Internet Protocol) stack. UDP is described in RFC 768, entitled "User Datagram Protocol," dated August 1980. In one embodiment, both inbound and outbound packets may be passed through the interface 135.
In the exemplary arrangement of FIG. 1, the fault notification module 127 may be communicatively coupled to a plurality of communications devices 145(1-3) via the data network 140. It should be appreciated, however, that in an alternative embodiment, the fault notification module 127 may communicate with one or more of the communications devices 145(1-3) over mediums without the intervening data network 140. As such, the fault detection and notification unit 122 may include interfaces that are adapted to facilitate communications with the various types of communications devices 145(1-3). Although not shown in FIG. 1, in one embodiment, the communications devices 145(1-3) may communicate with the fault detection and notification unit 122 through an Internet service provider.
The communications device 145 may be any suitable device that is capable of receiving and displaying the fault-related information that is transmitted by the fault notification module 127. Examples of the communications device 145 may include a data processing system, a telephone, and a wireless communications device in the form of a personal digital assistant, cellular telephone, pager, and the like.
The fault detection and notification unit 122, in one embodiment, includes a web server module 150, which may be capable of receiving requests over the data network 140 and responding to such requests. For example, the web server module 150 may include an HTTP (Hypertext Transfer Protocol) service routine 155 that is capable of receiving HTTP requests over the data network 140, as well as sending HTTP responses over the data network 140. The Hypertext Transfer Protocol specifies how a client and server may establish a connection, how the client may request data from the server, how the server may respond to the request, and how the connection may be closed. One version of HTTP is described in RFC 2068, entitled "Hypertext Transfer Protocol-HTTP/1.1," dated January 1997. In one embodiment, the web server module 150 and the HTTP service routine 155 may be stored in the storage unit 124 or in another storage unit (not shown).
The storage unit 124, in one embodiment, may include a central web page module 157 that, as explained below, allows users to access useful information pertinent to the processing tools 105(1-n).
The information may be presented to the user in a graphical format, in one embodiment, thereby allowing users to view fault related information, for example, regarding one or more of the processing tools 105(1-n). The central web page module 157, in one embodiment, may include a hypertext markup language (html) file and may include an executable script that is able to retrieve the desired information for the processing tools 105(1-n). The term "module," as utilized herein, may be implemented in hardware, software, or a combination thereof. The modules 125, 127, 157, if implemented in software, may be stored in the storage unit 124.
It should be appreciated that the illustrated components shown in the block diagram of the system 100 in FIG. 1 are illustrative only, and that, in alternative embodiments, additional or fewer components may be utilized without deviating from the spirit or scope of the invention. For example, in one embodiment, the one or more processing tools 105 may not have an external sensor 115. Additionally, it should be noted that although various components, such as the equipment interface 110 of the system 100 of FIG. 1, are shown as stand-alone components, in alternative embodiments, such components may be integrated into the processing tool 105. Similarly, the fault detection and notification unit 122 may be integrated into the APC framework 120.
Turning now to FIG. 2, a more detailed representation of the APC framework 120 is provided. The APC framework 120 is a component-based architecture comprised of interchangeable, standardized software components enabling run-to-run control and fault detection of the processing tool 105. The APC framework 120 includes a machine interface (MI) 210 for communication between the processing tool 105 and the APC framework 120 to collect operational data therefrom. The APC framework 120 further includes a sensor interface (SI) 220 for communication between the sensor 115 and the APC framework 120. The sensor interface 220 also collects operational data of the processing tool 105 through the sensor 115. The APC framework 120 further includes an applications interface (AI) 240 for interfacing with third-party applications that run on the fault detection and classification module 125 to analyze the operational data received via the machine and sensor interfaces 210, 220. In the illustrated embodiment, the third-party application is the fault detection and notification unit 122. A data channel 250 is further provided to allow for communication of data among the machine and sensor interfaces 210, 220 and the applications interface 240 of the APC framework 120.
The machine interface (MI) 210 couples to the equipment interface 110 to serve as an interface between the processing tool 105 and the APC framework 120. The machine interface 210 supports the setup, activation, monitoring, and data collection of the processing tool 105. The machine interface 210 receives commands, status events, and collected data from the equipment interface 110 and forwards this information to other components of the APC framework 120, namely the applications interface 240. Any responses that are received by the machine interface 210 from the other components of the APC framework 120 are routed to the equipment interface 110 for delivery to the processing tool 105. As previously discussed, this may include a control signal from the fault detection and notification unit 122 (see FIG.
1) to manipulate the processing tool 105 if a faulty condition is detected.
The machine interface 210 may also reformat and restructure the messages between the specific communications protocol utilized by the equipment interface 110 and the Common Object Request Broker Architecture Interface Definition Language (CORBA IDL) communications protocol used by the components of the APC framework 120. The manner in which the machine interface 210 performs such translation between the equipment interface-specific communications protocol and the CORBA IDL protocol of the APC framework 120 is well known to those of ordinary skill in the art. Accordingly, the specific translation process between these two formats will not be discussed herein to avoid unnecessarily obscuring the present invention.
The sensor interface 220 is coupled to the sensor 115 and serves as an interface between the sensor 115 and the APC framework 120. The sensor interface 220 provides setup, activation, monitoring, and data collection for the sensor 115. Similar to the machine interface 210, the sensor interface 220 may also reformat and restructure the messages between the specific communications protocol utilized by the sensor 115 and the CORBA IDL protocol used by the components of the APC framework 120.
The applications interface 240 supports the integration of third-party tools (e.g., commercial software packages, such as ModelWare, MatLab, and Mathematica, for example) to the APC framework 120. Typically, these third-party tools do not provide the standard CORBA IDL protocol known to the APC framework 120. Accordingly, the applications interface 240 provides the necessary translation between the communications protocol utilized by the third-party tool and the CORBA protocol used by the APC framework 120.
Referring now to FIG. 3, a flow diagram of a method that may be implemented in the manufacturing system 100 of FIG. 1 is illustrated, in accordance with one embodiment of the present invention. The fault detection and notification unit 122 detects (at 310) a fault associated with at least one of the processing tools 105(1-n). The fault may be detected (at 310) in any one of a variety of ways. In one embodiment, assuming that metrology data is received from one of the processing tools 105(1-n), the fault detection and classification module 125 may determine (at 312) the fault based on the received metrology data. That is, a fault may, for example, be detected if the metrology data indicates that the measured parameters of the workpieces are outside the range of acceptable parameters. In another embodiment, the fault may be detected (at 310) based on the fault detection and classification module 125 receiving (at 314) operational data from at least one of the processing tools 105(1-n) and comparing (at 316) the received operational data with the fault model data. As mentioned earlier, a variety of faults may be detected in the manufacturing system 100, including processing faults and operational faults.
The fault detection and classification module 125, in one embodiment, stores (at 317) information that is associated with the detected fault, such as a fault identifier, in the fault database 126. The "fault identifier" may be any information that identifies the detected fault (at 310), and may include information such as a fault code, for example, associated with the detected fault. In one embodiment, the time and date of the occurrence or detection of the fault may also be stored in the fault database 126.
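As a concrete illustration of the comparison just described (at 314 and 316), the following minimal sketch checks operational data against fault-model limits and records a fault identifier with a time and date stamp. The parameter names and limit values are hypothetical assumptions, not values from this description.

    import datetime

    # Hypothetical fault-model limits: parameter -> (low, high) range.
    FAULT_MODEL = {
        "chamber_temp_c": (380.0, 420.0),
        "n2_flow_sccm": (95.0, 105.0),
        "pressure_torr": (0.8, 1.2),
    }

    def detect_faults(operational_data):
        """Compare tool operational data against fault-model limits and
        return fault records carrying an identifier and a timestamp."""
        faults = []
        for param, value in operational_data.items():
            low, high = FAULT_MODEL.get(param, (float("-inf"), float("inf")))
            if not low <= value <= high:
                faults.append({
                    # Stands in for a fault code stored in fault database 126.
                    "fault_id": f"{param}_out_of_range",
                    "detected_at": datetime.datetime.now(),
                    "value": value,
                    "limits": (low, high),
                })
        return faults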
An example of the fault database 126 is provided below in FIG. 5.
Referring again to FIG. 3, the fault detection and classification module 125 determines (at 318) a severity level of the fault that is detected (at 310). The severity level of the fault may be determined (at 318) in a variety of ways, described in more detail below. In one embodiment, the severity level of the fault may be determined (at 318) based on the impact of the fault on the operation of the processing tool 105. In another embodiment, the severity level of the fault may be determined (at 318) based on the number of times a fault is detected within a selected period. Other ways of determining the fault severity levels of the detected faults are described later. In one embodiment, a variety of fault severity levels may be assigned to the detected faults, such as a "very high" severity, "high" severity, "medium" severity, "low" severity, "very low" severity, and the like. In an alternative embodiment, the severity level may be assigned to the faults based on a numeric scale, such as, for example, a scale of 1 to 10, where the fault severity level "1" may represent the highest level of severity and the fault severity level "10" may represent the lowest level of severity.
The fault notification module 127 identifies (at 320) one or more recipients to notify of the fault detected (at 310). In one embodiment, the fault notification module 127 may identify the recipients (at 320) by accessing (at 325) the database 128 (described below in FIG. 6) and determining the recipients listed therein. Thus, in one embodiment, all of the recipients identified in the database 128 may be notified of the fault detected (at 310) by the fault notification module 127. In an alternative embodiment, the fault notification module 127 may identify (at 328) the recipients based on a notification condition that is associated with each of the recipients. The "notification condition," for example, may dictate notifying selected recipients of the detected fault based on the determined severity level of the fault. Thus, for instance, a first group of recipients may be notified only of faults having an associated fault severity level of "high," while another group of recipients may be notified only of faults having an associated fault severity level of "low" to "medium," and yet another group of recipients may be notified of all the faults regardless of the associated fault severity level. Of course, the precise manner of associating faults and the fault severity level to the appropriate recipients is a matter of design choice, and thus may vary from one implementation to another.
The fault notification module 127 determines (at 330) the fault-related information to transmit to the one or more recipients identified (at 320). The term "fault-related information," as utilized herein, refers to information that is pertinent to the detected fault, and thus may include information such as the processing tool 105 with which the fault is associated, the fault identifier, the time and location of the fault, the impact of the fault, the determined severity level of the fault, a list of other intended recipients, and the like. The fault detection and notification unit 122 transmits (at 340) the fault-related information determined (at 330) to each of the recipients identified (at 320). In one embodiment, the fault-related information may be transmitted to another processing tool 105 that may also be affected by the fault.
Referring now to FIG.
4, a flow diagram of a method for determining the fault severity level of a detected fault is illustrated, in accordance with one embodiment of the present invention. The fault detection and classification module 125 determines (at 410) whether the operation of the processing tool 105 is affected by the occurrence of the fault. It may be possible that not all faults affect the operation of the processing tool 105, particularly faults that comprise alerts or informative messages that inform of potential future problems (e.g., a chemical supply is low, preventative maintenance is desired) rather than faults that reflect current operational problems. If it is determined (at 410) that the operation of the processing tool 105 is not affected, then the fault detection and classification module 125 determines (at 415) the number of times the fault has been detected previously during a preselected time period. The preselected time period may be, for example, a 24-hour period, or it may be any other desirable time interval.
Based on the number of detections (at 415) of the fault during the preselected time period, the fault detection and classification module 125, in one embodiment, determines the fault severity level that is associated with the fault. For example, if the fault detection and classification module 125 determines (at 420) that the number of detections is less than a preselected threshold, then the fault may be assigned (at 425) a "low" fault severity level (relative to other faults) because the fault does not affect the operation of the processing tool 105 (see block 410) and has been detected fewer than the number of times defined by the preselected threshold value (see block 420) within the preselected time period (see block 415). If, however, it is determined (at 420) that the number of detections is greater than the preselected threshold, then the fault, in one embodiment, may be assigned (at 430) a relatively higher (e.g., "medium") fault severity level.
If the fault detection and classification module 125 determines (at 410) that the detected fault affects the operation of the processing tool 105, then the fault detection and classification module 125 determines (at 435) whether the processing tool 105 is operable in a degraded mode. If it is determined (at 435) that the processing tool 105 is not capable of being operated in the degraded mode, then, in one embodiment, the fault may be assigned (at 440) a "very high" fault severity level, in part, because the processing tool 105 may be completely inoperable. If, however, it is determined (at 435) that the processing tool 105 may be operated in the degraded mode, then the fault detection and classification module 125 may assign (at 450) the fault less than the maximum fault severity level, such as a "high" fault severity level as opposed to a "very high" fault severity level. The fault severity level assigned to the faults may be stored in the fault database 126, in one embodiment, as shown in FIG. 5.
Referring now to FIG. 5, the fault database 126 with exemplary fault-related contents stored therein is illustrated. The fault database 126 may include a plurality of entries 510(1-5), where each detected fault is stored in an entry 510.
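Before examining the database contents, the severity-assignment flow of FIG. 4 just described can be condensed into a short sketch. The threshold of three detections is an illustrative assumption; the preselected threshold and time period are left to the implementation.

    def assign_severity(affects_operation, degraded_mode_available,
                        detections_in_period, detection_threshold=3):
        """Assign a fault severity level following the flow of FIG. 4."""
        if affects_operation:
            # Blocks 435/440/450: an affected tool gets "very high"
            # severity unless it can still run in a degraded mode.
            return "high" if degraded_mode_available else "very high"
        # Blocks 415/420/425/430: an unaffected tool's severity depends
        # on how often the fault recurred within the preselected period.
        if detections_in_period < detection_threshold:
            return "low"
        return "medium"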
Each entry 510 may include a plurality of fields 520(1-4), where, in the illustrated embodiment, the first field 520(1) includes a fault identifier for the fault that is detected by the fault detection and classification module 125, and the second field 520(2) includes the time and date the fault is detected. In the illustrated embodiment, the third field 520(3) identifies the impact of the detected fault on the processing tool 105, and the fourth field 520(4) indicates the fault severity level assigned to the detected fault. The fault severity level may be determined, for example, using the method illustrated in FIG. 4.
The contents depicted in the fault database 126 of FIG. 5 are exemplary in nature. In the illustrated example, the first entry 510(1) of the fault database 126 indicates that the detected fault having an identifier of "123098" was detected at "11:02:35" on "01/01/02" (see the second field 520(2)), where the fault rendered the processing tool 105 inoperable (see the third field 520(3)) and that the fault has been assigned a "very high" fault severity level (see the fourth field 520(4)). As an additional example, the second entry 510(2) of the fault database 126 indicates that a fault having an identifier of "321890" was detected at "1:03:05" on "01/02/02," and that the fault is an alert and has been assigned a "high" fault severity level. In the illustrated example, the alert identified in the second entry 510(2) may have a "high" fault severity level associated with it because the fault detection and classification module 125 may have detected multiple occurrences of the fault in the fault database 126 within the preselected time period. The remaining entries 510(3-5) contain additional data for illustrative purposes. It should be appreciated that the fault database 126 may employ additional or fewer fields 520(1-4) in other embodiments, depending on the implementation needs.
Referring now to FIG. 6, the database 128 with exemplary contents stored therein is illustrated. The database 128, in the illustrated embodiment, includes a plurality of entries 610(1-4), with each entry 610 having a plurality of associated fields 620(1-2). In the disclosed embodiment, each entry 610 includes a name of the recipient that is identified in the first field 620(1), and notification conditions associated with that recipient that are identified in the second field 620(2).
As mentioned, the fault notification module 127, based on the contents of the database 128, notifies the appropriate recipients of the detected fault based on the associated fault severity level according to the notification conditions associated with the recipients. Thus, for example, with respect to the first entry 610(1), the fault notification module 127 notifies Recipient#1 of detected faults having a "high" or greater associated fault severity level. Recipient#2 of the second entry 610(2), for example, is configured to receive notifications of all faults, regardless of the associated fault severity level of those faults. Similarly, other recipients identified in the first field 620(1) of the remaining entries 610(3-4) receive fault notifications according to the notification conditions specified in the second field 620(2) of the respective entries 610(3-4). By way of example, a fault that is detected and assigned a "high" fault severity level is transmitted, according to the illustrated database 128 of FIG.
6, to Recipient#1, Recipient#2, and Recipient#4, but not to Recipient#3 (who is configured to receive only faults that have a "very high" fault severity level).
As shown, each recipient identified in the first field 620(1) may have a different notification condition 620(2) associated therewith. These variations in the notification conditions may be desired to account for the variations in job responsibilities assigned to each recipient. For example, a system engineer directly responsible for the operation of the processing tools 105(1-n) in the manufacturing system 100 may wish to be informed of all faults, regardless of the associated fault severity level. In contrast, a supervisor of the system engineer may not wish to be informed of all the faults, but rather be notified of only the faults with a "high" fault severity level. Extending this example one step further, a senior supervisor may wish to be notified of only the faults with a "very high" fault severity level, and so forth.
Referring now to FIG. 7, a flow diagram of a method that may be employed in the fault detection and notification unit 122 is illustrated, in accordance with one embodiment of the present invention. In particular, and as described below, the flow diagram of FIG. 7 illustrates a method of modifying the fault severity level of a detected fault. The fault detection and classification module 125 determines (at 710) if a detected fault has been resolved (i.e., cured since the last notification was sent). In one embodiment, the fault detection and classification module 125 may determine whether the fault has been resolved based on an entry stored in the fault detection and notification unit 122, for example. That is, the administrator, in one embodiment, may update the entry to reflect that the fault has been resolved. The fault detection and classification module 125 may periodically check the updateable entry to determine if the fault has been resolved.
If it is determined (at 710) that the fault has been resolved, then the fault detection and classification module 125 continues (at 720) with normal (or other) operations. If, however, it is determined (at 710) that the fault has not been resolved, then the fault detection and classification module 125 determines (at 730) the amount of time that has elapsed since the one or more recipients were last notified of the fault. If it is determined (at 740) that the elapsed time is less than a preselected threshold time value, then the fault detection and classification module 125 continues (at 720) with normal (or other) operations. If, however, it is determined (at 740) that the elapsed time is greater than the preselected threshold time value, then the fault detection and classification module 125 increases (at 750) the severity level of the fault by incrementing the existing fault severity level stored in the field 520(4) of the fault database 126 by one. Thus, in accordance with one embodiment of the present invention, the fault severity level associated with a detected fault may increase over time if the fault is not addressed within the time defined by the preselected threshold time value. And, as the severity level of the fault increases, different recipients may start receiving the fault notifications because the higher fault severity level may satisfy the notification condition (defined in the second field 620(2) of the database 128 of FIG. 6) that is associated with the recipients listed in the first field 620(1) of the database 128 of FIG. 6.
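Taken together, the notification conditions of FIG. 6 and the escalation flow of FIG. 7 can be sketched as follows. The recipient table and the sixty-minute threshold are illustrative assumptions mirroring, not reproducing, the database 128 example.

    SEVERITY_SCALE = ["low", "medium", "high", "very high"]

    # Hypothetical mirror of database 128: each recipient is notified of
    # faults at or above a minimum severity level.
    RECIPIENTS = {
        "Recipient#1": "high",
        "Recipient#2": "low",        # receives all faults
        "Recipient#3": "very high",
        "Recipient#4": "high",
    }

    def escalate_if_stale(fault, elapsed_minutes, threshold_minutes=60.0):
        """Raise an unresolved fault's severity one level once the time
        since the last notification exceeds the threshold (FIG. 7)."""
        if fault.get("resolved") or elapsed_minutes <= threshold_minutes:
            return
        index = SEVERITY_SCALE.index(fault["severity"])
        fault["severity"] = SEVERITY_SCALE[min(index + 1,
                                               len(SEVERITY_SCALE) - 1)]

    def recipients_for(severity):
        """Select recipients whose notification condition is met."""
        rank = SEVERITY_SCALE.index(severity)
        return [name for name, minimum in RECIPIENTS.items()
                if rank >= SEVERITY_SCALE.index(minimum)]

Under this sketch, escalating a "high" fault to "very high" adds Recipient#3 to the list returned by recipients_for, matching the behavior described above where new recipients begin receiving notifications as severity increases.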
In one embodiment, the fault detection and classification module 125 may retransmit a notification to the intended recipient without increasing the severity level of the fault.
FIG. 8 illustrates exemplary fault-related information displayed on the communications device 145 of the recipient. In particular, FIG. 8 illustrates an exemplary web page 802, which is shown in a browser window 805 that may be shown on a display device of the communications device 145. As can be seen, the fault-related information illustrates that the first processing tool 105(1), located at fab#1, experienced a fault (123098) at 9:02 a.m. on Jan. 2, 2002, and, as a result of the detected fault, the first chamber of the processing tool 105(1) is inoperable. The fault-related information also indicates that the fault severity level of the detected fault is "very high" and that Recipient#2 and Recipient#3 have also been notified of this fault. In one embodiment, the other intended recipients of the fault-related information may be identified based on the contents of the database 128. In FIG. 8, by way of example, the fault-related information includes a variety of useful information. Of course, in other embodiments, the fault-related information may include additional or less information, depending on the implementation.
One or more embodiments of the present invention allow the fault detection and notification unit 122 to notify selected recipients based on the severity level of the fault. Furthermore, as described above, under some conditions, the severity level of the faults may be changed, which in turn may change the potential recipients who are notified of the fault. Accordingly, notifying recipients based on the severity level of the fault enables the fault detection and notification unit 122 to reach the appropriate personnel so that the more urgent faults, for example, may be attended to in a timely manner.
The various system layers, routines, or modules may be executable by the control units 121, 123 (see FIG. 1). As utilized herein, the term "control unit" may include a microprocessor, a microcontroller, a digital signal processor, a processor card (including one or more microprocessors or controllers), or other control or computing devices. The storage unit 124 (see FIG. 1) referred to in this discussion may include one or more machine-readable storage media for storing data and instructions. The storage media may include different forms of memory, including semiconductor memory devices such as dynamic or static random access memories (DRAMs or SRAMs), erasable and programmable read-only memories (EPROMs), electrically erasable and programmable read-only memories (EEPROMs) and flash memories; magnetic disks such as fixed, floppy, and removable disks; other magnetic media including tape; and optical media such as compact disks (CDs) or digital video disks (DVDs). Instructions that make up the various software layers, routines, or modules in the various systems may be stored in respective storage devices. The instructions, when executed by a respective control unit, cause the corresponding system to perform programmed acts.
The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below.
It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
Three-dimensional memory cells and methods of making and using the memory cells are discussed generally herein. In one or more embodiments, a three-dimensional vertical memory can include a memory stack. Such a memory stack can include memory cells and a dielectric between adjacent memory cells, each memory cell including a control gate and a charge storage structure. Each memory cell can further include a barrier material between the charge storage structure and the control gate, the charge storage structure and the barrier material having a substantially equal dimension. |
Claims
What is claimed is: 1. A vertical memory comprising: a stack of memory cells, a cell of the stack comprising: a control gate; a charge storage structure having a dimension; and a barrier film between the charge storage structure and the control gate, wherein the barrier film has a dimension corresponding to the dimension of the charge storage structure, and wherein the dimension of the charge storage structure is substantially equal to or greater than the dimension of the barrier film. 2. The memory of claim 1, wherein the barrier film has a face and the charge storage structure has a face opposing the face of the barrier film and substantially parallel to the face of the barrier film, wherein each part of the face of the barrier film is separated from the face of the charge storage structure by a substantially equal distance. 3. The memory of claim 1, wherein the charge storage structure has a substantially planar side facing the barrier film, the control gate has a substantially planar side facing the barrier film, and the barrier film has a first substantially planar side facing and substantially parallel to the substantially planar side of the charge storage structure and a second substantially planar side facing and substantially parallel to the substantially planar side of the control gate. 4. The memory of claim 1, wherein the dimension of the charge storage structure substantially equal to or greater than the dimension of the barrier film comprises the dimension of the charge storage structure being substantially equal to the dimension of the barrier film. 5. The memory of claim 1, further comprising a pillar adjacent to the charge storage structure and wherein the dielectric is also between the pillar and the charge storage structure. 6. The memory of claim 5, wherein the pillar comprises polysilicon, the charge storage structure comprises polysilicon, the dielectric comprises oxide, and the barrier film comprises nitride. 7. The memory of claim 1, wherein the stack of memory cells comprises a NAND string of memory cells. 8. The memory of claim 1, wherein the barrier film is entirely between a plane corresponding to a side of the charge storage structure and a plane corresponding to a side of the control gate opposing the side of the charge storage structure. 9. The memory of claim 1, wherein the charge storage structure and the barrier film are formed in a control gate recess adjacent to the control gate. 10. A vertical stack of memory cells comprising a vertical pillar, wherein a cell of the stack comprises: a charge storage structure adjacent to the pillar along a dimension; a dielectric and a barrier film adjacent to the charge storage structure along the dimension; and a control gate adjacent to the dielectric and barrier film along the dimension, wherein the barrier film of the memory cell has a substantially uniform thickness across the entire dimension. 11. The stack of claim 10, wherein the charge storage structure is substantially rectangular. 12. The stack of claim 10, wherein the control gate comprises doped polysilicon. 13. The stack of claim 10, wherein the pillar comprises polysilicon, the charge storage structure comprises polysilicon, the dielectric comprises oxide, and the barrier film comprises nitride. 14. The stack of claim 10, wherein the stack comprises a NAND string of memory cells. 15. The stack of claim 10, wherein the dielectric surrounds the charge storage structure and the barrier film. 16.
The stack of claim 10, wherein the charge storage structure and the barrier film are formed in a control gate recess. 17. A vertical stack of memory cells, wherein a cell of the stack comprises: a charge storage structure having a dimension; and a control gate having a dimension corresponding to the dimension of the charge storage structure, wherein the dimension of the control gate and the corresponding dimension of the charge storage structure are substantially equal. 18. The stack of claim 17, wherein the cell further comprises a dielectric and a barrier film between the charge storage structure and the control gate, wherein the dimension of the control gate is substantially equal to a corresponding dimension of the barrier film. 19. The stack of claim 18, wherein the barrier film is substantially rectangular. 20. The stack of claim 19, wherein, in a vertical cross-section of the memory cell, a surface area of the barrier film of the cell is less than a surface area of the charge storage structure of the cell. 21. The stack of claim 18, wherein the charge storage structure comprises polysilicon, the control gate comprises polysilicon, and the barrier film comprises nitride. 22. The stack of claim 18, wherein the dielectric is between the charge storage structure and the barrier film, and the dielectric is between the control gate and the barrier film. 23. The stack of claim 18, wherein the dielectric surrounds the charge storage structure and the barrier film. 24. The stack of claim 18, wherein the charge storage structure and the barrier film are formed, at least partially, in a control gate recess adjacent to the control gate and between tier dielectric layers that separate the cell from adjacent cells of the stack. 25. A vertical memory array comprising: a plurality of vertical memory strings, wherein a string of the plurality comprises: a vertical pillar; and at least two tier dielectric layers; and a memory cell between two adjacent tier dielectric layers of the at least two tier dielectric layers comprising: a charge storage structure having a dimension; a control gate; a dielectric layer between the charge storage structure and the vertical pillar; and a barrier film between the charge storage structure and the control gate, the barrier film having a dimension corresponding to the dimension of the charge storage structure, the dimension of the barrier film and the dimension of the charge storage structure being substantially equal. 26. The memory array of claim 25, wherein the barrier film has a face and the charge storage structure has a face opposing the face of the barrier film and substantially parallel to the face of the barrier film, wherein each part of the face of the barrier film is separated from the face of the charge storage structure by a substantially equal distance. 27. The memory array of claim 25, wherein the charge storage structure has a planar side facing the barrier film, the control gate has a planar side facing the barrier film, and the barrier film has a first planar side facing and substantially parallel to the planar side of the charge storage structure and a second planar side facing and substantially parallel to the planar side of the control gate. 28. The memory array of claim 25, wherein the control gate has a dimension corresponding to the dimension of the charge storage structure and the dimension of the control gate is substantially equal to the dimension of the charge storage structure. 29.
29. The memory array of claim 25, wherein the pillar comprises polysilicon, the charge storage structure comprises polysilicon, the control gate comprises polysilicon, and the barrier film comprises nitride.
30. The memory array of claim 25, wherein the memory strings are NAND memory strings.
31. The memory array of claim 25, wherein the control gate has a dimension corresponding to the dimension of the charge storage structure and wherein the dimension of the control gate is greater than the corresponding dimension of the charge storage structure.
32. The memory array of claim 25, wherein the charge storage structure and the barrier film are formed in a control gate recess between the adjacent tier dielectric layers and adjacent to the control gate.
33. A method of forming a memory stack, the method comprising: forming a plurality of control gates and control gate recesses between tier dielectric layers; forming a first layer of dielectric material on the plurality of control gates in the control gate recesses; forming barrier material in the control gate recesses and on the first layer of dielectric material; removing portions of the barrier material to form barrier films adjacent to the control gates; forming a second layer of dielectric material on the barrier films; forming charge storage structure material on the second layer of dielectric material; and removing portions of the charge storage structure material to form charge storage structures, each of the charge storage structures having a dimension that is substantially equal to a corresponding dimension of a respective one of the barrier films.
34. The method of claim 33, further comprising: before removing the portions of the barrier material, forming sacrificial material on the barrier material and removing portions of the sacrificial material; and before forming the second layer of dielectric material, removing the remaining sacrificial material.
35. The method of claim 33, wherein removing portions of the barrier material to form the barrier films includes removing the portions of the barrier material to form each of the barrier films to have a dimension that is substantially equal to a corresponding dimension of a respective one of the control gates.
36. The method of claim 33, wherein forming the plurality of control gates comprises forming a plurality of polysilicon control gates.
37. The method of claim 33, wherein forming barrier material comprises forming nitride.
38. The method of claim 33, wherein forming charge storage material comprises forming polysilicon.
39. The method of claim 33, wherein forming the memory stack includes forming a NAND memory stack.
40. A method of forming a memory stack, the method comprising: forming a plurality of control gates and control gate recesses between tier dielectric layers; forming a first layer of dielectric material on the plurality of control gates in the control gate recesses; forming barrier material in the control gate recesses and on the first layer of dielectric material; forming a second layer of dielectric material on the barrier material; forming charge storage structure material on the second layer of dielectric material; removing portions of the charge storage structure material to form charge storage structures, each of the charge storage structures having a dimension that is substantially equal to a corresponding dimension of a respective one of the barrier films; removing portions of the barrier material to form barrier films adjacent to the control gates; and forming a third layer of dielectric material on exposed surfaces of the plurality of control gate recesses.
41. The method of claim 40, further comprising: before removing the portions of the barrier material, forming sacrificial material on the barrier material and removing portions of the sacrificial material; and before forming the second layer of dielectric material, removing the remaining sacrificial material.
42. The method of claim 40, wherein removing portions of the barrier material to form the barrier films includes removing the portions of the barrier material to form each of the barrier films to have a dimension that is substantially equal to a corresponding dimension of a respective one of the control gates.
43. The method of claim 40, wherein forming the plurality of control gates comprises forming a plurality of polysilicon control gates.
44. The method of claim 40, wherein forming barrier material comprises forming nitride.
45. The method of claim 40, wherein forming charge storage material comprises forming polysilicon.
46. The method of claim 40, wherein forming the memory stack includes forming a NAND memory stack.
47. The method of claim 40, wherein removing portions of the barrier material includes converting portions of the barrier material to dielectric through an in situ steam generation process; and the method further comprises etching dielectric material covering the barrier material. |
3D MEMORY
Priority Application
[0001] This application claims the benefit of priority to U.S. Application 13/748,747, filed 24 January 2013, which is incorporated herein by reference in its entirety.
Background
[0002] Some memory cells can include a floating gate and a nitride wrapped around three sides of the floating gate. Undesired charges may become trapped in the nitride, particularly in portions of the nitride that are not directly between the control gate and the floating gate. The threshold voltage (Vt) of a cell may be altered by the undesired charges trapped in the nitride.
Brief Description of the Drawings
[0003] FIG. 1 illustrates a cross-section view of an example of a memory cell with an inter-gate dielectric (IGD) partially wrapped around a floating gate.
[0004] FIG. 2 illustrates a cross-section view of an example of a memory cell.
[0005] FIG. 3 illustrates a cross-section view of an example of a memory cell.
[0006] FIG. 4 illustrates, by way of example, a graph of control gate bias voltage vs. pillar current in different memory cells.
[0007] FIGS. 5A-G illustrate an example of a technique of making a vertical memory.
[0008] FIGS. 6A-K illustrate another example of another technique of making a vertical memory.
[0009] FIGS. 7A-D illustrate another example of a technique of making a vertical memory.
[0010] FIGS. 8A-F illustrate other examples of techniques of making a vertical memory.
[0011] FIG. 9 illustrates a cross-section view of an example of a vertical memory.
[0012] FIGS. 10A-B illustrate an example of a technique of making a vertical memory.
[0013] FIG. 11 illustrates an example of a memory array.
Description of the Embodiments
[0014] The following detailed description refers to the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter.
[0015] The term "horizontal" as used in this application is defined as a plane parallel to the conventional plane or surface of a wafer, such as a substrate, regardless of the actual orientation of the wafer or substrate. The term "vertical" refers to a direction perpendicular to the horizontal as defined above. Prepositions, such as "on", "side", "higher", "lower", "over" and "under", are defined with respect to the conventional plane or surface being on the top surface of the wafer or substrate, regardless of the actual orientation of the wafer or substrate. The terms "wafer" and "substrate" are used herein to refer generally to any structure on which integrated circuits are formed, and also to such structures during various stages of integrated circuit fabrication. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.
[0016] Generally discussed herein are three-dimensional (3D) memories, memory cells, and methods of making and using the same. In one or more embodiments, a 3D vertical memory can include a memory stack. A memory stack can include a stack of at least two memory cells and a dielectric between adjacent memory cells, where each memory cell includes a control gate (CG) and a charge storage structure, such as a floating gate (FG) or charge trap (CT), configured to store electrons or holes accumulated on it.
Information is represented by the amount of electrons or holes stored by the cell. The memory stack can further include a barrier material, such as nitride in an inter-gate dielectric (IGD) comprising a composite of oxide-nitride-oxide ("ONO"), where the IGD can be between the charge storage structure and the CG. The barrier material and the charge storage structure can be laterally positioned adjacent, horizontally aligned to each other, or have substantially equal heights.
[0017] A NAND array architecture is an array of memory cells arranged such that the memory cells of the array are coupled in logical rows to access lines (which are coupled to, and in some cases are at least partially formed by, the CGs of the memory cells), which are conventionally referred to as word lines. Some memory cells of the array are coupled together in series between a source line and a data line, which is conventionally referred to as a bit line.
[0018] Memory cells in NAND array architecture can be programmed to a desired data state. For example, electric charge can be accumulated (e.g., placed) on, or removed from, an FG of a memory cell to program the cell into a desired one of a number of data states. A memory cell conventionally referred to as a single level cell (SLC) can be programmed to a desired one of two data states, e.g., a "1" or a "0" state. Memory cells conventionally referred to as multilevel cells (MLCs) can be programmed to a desired one of more than two data states.
[0019] When electrons are stored on the FG, they modify the Vt of the cell. Thus, when the cell is "read" by placing a specific voltage on the CG (e.g., by driving the access line coupled to the cell with a read voltage), electrical current will either flow or not flow in the cell's channel depending on the Vt of the cell and the specific voltage placed on the CG. This presence or absence of current can be sensed and translated into 1's and 0's, reproducing the stored data.
[0020] Each memory cell may not directly couple to a source line and a data line. Instead, the memory cells of an example array may be arranged together in strings, typically of 4, 8, 16, 32, or more cells each, where the memory cells in the string are coupled together in series between a common source line and a data line.
[0021] A NAND array can be accessed by a row decoder activating a row of memory cells by driving the access line coupled to those cells with a voltage. In addition, the access lines coupled to the unselected memory cells of each string can be driven with a different voltage. For example, the unselected memory cells of each string can be driven with a pass voltage so as to operate them as pass transistors, allowing them to pass current in a manner that is unrestricted by their programmed data states. Current can then flow from the source line to the data line through each memory cell of the series-coupled string, restricted by the memory cell of each string that is selected to be read. This places the currently encoded, stored data values of the row of selected memory cells on the data lines. A page of data lines is selected and sensed, and then individual data words can be selected from the sensed data words from the page and communicated from the memory apparatus.
[0022] The flash memory, such as a NAND array, may be formed as a 3D memory with stacks of more than one memory cell. The CGs for the memory cells may be adjacent to CG recesses.
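The read mechanism described in paragraphs [0019] and [0021] can be illustrated with a short sketch. This is a minimal model, not code from the patent: the Vt values, the read and pass voltages, and the string layout are hypothetical numbers chosen only to show the sensing logic.

```python
# Minimal model of reading one cell in a NAND string (hypothetical values).
# A cell conducts when its control-gate voltage exceeds its threshold
# voltage (Vt). The selected cell sees V_READ; every other cell in the
# string sees V_PASS, chosen above any programmed Vt so those cells act
# as pass transistors regardless of their stored data.

V_READ = 0.0   # volts on the selected access line (hypothetical)
V_PASS = 6.0   # volts on unselected access lines (hypothetical)

def string_conducts(vts, selected):
    """Return True if current can flow through the whole series string."""
    return all(
        (V_READ if i == selected else V_PASS) > vt
        for i, vt in enumerate(vts)
    )

# SLC convention: an erased cell (negative Vt) stores '1', a programmed
# cell (positive Vt) stores '0'.
vts = [-2.0, 1.5, -2.0, 1.5]  # hypothetical Vt of a 4-cell string
for cell in range(len(vts)):
    bit = 1 if string_conducts(vts, cell) else 0
    print(f"cell {cell}: sensed bit {bit}")
```

An MLC read would repeat the same sensing with several read voltages to resolve more than two Vt ranges per cell.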
[0023] FIG. 1 shows an example of a memory cell 100 from a stack of memory cells within a 3D memory that can include a charge storage structure, such as FG 102A, a dielectric (e.g., oxide) 108, a barrier film (e.g., nitride) 104A, a CG 106, and a pillar 110. In the illustrated example, the barrier film 104A is between the FG 102A and the CG 106. The barrier film 104A can be substantially rectilinear as generally illustrated, but may not be substantially rectangular. Charge can get trapped on portions of the barrier film 104A, such as on portions of the barrier film 104A that do not directly separate the FG 102A and the CG 106.
[0024] FIG. 2 shows a cross-section view of an example of a vertical memory cell 200. The memory cell 200 can include an FG 102B, a dielectric 108, a barrier film 104B, and a CG 106. The vertical memory cell 200 can be used in a NAND string, NOR string, or other type of string. The barrier film 104B can be substantially rectangular, as illustrated in FIG. 2.
[0025] FIG. 3 shows a cross-section view of an example of a memory cell 300, such as a vertical memory cell, that can include an FG 102B, a barrier film 104B, a CG 106, a dielectric 108, and a semiconductor pillar 110. The FG 102B can be made of a semiconductor material, such as conductively doped polysilicon. The FG 102B can have a first dimension 312A (e.g., height) that is substantially equal to a first dimension 312B of the barrier film 104B (e.g., within one or two times a standard variation in a fabrication process used to make the memory cell), such as shown in FIG. 3. The first dimension 312A of the FG 102B could also be greater than the first dimension 312B of the barrier film 104B. The FG 102B can have a second dimension (e.g., length) 314A perpendicular to the first dimension 312A that is greater than the second dimension 314B of the barrier film 104B through the entire first dimension 312A of the FG 102B, such as shown in FIG. 2. The first dimension 312A of the FG 102B can be smaller than the first dimension 312C of the CG 106 or substantially equal to the first dimension 312C of the CG 106. The second dimension 314C of the CG 106 can be greater than the second dimension 314A of the FG 102B through the entire first dimension 312A of the FG 102B. The CG 106, oxide 108, FG 102, or barrier film 104 can be deposited using a plasma enhanced chemical vapor deposition (PECVD) process.
[0026] The barrier film 104B can include a second dimension 314B that is substantially equal through its first dimension 312B (e.g., the barrier film 104B can include a substantially uniform thickness across its entire first dimension 312B), such as shown in FIG. 3. The barrier film 104B can be substantially rectangular in a vertical cross-section of the vertical memory cell 300, such as shown in FIG. 3. The barrier film 104B can include a surface area (e.g., second dimension 314B times first dimension 312B) that is less than a surface area of the FG 102B (e.g., second dimension 314A times first dimension 312A), such as shown in FIG. 3. The barrier film 104B can be entirely between a plane 316A corresponding to a side of the FG 102B and a plane 316B corresponding to a side of the CG 106 opposing the side of the FG 102B, such as shown in FIG. 3. The barrier film 104B can be adjacent to only one side of the FG 102B, such as shown in FIG. 3.
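The dimensional relationships of paragraphs [0025] and [0026] can be collected into a small consistency check. The numbers below are hypothetical (they do not come from the figures); only the inequalities mirror the text:

```python
# Hypothetical cell dimensions (arbitrary units) checked against the
# relationships described for FIG. 3: heights (first dimension) and
# lengths (second dimension) of the FG, the barrier film, and the CG.

fg   = {"height": 30.0, "length": 40.0}  # FG 102B (hypothetical)
film = {"height": 30.0, "length": 8.0}   # barrier film 104B (hypothetical)
cg   = {"height": 32.0, "length": 60.0}  # CG 106 (hypothetical)

TOL = 0.05  # "substantially equal" tolerance, e.g. process variation

def about_equal(a, b, tol=TOL):
    return abs(a - b) <= tol * max(a, b)

# FG height substantially equal to, or greater than, the film height.
assert about_equal(fg["height"], film["height"]) or fg["height"] > film["height"]
# FG length greater than the film length across the FG height.
assert fg["length"] > film["length"]
# CG length greater than FG length; FG height not greater than CG height.
assert cg["length"] > fg["length"] and fg["height"] <= cg["height"]
# Barrier film cross-sectional area less than the FG cross-sectional area.
assert film["height"] * film["length"] < fg["height"] * fg["length"]
print("dimensional relationships hold for this example")
```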
[0027] The barrier film 104B can include a face and the FG 102B can have a face, such as the face corresponding to the plane 316A, opposing the face of the barrier film 104B and substantially parallel to the face of the barrier film 104B. Each part of the face of the barrier film 104B can be separated from the face of the floating gate 102B by a substantially equal distance, such as shown in FIG. 3.
[0028] The FG 102B can have a planar side (e.g., the side corresponding to the plane 316A) facing the barrier film 104B. The CG 106 can have a planar side (e.g., the side corresponding to the plane 316B) facing the barrier film 104B. The barrier film 104B can have a first planar side facing and substantially parallel to the planar side of the FG 102B and a second planar side facing and substantially parallel to the planar side of the CG 106. The first dimension 312C of the CG 106 can be substantially equal to the corresponding first dimension 312B of the barrier film 104B, such as shown in FIG. 3.
[0029] FIG. 4 shows an example of a graph of CG bias vs. pillar current. Line 418 is an example of a CG bias vs. pillar current in a memory cell that includes a barrier film 104 such as the barrier film 104B shown in FIG. 2. Line 420 is an example of a CG bias vs. pillar current in a cell that includes a barrier film 104 adjacent to the FG 102 on three sides, such as shown in FIG. 1. For the same pillar current, the CG 106 bias for line 418 can be less than the CG 106 bias for line 420. For example, as illustrated in FIG. 4, the bias voltage difference may be about 2.9 Volts. Other voltage differences can be realized. For example, the bias voltage difference may be up to about 7 Volts. The voltage differences can be a function of how much charge is trapped on the barrier film 104 or the alignment of the FG 102 to the CG 106. For example, the lower CG bias can be due, at least in part, to a reduction in the amount of charge trapped on the barrier film 104B as compared to the charge trapped on the barrier film 104A. Also, the lower CG bias can be due, at least in part, to the alignment between the FG 102B and the CG 106.
[0030] As used herein, "vertical memory string" can mean a "vertical memory stack" (e.g., alternating CG 106 and tier dielectric 524 layers with CG recesses 530 between tier dielectric 524 layers) with a CG recess 530 filled in with dielectric 108, an FG 102, and barrier film 104, and including a pillar 110 (e.g., a filled trench 528, such as a trench filled with polysilicon). Also, the term "vertical memory" can be used to indicate a final form.
[0031] FIGS. 5A-G show an example of a technique of making a vertical memory 500 with a planar barrier film 104. FIG. 5A shows a first CG 106A-B over a substrate 522, a first tier dielectric 524A-B over the first CG 106A-B, a second CG 106C-D over the first tier dielectric 524A-B, a second tier dielectric 524C-D over the second CG 106C-D, and a mask material (e.g., dielectric, such as oxide, nitride, or polysilicon) 526 over the second tier dielectric 524C-D. The vertical memory 500 can include a trench 528 and a plurality of CG recesses 530. A first layer of dielectric 108, such as oxide, can be formed on the sidewalls of the trench 528 and on exposed surfaces of the CGs 106 in the CG recesses 530, such as shown in FIG. 5A. The CG recesses 530 can be gaps between tier dielectric layers 524 adjacent to the CGs 106 formed between the tier dielectric layers 524.
[0032] The trench 528 and the CG recesses 530 can be at least partially filled with a barrier material 532, such as shown in FIG. 5B. For example, the barrier material 532 can be nitride. The barrier material 532 may be deposited or otherwise formed in the trench 528 and CG recesses 530.
The barrier material 532 can be partially removed, such as by using a mechanical, chemical, laser, vapor, or photo etching process. The barrier material 532 can be partially removed from the trench 528 and CG recesses 530 to leave at least some of the barrier material 532 in the CG recesses 530 to form barrier films 104 adjacent to the CGs 106, such as shown in FIG. 5C. The portions of the barrier material 532 removed can be removed using hot phosphoric acid. The size or shape of the barrier material 532 remaining after the process can be controlled by using hot phosphoric acid at different temperatures or concentrations, or by exposing the barrier material 532 to the hot phosphoric acid for varying amounts of time.
[0033] A second layer of dielectric 108 (which may or may not be the same dielectric material as the first layer) can be formed, such as by growing the dielectric 108 using an in situ steam generation (ISSG) process, on the barrier films 104, such as shown in FIG. 5D. The trench 528 and the CG recesses 530 can be at least partially filled with a charge storage material 534, such as shown in FIG. 5E. The charge storage material 534 may be conductively doped polysilicon. The charge storage material 534 may be deposited to at least partially fill the CG recesses 530. The charge storage material 534 can be at least partially removed, such as shown in FIG. 5F. The charge storage material 534 may be at least partially removed from the trench 528 and CG recesses 530, and remaining portions of the charge storage material 534 may be left in the CG recesses 530, such as to form FGs 102. The portions of charge storage material 534 can be removed using a Certas™ (e.g., vapor ammonia) process, an ammonium fluoride and nitric acid mix (NH4F-HNO3), an ozone (O3) or hydrofluoric acid (HF) mix or cycle (e.g., exposed surfaces can be exposed to ozone to oxidize the surface, and the oxidized surface can be exposed to hydrofluoric acid to remove the oxide), a hydrofluoric acid and nitric acid mix (HF-HNO3), a hydrofluoric acid and hydrogen peroxide mix (HF-H2O2), or a tetramethyl ammonium hydroxide (TMAH) process. The process used to remove portions of charge storage material 534 can be a function of the doping of the charge storage material 534. For example, if the charge storage material 534 is n-type polysilicon, the TMAH process can be used to remove the portions of charge storage material 534.
[0034] A third layer of dielectric 108, such as a tunnel oxide, can be formed (e.g., grown) on the FGs 102, and a pillar 110 can be formed in the trench 528, such as shown in FIG. 5G. Forming a pillar 110 can include forming a liner, such as a polysilicon liner, on exposed surfaces of the trench 528, such as the sidewalls of the trench 528. The liner can protect or shield the dielectric 108 from a downstream process. The dielectric 108 (e.g., poly liner) in the bottom of the trench 528 can be punched through or otherwise removed, such as to allow electrical contact to the substrate 522 or channel 1138 (see FIG. 11). As shown in FIG. 5G, the pillar 110 can be formed to at least partially fill the trench 528. The vertical memory 500 formed by the technique can include a memory cell substantially similar to the vertical memory cell 300 shown in FIG. 3, with the first dimension 312A of the FG 102 and the first dimension 312B of the barrier film 104 less than the first dimension 312C of the CG 106.
FIG. 5G shows a vertical memory 500 with two vertical memory strings, each vertical memory string including two memory cells.
[0035] FIGS. 6A-K show an example of a technique of making a vertical memory 600. The vertical memory 600 in FIG. 6A can be substantially similar to the vertical memory 500 shown in FIG. 5A without the dielectric 108. A layer of dielectric 108 can be formed on the sidewalls of the trench 528 and on exposed surfaces of the CGs 106 adjacent to the recesses 530. As shown in FIG. 6B, portions of the dielectric 108 can be removed, such as by using hydrofluoric acid, from the sidewalls of the trench 528 and portions of exposed surfaces of the CG recesses 530. Alternatively, the dielectric 108 can be grown on exposed portions of the CG 106, such as through an in situ steam generation (ISSG) process. Such a technique can leave a dielectric 108 adjacent to the CG 106 in a respective CG recess 530 that has a dimension (e.g., height) that is substantially equal to a corresponding dimension (e.g., height) of the CG 106. The trench 528 and the CG recesses 530 can be at least partially filled with a barrier material 532 to provide barrier material 532 on exposed surfaces of the CG recesses 530 and sidewalls of the trench 528, such as shown in FIG. 6C.
[0036] The trench 528 and the CG recesses 530 can be at least partially filled with a sacrificial material 636. As shown in FIG. 6D, the sacrificial material 636 can be deposited or otherwise formed on the barrier material 532 in the trench 528 and CG recesses 530. The sacrificial material 636 can be deposited using an atomic layer deposition (ALD) process, a high aspect ratio process (HARP), or other process. The sacrificial material 636 can be a polysilicon, oxide, tetraethyl orthosilicate (TEOS), an organic, such as carbon bottom anti-reflective coating (BARC) or resist, nitride, doped versions thereof, or combinations thereof. A sacrificial material 636 can be useful in techniques where a downstream process, such as phosphoric acid barrier material removal, can damage the material that would otherwise become the FG 102 if the sacrificial material 636 were not used. The sacrificial material 636 can be at least partially removed from the trench 528, leaving some sacrificial material 636 in the CG recesses 530, such as shown in FIG. 6E. When the sacrificial material 636 comprises polysilicon, a TMAH, ammonia (NH4OH), or vapor ammonia process can be used to at least partially remove the sacrificial material 636. When the sacrificial material 636 comprises an oxide or nitride deposited by means of an ALD or other process, hydrofluoric acid or hot phosphoric acid can be used to at least partially remove the sacrificial material 636. When the sacrificial material 636 comprises TEOS or a HARP material, hydrofluoric acid can be used to at least partially remove the sacrificial material 636. When the sacrificial material 636 comprises BARC or resist, an anisotropic dry etch or plasma dry strip (e.g., "descum") can be used to at least partially remove the sacrificial material 636.
[0037] The barrier material 532 can be etched to at least partially remove the barrier material 532 from the trench 528 and the CG recesses 530. As shown in FIG. 6F, the etching can form a barrier film 104 adjacent to the dielectric 108 in a respective CG recess 530 that has a dimension (e.g., height) that is substantially equal to a corresponding dimension (e.g., height) of the CG 106 adjacent to that recess 530.
The sacrificial material 636 can be resistant to a removal process such as to be protected from the removal process. The removal process can include a chemical etch that includes a chemical, such as hot phosphoric acid, that selectively removes portions of the barrier material 532 and does not remove the dielectric 108 or other portions of the vertical memory 600. The sacrificial material 636 can be removed, such as shown in FIG. 6G.
[0038] A second layer of dielectric 108 can be grown on exposed surfaces of the barrier films 104, such as shown in FIG. 6H. The grown dielectric 108 in a respective CG recess 530 can have a dimension (e.g., height) substantially equal to a corresponding dimension (e.g., height) of the CG 106 adjacent to that recess 530.
[0039] The trench 528 and the CG recesses 530 can be at least partially filled with a charge storage material 534, such as shown in FIG. 6I. The trench 528 and the CG recesses 530 can be filled using a conformal deposition process. The charge storage material 534 can be at least partially removed from the trench 528 and CG recesses 530. Some charge storage material 534 can be left in the CG recesses 530. The charge storage material 534 that is left can form FGs 102. The FG 102 in a respective CG recess 530 can have a dimension (e.g., height) that is substantially equal to a corresponding dimension (e.g., height) of a CG 106 adjacent to that CG recess 530, such as shown in FIG. 6J. As shown in FIG. 6K, a third layer of dielectric 108 (which may or may not be the same type of dielectric used in the first and/or second layer) and a pillar 110 can be formed (e.g., grown) in the trench 528. The vertical memory 600 formed by the technique can include a memory cell substantially similar to the vertical memory cell 300 shown in FIG. 3.
[0040] FIGS. 7A-D illustrate another technique of forming a vertical memory 700. The technique can include the process described with regard to FIGS. 6A-C. A vertical memory, such as the vertical memory 600 depicted in FIG. 6C, can have a second layer of dielectric 108 formed on the barrier material 532 in the trench 528 and CG recesses 530. The second layer of dielectric 108 can be at least partially removed, such as shown in FIG. 7A. As shown in FIG. 7B, the trench 528 and the CG recesses 530 can be at least partially filled with a charge storage material 534 (e.g., such that the charge storage material 534 is on the second layer of dielectric 108). The charge storage material 534 can be at least partially removed from the trench 528 to form an FG 102, such as shown in FIG. 7C. As shown in FIG. 7D, the barrier material 532 can be at least partially removed, such as by using hot phosphoric acid, and a third layer of dielectric 108 can be formed on exposed surfaces of the trench 528 and the CG recesses 530. The third layer of dielectric 108, such as high temperature oxide, can be formed using a deposition process. The dielectric 108 can form a tunnel oxide. A pillar 110 can be formed in the trench 528, such as shown in FIG. 5G.
[0041] The vertical memory 600 depicted in FIG. 6C can be filled, such as by using an ALD process. The ALD process can fill the CG recesses 530 and at least partially fill the trench 528 with dielectric 108A, such as shown in FIG. 8A. At least some of the dielectric 108A in the trench 528 can be removed. The dielectric 108A can be left substantially flush with barrier material 532 in the trench 528, such as shown in FIG. 8B.
FIG. 8C shows the vertical memory 800 after barrier material 532 has been removed by converting it to dielectric through an in situ steam generation (ISSG) process. Such a process can remove portions of barrier material 532, such as by converting portions of barrier material 532 to dielectric 108. FIG. 8D shows the vertical memory 800 after the dielectric 108A has been etched back using wet chemistry (e.g., hydrofluoric acid). The dielectric 108 produced from the ISSG process can be etched selectively to the dielectric material 108A in the CG recesses 530. The dielectric 108 on the sidewall (e.g., nitride converted to oxide using an ISSG process) can etch away slower than the other dielectric 108A. An FG 102 can be formed in the CG recess 530 to form a vertical memory 800 including memory cells substantially similar to the memory of FIG. 1. Such a vertical memory can include an FG 102 that includes a larger dimension (e.g., length) that extends into the trench 528 to be flush with the dielectric 108 in the trench 528.
[0042] Alternatively, the vertical memory 800 depicted in FIG. 8C can be etched using hot phosphoric acid. The hot phosphoric acid can etch dielectric 108A and 108 and barrier material 532 to form barrier film 104 in the CG recesses 530, such as shown in FIG. 8E. The dielectric 108 can be more resistant to hot phosphoric acid etching than the dielectric 108A. For example, exposing dielectric 108 to hot phosphoric acid for one minute can remove less dielectric 108 than would be removed by exposing dielectric 108A to the same hot phosphoric acid for the same amount of time. A dielectric 108 can be formed adjacent to the barrier film 104, and an FG 102 can be formed adjacent to the dielectric 108. The resulting structure is depicted in FIG. 8F.
[0043] FIG. 9 shows an example of a vertical memory 900, which can be formed using substantially the same technique as a memory cell corresponding to FIGS. 7A-D. The dielectric 108 that forms the tunnel oxide can be grown. Such growing can include using an ISSG process. Using such a process can convert silicon to oxide, such as to convert some of the FG 102 to oxide. Such a process can round corners of the FG 102 or remove a portion of the FG 102 adjacent to the tier dielectric 524, such as shown in FIG. 9. Such a process can alter the geometry of subsequent material formed on the FG 102, such as dielectric 108 and pillar 110, such as shown in FIG. 9.
[0044] FIGS. 10A-B show an example of a technique of forming a vertical memory 1000. The vertical memory 1000 can include a structure substantially similar to the vertical memory 600 depicted in FIG. 6B. A barrier material 532 can be deposited on sidewalls of the trench 528 and within the CG recesses 530, such as shown in FIG. 10A. The memory cell 1000 can be oxidized, such as by using an ISSG process, to convert portions of the barrier material 532 to a dielectric 108, such as an oxynitride dielectric. An example of the resulting structure is shown in FIG. 10B. The dielectric 108 can be removed and some of the remaining barrier material 532 can be removed, such as to form a barrier film 104, such as shown in FIG. 6G. The remaining portions of the memory cell 1000 can be formed using a technique substantially similar to the technique depicted in FIGS. 6H-K, such as to form a vertical memory 1000 substantially similar to the vertical memory 600 depicted in FIG. 6K.
[0045] FIG. 11 shows an example of a memory array 1100.
In the memory array 1100, memory cells 1142A-C can be electrically coupled through a channel 1138. The channel 1138 can be electrically coupled to one or more data line contacts 1140A-B. Memory cells 1142A-D of the memory array 1100 can be substantially similar to memory cells discussed herein, such as those shown in FIGS. 2, 5G, 6K, 7D, 9, or 10B.
[0046] A problem associated with memory cells that include a barrier film, such as nitride, adjacent to an FG on more than one side can be charges getting trapped in portions of the nitride that do not separate the FG and a CG (e.g., in portions of the nitride that are not directly between the FG and the CG). Also, trapped charge can migrate along the IGD, such as through program, erase, or temperature cycling. Such charge trapping or movement can alter the threshold voltage (Vt) of the memory cell or degrade incremental step pulse programming (ISPP) relative to memory cells that do not have such charge trapping in the nitride.
[0047] Such charge trapping or migration on the nitride can be at least partially eliminated by including nitride adjacent to only one surface of the FG (e.g., by including nitride that is substantially rectangular and not "U" shaped). Such a configuration can include charge being trapped on the FG rather than on the nitride.
[0048] An advantage of one or more embodiments can include reducing the incidence of erase saturation in memory cells. Another advantage can include improved alignment between the FG and CG due to eliminating a source of variation in manufacturing, such as the nitride wrapping in irregular shapes around corners in a CG recess or a tier oxide. Instead, the FG shape and size can be defined by a PECVD process, which can be a substantially uniform stack deposition process.
[0049] Program and erase properties of a memory cell are a function of a gate coupling ratio, which is a function of a capacitance between the FG and the CG of a memory cell. With a wrapped nitride, such as shown in FIG. 1, the capacitance is a function of the distance between the opposing surfaces of the CG 106 and the FG 102A and the distances between the top and bottom surfaces of the FG and the nitride adjacent thereto, such as shown by the arrows in FIG. 1. With a memory cell 200 that includes a planar barrier film 104B, such as shown in FIG. 2, the capacitance created between the IGD and the FG can be reduced or eliminated, such as to make the capacitance a function of the distance between a surface of the FG 102B and an opposing surface of the CG 106. Such a configuration can reduce the sources of variation in the gate coupling ratio, such as to improve the uniformity in memory cell program and erase performance. A device with improved FG to CG alignment can exhibit improved Vg-Vt characteristics. Another advantage can include reducing ISPP degradation issues or maintaining a sufficiently low Vt, such as by reducing the Vt shift caused by cycling by reducing the charge trapped on the nitride.
[0050] Another advantage can include an increased channel length to memory cell first dimension ratio; such a configuration can increase the reliability of the respective memory cell.
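The gate coupling ratio of paragraph [0049] and the trapped-charge Vt shift of paragraph [0046] can be made concrete with a parallel-plate sketch. All values below are hypothetical textbook approximations, not measurements from the patent:

```python
# Parallel-plate approximation of FG-to-CG coupling (hypothetical values).
# Gate coupling ratio: GCR = C_igd / (C_igd + C_tunnel). A charge Q stored
# on (or trapped near) the FG shifts the read threshold by roughly
# dVt = -Q / C_igd.

EPS0 = 8.854e-12            # vacuum permittivity, F/m
K_ONO, K_OX = 5.0, 3.9      # assumed relative permittivities (ONO, oxide)

def cap(k, area_m2, thickness_m):
    """Parallel-plate capacitance."""
    return k * EPS0 * area_m2 / thickness_m

area = 30e-9 * 30e-9                 # hypothetical 30 nm x 30 nm facing area
c_igd = cap(K_ONO, area, 12e-9)      # IGD between FG and CG (12 nm assumed)
c_tun = cap(K_OX, area, 8e-9)        # tunnel oxide between FG and pillar

gcr = c_igd / (c_igd + c_tun)
q_trapped = -30 * 1.602e-19          # 30 trapped electrons (hypothetical)
dvt = -q_trapped / c_igd             # electrons raise Vt, so the shift is positive
print(f"GCR = {gcr:.2f}, Vt shift from trapped charge = {dvt:.2f} V")
```

The volt-scale shifts this toy model produces are in the same range as the bias differences discussed for FIG. 4, which is the motivation for confining stored charge to the FG rather than the nitride.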
[0051] The above description and the drawings illustrate some embodiments of the invention to enable those skilled in the art to practice the embodiments of the invention. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Examples merely typify possible variations. Portions and features of some embodiments may be included in, or substituted for, those of others. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. |
Electroless plating can be utilized to form electrical interconnects associated with semiconductor substrates. For instance, a semiconductor substrate can be formed to have a dummy structure thereover with a surface suitable for electroless plating, and to also have a digit line thereover having about the same height as the dummy structure. A layer can be formed over the dummy structure and digit line, and openings can be formed through the layer to the upper surfaces of the dummy structure and digit line. Subsequently, a conductive material can be electroless plated within the openings to form electrical contacts within the openings. The opening extending to the dummy structure can pass through a capacitor electrode, and accordingly the conductive material formed within such opening can be utilized to form electrical contact to the capacitor electrode. |
CLAIMS
The invention claimed is:
1. A semiconductor processing method for forming an electrical contact, comprising: providing a semiconductor substrate having a surface suitable for electroless plating, a layer over the surface, and an electrically conductive node supported by the layer; forming an opening through the layer and to the surface, a periphery of the opening including an electrically conductive portion of the electrically conductive node; and electroless plating a conductive material within the opening and from the suitable surface, the electroless-plated material forming an electrical contact to the electrically conductive node.
2. The method of claim 1 wherein the layer is electrically insulative.
3. The method of claim 1 wherein an insulative material is over the node, and wherein the opening extends through the insulative material.
4. The method of claim 3 wherein the insulative material over the node is part of the layer supporting the node.
5. The method of claim 2 wherein the layer is comprised by a stack of two or more electrically insulative layers, and wherein the opening extends through the stack.
6. The method of claim 2 wherein the layer comprises BPSG.
7. The method of claim 1 wherein the opening is formed to a depth of at least about 3 microns.
8. The method of claim 1 wherein the electroless-plated conductive material comprises one or both of nickel and cobalt.
9. The method of claim 1 wherein the node includes an electrically conductive layer, and wherein the opening extends through the electrically conductive layer.
10. The method of claim 1 wherein the semiconductor substrate comprises a monocrystalline silicon base supporting a structure having the surface suitable for electroless plating.
11. The method of claim 10 wherein the structure is a block over the monocrystalline silicon base; and wherein the surface suitable for electroless plating is an uppermost surface of the block.
12. The method of claim 11 wherein the structure is a block over the monocrystalline silicon base; and wherein the surface suitable for electroless plating comprises one or more of palladium, zinc, silver, nickel and cobalt.
13. The method of claim 11 wherein the structure is a block over the monocrystalline silicon base; and wherein the surface suitable for electroless plating comprises one or both of nickel and cobalt.
14. A semiconductor processing method for forming an electrical contact, comprising: providing a semiconductor substrate which supports an electrically insulative material and a pair of electrical nodes, the electrical nodes being a first node and a second node, a first opening extending through the electrically insulative material to the first node and a second opening extending through the electrically insulative material to the second node, the first node being at a first elevational height over the substrate and the second node being at a second elevational height over the substrate, the first elevational height being less than the second elevational height and accordingly the first opening being deeper than the second opening, the first electrical node having a first surface exposed within the first opening and the second electrical node having a second surface exposed within the second opening, the first surface being suitable for electroless plating and the second surface not being suitable for electroless plating; electroless plating a first conductive material within the first opening and from the first surface to form a first conductive material plug extending to a height within the first opening that is about the same as the second elevational height; activating the second surface to render the second surface suitable for electroless plating; and after activating the second surface, electroless plating a second conductive material within the first and second openings, the second conductive material within the first opening forming a second conductive material plug extending upwardly from the first conductive material plug, and the second conductive material within the second opening forming a second conductive material plug extending upwardly from the second surface.
15. The method of claim 14 wherein the first and second conductive materials are compositionally the same as one another.
16. The method of claim 14 wherein the second conductive material plugs within the first and second openings extend to an upper surface of the electrically insulative material.
17. The method of claim 14 wherein the second conductive material plugs within the first and second openings extend upwardly beyond an upper surface of the electrically insulative material.
18. The method of claim 14 wherein the first node is part of a digit line and the second node is part of a capacitor electrode.
19. A semiconductor processing method for forming electrical contacts to a capacitor electrode and a digit line, comprising: providing a semiconductor substrate, the semiconductor substrate supporting a digit line and a spacer structure, the digit line comprising a region and the spacer structure comprising another region; the digit line region having an upper surface and the spacer structure region having another upper surface, the digit line region upper surface being about the same elevational height over the substrate as the spacer structure region upper surface, the semiconductor substrate comprising a first insulative material over the digit line region and a second insulative material over the spacer structure region, the semiconductor substrate comprising a capacitor electrode supported by the substrate; forming openings through the first and second insulative materials, the opening through the first insulative material being a first opening and extending to the upper surface of the digit line region, the opening through the second insulative material being a second opening and extending to the upper surface of the spacer structure region, the second opening having a periphery which includes an electrically conductive portion of the capacitor electrode; and electroless plating a conductive material within the first and second openings, the electroless plating initiating from the upper surfaces of the digit line region and spacer structure region, the electroless-plated material forming an electrical contact to the digit line in the first opening and forming an electrical contact to the capacitor electrode in the second opening.
20. The method of claim 19 wherein the capacitor electrode comprises one or more of TiN, WN, WSi and conductively-doped silicon, with the listed compositions being shown in terms of the elements contained therein rather than in terms of a particular stoichiometry of the elements within the compositions.
21. The method of claim 19 wherein the spacer structure is a dummy structure.
22. The method of claim 19 wherein the first and second insulative materials are comprised by a common layer.
23. The method of claim 22 wherein the common layer comprises BPSG.
24. The method of claim 22 wherein the common layer comprises a thickness of at least about 3 microns over the digit line region and spacer structure region.
25. The method of claim 22 wherein the second opening extends through the capacitor electrode.
26. The method of claim 24 wherein the second opening extends through a segment of the capacitor electrode, and wherein the common layer comprises a thickness of less than or equal to about 1 micron over the segment of the capacitor electrode.
27. The method of claim 19 wherein the capacitor electrode is a capacitor plate electrode, the method further comprising: forming a capacitor storage node electrode supported by the substrate; forming at least one dielectric material over the capacitor storage node electrode; and forming the capacitor plate electrode over the at least one dielectric material.
28. The method of claim 27 wherein the second opening extends through the capacitor plate electrode and the at least one dielectric material, but does not extend through the capacitor storage node electrode.
29. The method of claim 19 wherein the upper surfaces of the digit line region and spacer structure region have the same composition as one another prior to formation of the layer.
30. The method of claim 29 wherein the upper surfaces of the digit line region and spacer structure region have a suitable composition for the electroless plating prior to the formation of the layer.
31. The method of claim 30 wherein the suitable composition for the electroless plating comprises one or more of palladium, silver, zinc, nickel and cobalt.
32. The method of claim 30 wherein the suitable composition for the electroless plating comprises one or both of nickel and cobalt.
33. The method of claim 29 wherein the upper surfaces of the digit line region and spacer structure region do not have a suitable composition for the electroless plating prior to the formation of the layer, and are activated after the formation of the first and second openings to be suitable for the electroless plating.
34. The method of claim 33 wherein the activation comprises provision of a sufficient amount of one or more of palladium, silver, zinc, nickel and cobalt on the upper surfaces of the digit line region and spacer structure region to make the electroless plating spontaneous.
35. A semiconductor structure, comprising: a semiconductor substrate; a digit line supported by the semiconductor substrate, the digit line comprising a first region, the first region having an upper surface at a first elevational height over the semiconductor substrate; a dummy structure supported by the semiconductor substrate, the dummy structure comprising a second region, the second region having an upper surface at a second elevational height over the semiconductor substrate, the first and second elevational heights being about the same as one another; a first insulative material supported by the semiconductor substrate and being over the digit line region; a second insulative material supported by the semiconductor substrate and being over the dummy structure region; a capacitor structure supported by the second insulative material, the capacitor structure including a first capacitor electrode, a second capacitor electrode and at least one dielectric material between the first and second capacitor electrodes; a first conductive interconnect extending upwardly from the digit line region and through the first insulative material, the first conductive interconnect being of a composition; and a second conductive interconnect extending upwardly from the dummy structure region, through only one of the first and second capacitor electrodes and through the second insulative material; the second conductive interconnect being of the same composition as the first conductive interconnect.
36. The structure of claim 35 wherein said only one of the first and second capacitor electrodes comprises one or more of TiN, WN and WSi, with the listed compositions being shown in terms of the elements contained therein rather than in terms of a particular stoichiometry of the elements within the compositions.
37. The structure of claim 35 wherein the first and second insulative materials are comprised by a common layer.
38. The structure of claim 37 wherein the common layer comprises BPSG.
39. The structure of claim 37 wherein the common layer is over the capacitor structure and below the capacitor structure, and wherein the common layer comprises a thickness of at least about 3 microns proximate the digit line region and proximate the dummy structure region.
40. The structure of claim 39 wherein the second conductive interconnect extends through a segment of said only one of the first and second capacitor electrodes, and wherein the common layer comprises a thickness of less than or equal to about 1 micron over said segment.
41. The structure of claim 35 wherein the second conductive interconnect extends through the at least one dielectric material.
42. The structure of claim 35 wherein the composition of the first and second conductive interconnects comprises one or more of palladium, silver, zinc, nickel and cobalt.
43. The structure of claim 42 wherein the composition of the first and second conductive interconnects comprises palladium.
44. The structure of claim 42 wherein the composition of the first and second conductive interconnects comprises silver.
45. The structure of claim 42 wherein the composition of the first and second conductive interconnects comprises zinc.
46. The structure of claim 42 wherein the composition of the first and second conductive interconnects comprises nickel.
47. The structure of claim 42 wherein the composition of the first and second conductive interconnects comprises cobalt.
48. A DRAM array comprising the structure of claim 42.
49. An electronic system comprising the DRAM array of claim 48. |
Semiconductor Processing Methods For Forming Electrical Contacts, And Semiconductor Structures
TECHNICAL FIELD
[0001] The invention pertains to semiconductor processing methods for forming electrical contacts, and also pertains to semiconductor structures.
BACKGROUND OF THE INVENTION
[0002] Semiconductor fabrication processes frequently involve formation of electrical interconnects within openings. The desired aspect ratio of the openings is increasing for various reasons, including, for example, to compensate for losses in capacitance or inductance. As the aspect ratio increases, it becomes increasingly difficult to conformally fill openings with traditional processes. Figs. 1 and 2 illustrate an exemplary prior art process, and a problem that can occur during an attempt to form an electrical interconnection within an opening.
[0003] Fig. 1 shows a semiconductor construction 10 at a preliminary processing stage. Construction 10 comprises a base 12. The base can comprise, consist essentially of, or consist of monocrystalline silicon lightly-doped with background p-type dopant. The base 12 can be referred to as a "substrate", and/or various combinations of structures can be referred to as a "substrate". To aid in interpretation of the claims that follow, the terms "semiconductive substrate" and "semiconductor substrate" are defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above.
[0004] A conductive block 14 is formed over base 12. Block 14 can correspond to, for example, a digit line.
[0005] An insulative material 16 is formed over base 12 and over block 14. Insulative material 16 can comprise, for example, borophosphosilicate glass (BPSG).
[0006] An opening 18 is etched through insulative material 16 to an upper surface of conductive block 14. Opening 18 can be formed utilizing, for example, photolithographic processing to generate a patterned photoresist mask (not shown) which defines a location for opening 18, followed by an etch into material 16 to generate the opening 18, and subsequent removal of the photoresist mask. The opening is shown having vertical sidewalls, but it is to be understood that such is an idealized structure. Frequently the opening will have non-vertical sidewalls due to limitations in etching processes.
[0007] Referring to Fig. 2, a first conductive material 20 is formed over insulative material 16 and within opening 18. Conductive material 20 can comprise, for example, a metal nitride (such as titanium nitride) and can be formed by, for example, chemical vapor deposition. A second conductive material 22 is formed over conductive material 20. Second conductive material 22 can comprise, for example, tungsten and can also be formed by, for example, chemical vapor deposition. The first layer 20 can function as an adhesive for adhering the second layer 22 to insulative material 16.
[0008] A problem that occurs during deposition of one or both of materials 20 and 22 is that the conductive material can grow non-conformally at upper corners proximate opening 18 to form extensions 24. The extensions 24 can ultimately pinch off the top of opening 18 before the opening has been conformally filled with conductive materials 20 and 22. Accordingly, a void 26 remains in the opening. Such void is frequently referred to as a "keyhole". The shape of the opening 18 and keyhole 26 are shown diagrammatically in Figs. 1 and 2, and it is to be understood that the opening and keyhole can have other shapes. Such other shapes can include a concave "bow" near the top of opening 18 due to limitations in the ability of etches to form the shown vertical sidewalls. The bow can provide additional complications to a conformal fill which can exacerbate keyhole problems and lead to formation of large keyholes just below the upper surface of material 16. Such large keyholes can undesirably be exposed in subsequent polishing processes. It is desired to develop new methods for filling openings which alleviate, and preferably prevent, formation of keyholes.

The pinch-off geometry of paragraph [0008] can be sketched with simple arithmetic. The step coverage and opening width below are hypothetical; the point is only that a deposition which grows faster at the top corners closes the mouth of a high-aspect-ratio opening before the opening fills, while a fill that grows only from the bottom cannot pinch off:

```python
# Keyhole model for a non-conformal deposition (hypothetical values).
# Step coverage s < 1 means the film deep in the opening grows slower
# than the film at the top corners; the mouth pinches off when the top
# film reaches half the opening width, trapping a void of width
# (1 - s) * width.

width_nm = 100.0   # opening width (hypothetical)
s = 0.6            # step coverage: bottom growth rate / top growth rate

top_film_at_pinch = width_nm / 2          # film thickness that closes the mouth
deep_film_at_pinch = s * top_film_at_pinch
void_width = width_nm - 2 * deep_film_at_pinch

print(f"void (keyhole) width after pinch-off: {void_width:.0f} nm")
# A bottom-up electroless fill grows only from the exposed conductive
# surface at the bottom of the opening, so there is no overhang to
# pinch off and no void is trapped.
```

This is why the methods summarized below initiate plating from a surface at the bottom of the opening rather than coating its sidewalls.

SUMMARY OF THE INVENTION
[0009] In one aspect, the invention encompasses a semiconductor processing method for forming an electrical contact. A semiconductor substrate is provided. The substrate has a surface suitable for electroless plating, a layer over the surface, and a node supported by the layer. An opening is formed through the layer and to the suitable surface. A periphery of the opening includes an electrically conductive portion of the node. A conductive material is electroless plated within the opening, with the electroless plating being initiated from the suitable surface. The electroless-plated material forms an electrical contact to the node.
[0010] In one aspect, the invention encompasses a semiconductor processing method for forming electrical contacts to a capacitor electrode and a digit line. A semiconductor substrate is provided. The substrate supports a digit line and a spacer structure. The digit line comprises a region, and the spacer structure comprises another region. The digit line region has an upper surface, and the spacer structure region has another upper surface. The digit line region upper surface is at about the same elevational height over the substrate as the spacer structure region upper surface. The semiconductor substrate further comprises electrically insulative material over the digit line region and the spacer structure region, and a capacitor electrode supported by the insulative material. Openings are formed through the insulative material. One of the openings is a first opening that extends to the upper surface of the digit line region, and another of the openings is a second opening and extends to the upper surface of the spacer structure region. The second opening has a periphery which includes an electrically conductive portion of the capacitor electrode. A conductive material is electroless plated within the first and second openings. The electroless plating initiates from the upper surfaces of the digit line region and the spacer structure region. The electroless-plated material forms an electrical contact with the digit line in the first opening, and forms an electrical contact with the capacitor electrode in the second opening. The spacer structure can be referred to as a "dummy" structure in particular aspects of the invention to indicate that the structure is an electrical dead-end and thus serves no electrical purpose. The spacer structure instead has the physical purpose of mimicking the height of the digit line.
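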
In other words, the term "dummy structure" is to be understood herein as referring to a structure which is utilized to mimic a physical property of another structure (such as to mimic the height of a digit line structure), and which is circuit inoperable (i.e., which is not part of a current flow path of a circuit). The dummy structure can comprise a single layer or a combination of different layers. [0011] In one aspect, the invention encompasses a semiconductor structure. The structure includes a semiconductor substrate, a digit line supported by the substrate, and a spacer structure supported by the substrate. The digit line can comprise a single layer or multiple layers, and frequently will comprise a stack of TiN/silicon/WSix; similarly, the spacer structure can comprise a single layer or multiple layers. The digit line comprises a first region having an upper surface at a first elevational height over the semiconductor substrate. The spacer structure comprises a second region having an upper surface at an elevational height over the substrate which is about the same as the first elevational height. The spacer structure is a dummy structure. The semiconductor structure includes electrically insulative material supported by the semiconductor substrate. The electrically insulative material is over the digit line and the spacer structure regions. A capacitor structure is supported by the insulative material. The capacitor structure includes a first capacitor electrode, a second capacitor electrode and at least one dielectric material between the first and second capacitor electrodes. A first conductive interconnect extends upwardly from the digit line region and through the insulative material, and a second conductive interconnect extends upwardly from the spacer structure region, through only one of the first and second capacitor electrodes, and through the insulative material. The first and second conductive interconnects are of the same composition as one another. BRIEF DESCRIPTION OF THE DRAWINGS [0012] Preferred embodiments of the invention are described below with reference to the following accompanying drawings. [0013] Fig. 1 is a diagrammatic, cross-sectional view of a semiconductor wafer fragment shown at a preliminary prior art processing stage. [0014] Fig. 2 is a view of the Fig. 1 wafer fragment shown at a prior art processing stage subsequent to that of Fig. 1. [0015] Fig. 3 is a diagrammatic, cross-sectional view of a semiconductor wafer fragment shown at a preliminary processing stage of an exemplary method of the present invention. [0016] Fig. 4 is a view of the Fig. 3 wafer fragment shown at a processing stage subsequent to that of Fig. 3. [0017] Fig. 5 is a view of the Fig. 3 wafer fragment shown at a processing stage subsequent to that of Fig. 4. [0018] Fig. 6 is a diagrammatic, cross-sectional view of a semiconductor wafer fragment shown at a preliminary processing stage alternative to that of Fig. 3. [0019] Fig. 7 is a view of the Fig. 6 wafer fragment shown at a processing stage subsequent to that of Fig. 6. [0020] Fig. 8 is a view of the Fig. 6 wafer fragment shown at a processing stage subsequent to that of Fig. 7. [0021] Fig. 9 is a view of the Fig. 6 wafer fragment shown at a processing stage subsequent to that of Fig. 8. [0022] Fig. 10 is a view of the Fig. 6 wafer fragment shown at a processing stage subsequent to that of Fig. 6, and in accordance with an embodiment of the invention alternative to the embodiment described previously with reference to Figs. 7-9.
[0023] Fig. 11 is a view of the Fig. 3 wafer fragment shown at a processing stage subsequent to that of Fig. 4 in accordance with a fourth aspect of the invention. [0024] Fig. 12 is a view of the Fig. 3 wafer fragment shown at a processing stage subsequent to that of Fig. 11 in accordance with the fourth aspect of the invention. [0025] Fig. 13 is a diagrammatic view of a computer illustrating an exemplary application of the present invention. [0026] Fig. 14 is a block diagram showing particular features of the motherboard of the Fig. 13 computer. [0027] Fig. 15 is a high-level block diagram of an electronic system according to an exemplary aspect of the present invention. [0028] Fig. 16 is a simplified block diagram of an exemplary memory device according to an aspect of the present invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS [0029] The invention includes methods which utilize electroless plating to form electrical interconnects within openings. An advantage of electroless plating is that such can be conducted to fill an opening from the bottom of the opening to the top, and accordingly can fill high aspect ratio openings without the prior art problem of pinching off a top of the opening during the fill process. [0030] One aspect of the invention is to utilize electroless plating to form interconnects to two or more circuit structures which are at different elevational heights relative to one another. Fig. 3 illustrates a semiconductor construction 50 which can be utilized in such aspect of the invention. Construction 50 comprises a base 52 which can comprise monocrystalline silicon, and which can have the same construction as discussed previously with reference to the base 12 of Figs. 1 and 2. A conductive structure 54 is formed over base 52. Structure 54 can correspond to, for example, a digit line. Although the structure is shown being uniformly conductive throughout its thickness, it is to be understood that the structure can comprise layers of electrically insulative and electrically conductive materials. The uppermost conductive material will have a top surface, and such top surface corresponds to an uppermost conductive surface 55 of structure 54. If structure 54 comprises a stack of electrically insulative and electrically conductive materials, the top surface 55 can be the uppermost surface of the stack, or can be covered by an electrically insulative cap. Regardless, the upper conductive surface will typically ultimately be exposed in subsequent processing, such as, for example, the processing described below with reference to Fig. 4. [0031] The Fig. 3 structure comprises an electrically insulative layer 56 over base 52 and over structure 54. Electrically insulative layer 56 can comprise any suitable material, including, for example, BPSG. [0032] A capacitor structure 58 is supported by electrically insulative layer 56. Capacitor structure 58 comprises a first capacitor electrode 60, a second capacitor electrode 62, and at least one dielectric material 64 between capacitor electrodes 60 and 62. Capacitor electrodes 60 and 62 can comprise any suitable electrically conductive materials, including, for example, metals, metal compositions, and/or conductively-doped silicon. In particular aspects, electrode 60 corresponds to a storage node of the capacitor and electrode 62 corresponds to a plate electrode of the capacitor.
One or both of the capacitor electrodes can, in some aspects, comprise conductively-doped silicon (such as conductively-doped polycrystalline silicon) and/or a metal composition, such as, for example, one or more of TiN, WN and WSi; with the listed compositions being shown in terms of the elements contained therein rather than in terms of a particular stoichiometry of the elements within the compositions. [0033] The dielectric material 64 can comprise any suitable material, including, for example, one or more of silicon dioxide, silicon nitride, and various high-k dielectric materials. [0034] Capacitor storage node 60 is shown electrically connected to a transistor device 69. As is known to persons of ordinary skill in the art, transistor device 69 would typically comprise a gate (not shown) and a pair of source/drain regions (not shown). Storage node 60 would be connected to one of the source/drain regions, and the other of the source/drain regions would be connected to a bit line (or digit line) (not shown). Accordingly, the transistor gate would gatedly connect storage node 60 to the bit line. The capacitor structure 58 thus can be utilized as a memory storage unit of a memory cell. Specifically, the combination of a transistor structure with a capacitor is a typical unit cell of a dynamic random access memory (DRAM) device. A plurality of the capacitors and transistors can be incorporated into a DRAM array, as is known to persons of ordinary skill in the art. [0035] The shown capacitor construction 58 comprises storage node 60 in a container shape, and comprises dielectric material 64 and capacitor plate electrode 62 extending within the container shape of storage node 60. The shown capacitor construction also comprises horizontally-extending segments 66 and 68 laterally adjacent the portions of the materials 64 and 62 within the container opening. Horizontally-extending segments 66 and 68 can be exactly horizontal, can be substantially horizontal, or can simply be horizontal relative to portions of materials 64 and 62 along the sidewalls of the container opening. Capacitor plate electrode 62 has an upper surface 63 which extends along the horizontally-extending segments 66 and 68, and which also extends within the container shape of storage node 60. The illustrated capacitor construction is an exemplary construction, and it is to be understood that numerous other shapes of capacitor constructions can be utilized in various aspects of the invention. [0036] As discussed previously, the term "substrate" is defined herein to be broad enough to encompass any supporting structure or combination of structures, and the term "semiconductor substrate" is broad enough to encompass any combination of structures provided that one of the structures contains a semiconductor material. Accordingly, either base 52 or structure 54 can be considered a substrate in various aspects of the invention, and also the combination of structure 54 and base 52 can be considered a substrate (or semiconductor substrate) in various aspects of the invention. Additionally, capacitor structure 58 can be considered a substrate in various aspects of the invention, and can be considered a semiconductor substrate if either of the electrodes comprises conductively-doped silicon. Further, the combination of capacitor 58 with base 52 can be considered a semiconductor substrate.
Also, the combination of capacitor structure 58, layer 56 and base 52 can be considered a semiconductor substrate, as can the combination of capacitor 58, layer 56, structure 54 and base 52. [0037] Although layer 56 is shown comprising a homogeneous composition, it is to be understood that the layer can be replaced with a stack of layers. The stacked layers can have the same composition as one another or different compositions. Also, although the same material 56 is shown over the structure 54 and around the capacitor 58, it is to be understood that different insulative materials can be over the structure 54 than are around the capacitor 58 in some aspects of the invention. Thus, the insulative material over structure 54 can be referred to as a first insulative material and the insulative material proximate the capacitor 58 can be referred to as a second insulative material. In the shown aspect of the invention, the first and second insulative materials are comprised by common layer 56, and in other aspects of the invention the first and second insulative materials can differ from one another. Further, although the material of layer 56 is shown both above and below capacitor 58, it is to be understood that a different insulative material can be over capacitor 58 than is under capacitor 58 in some aspects of the invention. If the insulative material over the capacitor is different than the insulative material under the capacitor, the insulative material under the capacitor can be referred to as a layer supporting the capacitor and the insulative material over the capacitor can be referred to as being supported by the capacitor. [0038] Referring to Fig. 4, openings 70 and 72 are etched through layer 56 to upper surface 63 of capacitor structure 58 and to upper surface 55 of conductive structure 54. Openings 70 and 72 can be formed utilizing photolithographic processing and an appropriate etch. Specifically, photolithographic processing can be used to form a patterned photoresist mask (not shown) which defines the locations of openings 70 and 72, a subsequent etch can be used to form the openings through layer 56, and then the photoresist mask can be removed. [0039] Openings 70 and 72 have a comparable width to one another, but opening 70 is much deeper than is opening 72. For instance, opening 70 can have a depth of about 3 microns and opening 72 can have a depth of about 1 micron in particular aspects of the invention. In other words, a thickness of layer 56 over segment 68 of capacitor structure 58 can be about 1 micron and a thickness of layer 56 over upper surface 55 of conductive structure 54 can be about 3 microns in particular aspects of the invention. [0040] Although layer 56 is shown as a homogeneous material, at least a portion of layer 56 can be replaced by a stack of insulative materials as discussed previously. In such aspects of the invention, at least one of openings 70 and 72 can extend through the stack of insulative materials. [0041] Referring to Fig. 5, a conductive material 80 is electroless plated within openings 70 and 72 to form electrical interconnects 82 and 84 extending within openings 70 and 72, respectively. Conductive material 80 can comprise, consist essentially of, or consist of, for example, one or more of palladium, zinc, silver, nickel and cobalt. In particular aspects, conductive material 80 will comprise, consist essentially of, or consist of nickel, cobalt, nickel-containing alloys or cobalt-containing alloys. 
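The depth figures in [0039] already quantify the overburden problem that [0045] below describes: with equal plating rates, the excess standing over the shallow opening equals the depth difference. A minimal sketch of that arithmetic; the lateral spacing is an assumed, illustrative value:

```python
# Bottom-up electroless fill of two openings of equal width but unequal
# depth ([0039]: opening 70 is about 3 microns deep, opening 72 about
# 1 micron). With equal plating rates, the hump left over the shallow
# opening once the deep one just fills equals the depth difference.

d70 = 3.0  # depth of opening 70, microns ([0039])
d72 = 1.0  # depth of opening 72, microns ([0039])

plated = d70                 # plated length needed to just fill opening 70
hump_over_72 = plated - d72  # excess standing above opening 72
print(f"hump over opening 72: {hump_over_72:.1f} um")  # -> 2.0 um

spacing = 0.8  # lateral spacing between the openings, microns (assumed)
if spacing <= (d70 - d72) / 2:  # rule of thumb stated in [0045]
    print("overfill from opening 72 may pinch off opening 70 before it fills")
```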
[0042] The electroless plating initiates at upper surfaces 55 and 63 of structures 54 and 62, and accordingly conductive material 80 grows within openings 70 and 72 from the bottoms of the openings to the tops of the openings. Such bottom-up growth can uniformly fill the openings. [0043] As is known to persons of ordinary skill in the art, electroless plating initiates from surfaces which are suitable for the electroless plating. A surface suitable for electroless plating is a surface on which the electroless plating self-initiates from a bath rather than requiring a catalyst to initiate. Suitable surfaces can comprise, for example, one or more of palladium, zinc, silver, nickel and cobalt. Thus, surfaces 55 and 63 can be rendered suitable for initiation of electroless plating by forming the surfaces from materials comprising, consisting essentially of, or consisting of one or more of palladium, zinc, silver, nickel and cobalt. In some aspects, surfaces 55 and 63 can comprise compositions suitable for electroless plating prior to formation of layer 56 over the surfaces. Alternatively, surfaces 55 and 63 can be formed of compositions which are not suitable for electroless plating, and which are subsequently activated after formation of openings 70 and 72. The surfaces can be activated by exposing the surfaces to one or more of nickel, cobalt, palladium, zinc and silver to either incorporate one or more of nickel, cobalt, palladium, zinc and silver into the composition of the upper surfaces or to form a layer containing one or more of nickel, cobalt, palladium, zinc and silver over the upper surfaces. Thus, the composition of surfaces 55 and 63 can be, in particular aspects of the invention, unsuitable for electroless plating when layer 56 is formed over the surfaces, and then portions of the surfaces can be rendered suitable for electroless plating after such portions are exposed through openings 70 and 72. [0044] Compositions unsuitable for electroless plating are typically compositions which do not contain at least one of nickel, cobalt, palladium, zinc or silver in sufficient quantity to initiate electroless plating. Compositions suitable for initiation of electroless plating without activation can be referred to as "self-catalyzing" surfaces, and surfaces needing activation to be suitable for initiation of electroless plating can be referred to as "non-self-catalyzing" surfaces. [0045] In particular aspects of the invention, opening 72 will have a depth which is much less than the depth of opening 70. The electroless plating will form about the same amount of material within opening 72 as is formed within opening 70. Accordingly, formation of sufficient material to fill opening 70 will result in a large amount of excess material formed over opening 72. Thus, a large hump of excess material is shown formed over opening 72, and a substantially smaller hump of material is shown formed over opening 70. The disparity in the thickness of excess material 80 over opening 72 relative to the thickness over opening 70 can complicate subsequent processing. Specifically, it can be difficult to remove the excess conductive material by planarization when the thickness of the excess material has a large variation across the upper surface of layer 56. Additionally, if the spacing between openings 70 and 72 is less than about half of the height difference, material 80 overfilling opening 72 can pinch off opening 70 before opening 70 is filled.
[0046] Figs. 6-8 illustrate an aspect of the invention which alleviates the disparate thicknesses of material 80 over openings 70 and 72. Referring initially to Fig. 6, a construction 100 is illustrated at a preliminary processing stage of a second aspect of the present invention. Construction 100 comprises a number of features identical to those described previously with reference to Figs. 3-5, and such features are labeled the same in Fig. 6 as they were labeled in Figs. 3-5. [0047] The construction 100 of Fig. 6 differs from the construction 50 of Figs. 3-5 in that a spacer structure 102 is provided in construction 100. Spacer structure 102 comprises a conductive material 104 having an upper surface 105, and is illustrated shaped as a block in the cross-sectional view of Fig. 6. Upper conductive surface 105 typically comprises the same chemical composition as upper conductive surface 55 of structure 54. Further, upper conductive surface 105 has about the same elevational height over base 52 as does upper conductive surface 55. Accordingly, structure 102 can be considered a spacer having an upper conductive surface 105 spaced from base 52 by about the same distance as upper surface 55 of structure 54 is spaced from base 52. [0048] Structure 102 can be referred to as a "dummy" structure if the structure has no purpose except to space upper conductive surface 105 from base 52. In such aspects, structure 102 is not connected to circuit devices, and ultimately is an electrical dead end for any electrical interconnect that extends to structure 102. In particular aspects of the invention, structure 54 is a digit line and structure 102 is a "dummy" structure that mimics at least the portion of the digit line where an electrical interconnection is ultimately to be formed. [0049] The digit line 54 will extend into and out of the page in the shown cross section of Fig. 6. Accordingly, the portion of the digit line shown in Fig. 6 corresponds to a specific region of the digit line. Other regions of the digit line can have a different thickness of conductive material than the shown thickness, and accordingly upper surface 55 may be at a different elevational height at regions of digit line 54 that are not visible in Fig. 6. Structure 102 can similarly be formed to be a line extending into and out of the page relative to the view of Fig. 6, and upper surface 105 can similarly have different elevational heights relative to base 52 at regions of structure 102 that are not visible in the view of Fig. 6. However, upper conductive surface 105 is at the same elevational height over base 52 as is upper conductive surface 55 of the digit line in at least the regions of the structures 102 and 54 visible in Fig. 6. [0050] Structure 102 is shown comprising a stack of materials, and specifically is shown comprising conductive material 104 over an electrically insulative material 106. It is to be understood that structure 102 can comprise any of numerous configurations which can include a conductive material alone, or a conductive material in combination with insulative materials. Further, although conductive material 104 is shown as the uppermost material of the stack of spacer 102, it is to be understood that an electrically insulative cap could be formed over conductive material 104. Ultimately, however, an opening is typically formed which extends to the uppermost conductive surface 105 of structure 102, and accordingly such opening would extend through any insulative cap formed over uppermost surface 105.
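The point of spacer structure 102 can be stated in the same arithmetic as the overfill sketch above: once surfaces 55 and 105 sit at the same height, the two openings are equally deep and the overburden disparity goes to zero, which is what [0053] below makes explicit. A minimal counterpart sketch with illustrative depths:

```python
# With dummy spacer 102 raising surface 105 to the height of surface 55,
# openings 120 and 122 are plated from equally deep bottoms, so the
# overburden is uniform and planarization removes a constant excess
# instead of the 2-micron hump computed above. Depths are illustrative.

d120 = 3.0  # depth of opening 120, microns
d122 = 3.0  # depth of opening 122, microns (equalized by spacer 102)

disparity = abs(d120 - d122)
print(f"overburden disparity with spacer: {disparity:.1f} um")  # -> 0.0 um
```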
[0051] Structure 102 is beneath a portion of capacitor 58, and in the shown aspect of the invention is beneath horizontally-extending segment 68 of capacitor plate electrode 62. [0052] Referring to Fig. 7, openings 120 and 122 are formed through layer 56 to upper conductive surfaces 55 and 105, respectively. Openings 120 and 122 can be referred to as first and second openings in the discussion that follows. Opening 120 is identical to the opening 70 discussed previously (Fig. 4). Opening 122 is in an identical location as the opening 72 discussed previously (Fig. 4), but unlike opening 72 extends entirely through capacitor dielectric 64 and capacitor plate electrode 62. Opening 122 thus has a periphery which includes an electrically conductive portion of electrode 62. Such electrically conductive portion of the periphery of opening 122 is labeled as 124 in Fig. 7. Openings 120 and 122 can be referred to as being formed through first and second insulative materials, respectively. In the shown aspect of the invention the first and second insulative materials are comprised by a common layer, but, as discussed above, the first and second insulative materials can differ from one another in other aspects of the invention. [0053] Since conductive surfaces 55 and 105 are at about the same elevational height as one another (and preferably are at an identical elevational height within the tolerances of a semiconductor fabrication process), openings 120 and 122 will be about the same depth as one another (and preferably will be at an identical depth to one another within the limitations of tolerances associated with a particular fabrication process). [0054] In particular aspects of the invention, upper surface 105 can be considered a portion of a semiconductor substrate. Further, surface 105 will ultimately be suitable for electroless plating. When surface 105 is suitable for electroless plating, the combination of structure 102 and semiconductor base 52 can be considered a semiconductor substrate having the surface 105 suitable for electroless plating. Layer 62 can be considered an electrically conductive node, and accordingly opening 122 can be considered to be formed through the electrically conductive node and to the surface 105 suitable for electroless plating. [0055] Upper surfaces 55 and 105 can be formed to be suitable for initiating electroless plating by patterning materials 54 and 104 from compositions suitable for electroless plating and/or by activating upper surfaces of materials 54 and 104 after formation of openings 120 and 122. Accordingly, upper surfaces 55 and 105 can be suitable for electroless plating prior to provision of layer 56; or can be rendered suitable by activation occurring after formation of layer 56, and specifically after formation of openings 120 and 122 extending through layer 56. In preferred aspects, surface 105 of material 104 is identical in composition to surface 55 of material 54. In such aspects, surfaces 55 and 105 can comprise one or more of palladium, zinc, silver, nickel and cobalt. It can be preferred that surfaces 55 and 105 comprise one or both of nickel and cobalt in particular semiconductor processing applications. [0056] Referring to Fig. 8, conductive material 80 is electroless plated within openings 120 and 122. Conductive material 80 can comprise, consist essentially of, or consist of one or more of palladium, zinc, silver, nickel and cobalt; and in particular aspects will consist essentially of, or consist of one or both of nickel and cobalt.
The electroless plating of material 80 can comprise conventional methods. For instance, the electroless plating can be conducted utilizing one or both of cobalt sulfate and nickel sulfate together with appropriate reducing agents, such as, for example, ammonium hypophosphite and/or dimethylamine borane. Conductive material 80 would typically comprise some phosphorous and/or boron in addition to the electroless-plated metal due to boron and/or phosphorous being present in reducing agents utilized during the electroless-plating process. Accordingly, conductive material 80 can, in particular aspects, comprise, consist essentially of, or consist of one or more of palladium, zinc, silver, nickel and cobalt, in combination with one or both of phosphorous and boron. Choosing one or more of palladium, zinc, silver, nickel and cobalt can enable a plating bath to be made stable, and yet still sufficient to initiate plating of material 80 on materials 54 and 104 but not on materials 62 and 64. The materials 62 and 64 would need appropriate activation (with, for example, one or more of Pd, Sn, Zn, etc.) for plating to initiate thereon. [0057] The conductive material within openings 120 and 122 forms conductive interconnects 130 and 132, respectively. Conductive interconnect 130 extends to structure 54. As discussed previously, structure 54 can comprise a digit line, and accordingly conductive interconnect 130 can be utilized for interconnecting the digit line to other circuitry (not shown). Conductive interconnect 132 extends to capacitor plate electrode 62, and can thus be utilized for connecting plate electrode 62 to other circuitry (not shown). Conductive interconnect 132 also extends to conductive material 104. However, in typical processing conductive material 104 will be electrically isolated from any circuitry other than conductive interconnect 132, and accordingly will be an electrical dead-end (or terminus). [0058] Referring to Fig. 9, upper surfaces of electrical interconnects 130 and 132 are planarized. Such can be accomplished utilizing, for example, chemical-mechanical polishing. Interconnects 130 and 132 are then connected to appropriate circuitry 134 and 136, respectively. Accordingly, interconnect 130 forms an electrical contact between circuitry 134 and structure 54 (with structure 54 being, for example, a digit line), and interconnect 132 forms an electrical contact between circuitry 136 and capacitor plate electrode 62. [0059] Various modifications can be made to the shown aspect of the invention, as will be understood by persons of ordinary skill in the art. For instance, although conductive interconnect 132 is shown extending through both dielectric material 64 and capacitor electrode 62, the invention can encompass other aspects (not shown) in which capacitor electrode 62 extends beyond dielectric material 64, and in which the interconnect extends only through capacitor electrode 62 rather than through both electrode 62 and dielectric material 64. As another example of a modification that can be incorporated into aspects of the invention, interconnect 132 can be formed to be adjacent an end of electrode 62 so that the conductive interconnect 132 is formed beside electrode 62, rather than through the electrode. As another example, processing of the present invention can be utilized to form an electrical connection to a node other than a capacitor electrode.
[0060] The structure of Fig. 9 can be, in particular aspects, considered to comprise a digit line 54 and a spacer structure 102 which have shown regions with upper surfaces 55 and 105, respectively, at about the same elevational height as one another. The structure further comprises a layer 56 supported by a base (or substrate) 52, and a capacitor structure supported by the layer. A first conductive interconnect 130 extends from the digit line and through the layer 56; and a second conductive interconnect 132 extends from spacer structure 102, through capacitor electrode 62 and dielectric material 64 (and not through capacitor electrode 60). The first and second electrical interconnects 130 and 132 were formed simultaneously during the same electroless plating procedure, and accordingly comprise the same composition as one another. [0061] The conductive interconnects 130 and 132 can be formed in relatively high-aspect ratio openings, with such openings being formed to any suitable depth, including, for example, depths of greater than or equal to about 3 microns. Thus, layer 56 can have a thickness of at least about 3 microns in the vicinity proximate the shown region of digit line 54, and can also comprise a thickness of at least about 3 microns proximate the shown region of spacer structure 102. [0062] Although capacitor 58 and digit line 54 are shown adjacent one another in the aspect of the invention described with reference to Fig. 9, it is to be understood that numerous intervening devices (not shown) can be provided in a space between the capacitor and the digit line, and accordingly the capacitor and the digit line can be separated by a relatively large distance in other aspects of the invention (not shown). [0063] As described with reference to Figs. 7 and 8, upper surfaces 55 and 105 of structures 54 and 104 can be suitable for electroless plating by either forming such structures from compositions suitable for initiation of electroless plating, or by activation of compositions after exposing the compositions through openings 120 and 122. A problem which can occur if surfaces 55 and 105 are activated after formation of openings 120 and 122 is that such activation may also activate exposed portions 124 (Fig. 7) of electrode 62. Accordingly, electroless plating may initiate not only at surfaces 55 and 105, but also at exposed portions 124 of electrode 62. Such can lead to undesired closure of the middle portion of opening 122 before electroless-plated material 80 completely fills a lower portion of the opening. This problem is illustrated in Fig. 10 where a keyhole 150 is shown formed within conductive interconnect 132, as would occur if the portion of opening 122 (Fig. 7) were closed at about the region of electrode 62 due to electroless plating initiation from exposed portions 124 (Fig. 7) of electrode 62. Accordingly, it can be preferred to form upper surfaces 55 and 105 of an appropriate composition from which electroless plating will initiate without activation, and further to form electrode 62 of a composition from which electroless plating will not initiate without activation. The electroless plating can then selectively initiate from surfaces 55 and 105 without initiation from exposed portions 124 (Fig. 7) of electrode 62, and accordingly keyhole 150 (Fig. 10) can be avoided.
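The failure mode of [0063] is a race between lateral closure at the activated ring 124 and the rising bottom-up front. A minimal sketch, assuming an isotropic plating rate and an illustrative opening width (the 3 micron and 1 micron depths echo the figures of [0039]):

```python
# Keyhole mechanism of [0063] / Fig. 10: if exposed portion 124 of
# electrode 62 is activated, plating grows inward from that ring while the
# bottom-up front is still rising. The ring closes the opening after w/2 of
# lateral growth; the bottom front needs (depth - electrode_depth) of
# vertical growth to reach the electrode level first. The opening width is
# an assumed value; the depths echo the figures of [0039].

w = 0.3                # opening width, microns (assumed)
depth = 3.0            # depth of opening 122, microns
electrode_depth = 1.0  # electrode 62 sits about 1 micron below the surface

ring_closes_after = w / 2.0                    # lateral growth to pinch off
bottom_reaches_ring = depth - electrode_depth  # vertical growth needed first

if ring_closes_after < bottom_reaches_ring:
    print("ring 124 closes first -> keyhole 150 trapped below electrode 62")
```

On these assumptions any opening narrower than twice the electrode-to-bottom distance closes prematurely, which is why the selective (non-activated electrode) scheme of [0063] is preferred.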
[0064] Figs. 11 and 12 illustrate a further aspect of the invention. Such aspect can follow the processing stage of Fig. 4. Referring initially to Fig. 4, surface 55 of digit line 54 can comprise a suitable material for electroless plating, while surface 63 comprises a material which is not suitable for electroless plating. Fig. 11 shows construction 50 after the electroless plating has been conducted for sufficient time to form conductive material 80 within opening 70 to approximately the same elevational level as the upper surface 63 of capacitor electrode 62. Subsequently, upper surface 63 can be activated so that the surface is now suitable for electroless plating. Such activation is represented by the thin layer 160 shown at upper surface 63 in Fig. 11. The activation can be conducted in accordance with procedures discussed previously in this application. [0065] After the activation of surface 63, the electroless plating can be continued so that conductive material 80 fills opening 70, and also fills opening 72. Since the openings 70 and 72 were approximately the same depth as one another at the initiation of the second stage of the electroless plating (i.e., at the process stage of Fig. 11), the conductive material 80 formed within openings 70 and 72 fills the openings to about the same level. Such forms caps 160 and 162 over openings 70 and 72, respectively, that are about the same size as one another. The caps can be removed by subsequent planarization, similarly to the planarization of interconnects 130 and 132 discussed previously with reference to Figs. 8 and 9. [0066] The processing of Figs. 4, 11 and 12 can be considered to comprise the following sequence. [0067] Initially, a semiconductor substrate 52 is provided, with such substrate supporting an electrically insulative material 56 and a pair of electrical nodes 54 and 62. Nodes 54 and 62 can be referred to as a first node and a second node, respectively. A first opening 70 extends through the electrically insulative material to the first node and a second opening 72 extends through the electrically insulative material to the second node at the processing stage of Fig. 4. The first node 54 is at a first elevational height over the substrate and the second node 62 is at a second elevational height over the substrate, with the first elevational height being less than the second elevational height. Accordingly the first opening 70 is deeper than the second opening 72. The first electrical node has a first surface exposed within the first opening and the second electrical node has a second surface exposed within the second opening. The first surface is suitable for electroless plating and the second surface is not suitable for electroless plating at the processing stage of Fig. 4. Subsequently, a first conductive material 80 is electroless plated within the first opening to form a first conductive material plug extending to a height within the first opening that is about the same as the elevational height of the second node. Such forms the construction of Fig. 11. [0068] The second surface is then activated to render the second surface suitable for electroless plating. Subsequently, a second conductive material is electroless plated within the first and second openings to form the construction of Fig. 12. The second conductive material within the first opening forms a second conductive material plug extending upwardly from the first conductive material plug, and the second material within the second opening forms a second conductive material plug extending upwardly from the second node. In the shown aspect of the invention, the first and second electroless plated materials are the same as one another, and specifically both correspond to material 80. However, it is to be understood that the invention also encompasses aspects in which the first and second electroless-plated materials are not compositionally the same as one another.
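The two-stage sequence of [0064]-[0068] amounts to a plating schedule: plate only the deep opening until both fronts are level, activate surface 63, then plate both openings to completion. A minimal sketch of the schedule; the plating rate and overburden are assumed, illustrative values:

```python
# Two-stage fill of [0064]-[0068]: stage 1 plates only opening 70 (surface
# 63 not yet suitable) until both fronts sit level with the electrode;
# activation then makes surface 63 plateable, and stage 2 fills both
# openings together, leaving caps of about the same size ([0065]).

rate = 0.1           # plating rate, microns per minute (assumed)
d70, d72 = 3.0, 1.0  # opening depths, microns ([0039])
overburden = 0.3     # excess plated past the fill point, microns (assumed)

t_stage1 = (d70 - d72) / rate         # plate 2.0 um in opening 70 only
t_stage2 = (d72 + overburden) / rate  # plate both openings to completion

print(f"stage 1: {t_stage1:.0f} min, stage 2: {t_stage2:.0f} min")
print(f"caps over openings 70 and 72: {overburden} um each")
```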
[0069] Fig. 13 illustrates generally, by way of example but not by way of limitation, an embodiment of a computer system 400 according to an aspect of the present invention. Computer system 400 includes a monitor 401 or other communication output device, a keyboard 402 or other communication input device, and a motherboard 404. Motherboard 404 can carry a microprocessor 406 or other data processing unit, and at least one memory device 408. Memory device 408 can comprise various aspects of the invention described above. Memory device 408 can comprise an array of memory cells, and such array can be coupled with addressing circuitry for accessing individual memory cells in the array. Further, the memory cell array can be coupled to a read circuit for reading data from the memory cells. The addressing and read circuitry can be utilized for conveying information between memory device 408 and processor 406. Such is illustrated in the block diagram of the motherboard 404 shown in Fig. 14. In such block diagram, the addressing circuitry is illustrated as 410 and the read circuitry is illustrated as 412. Various components of computer system 400, including processor 406, can comprise one or more of the memory constructions described previously in this disclosure. [0070] Processor device 406 can correspond to a processor module, and associated memory utilized with the module can comprise teachings of the present invention. [0071] Memory device 408 can correspond to a memory module. For example, single in-line memory modules (SIMMs) and dual in-line memory modules (DIMMs) may be used in implementations which utilize the teachings of the present invention. The memory device can be incorporated into any of a variety of designs which provide different methods of reading from and writing to memory cells of the device. One such method is the page mode operation. Page mode operations in a DRAM are defined by the method of accessing a row of a memory cell array and randomly accessing different columns of the array. Data stored at the row and column intersection can be read and output while that column is accessed. [0072] An alternate type of device is the extended data output (EDO) memory which allows data stored at a memory array address to be available as output after the addressed column has been closed. This memory can increase some communication speeds by allowing shorter access signals without reducing the time in which memory output data is available on a memory bus. Other alternative types of devices include SDRAM, DDR SDRAM, SLDRAM, VRAM and Direct RDRAM, as well as others such as SRAM or Flash memories. [0073] Memory device 408 can comprise memory formed in accordance with one or more aspects of the present invention. [0074] Fig. 15 illustrates a simplified block diagram of a high-level organization of various embodiments of an exemplary electronic system 700 of the present invention. System 700 can correspond to, for example, a computer system, a process control system, or any other system that employs a processor and associated memory.
Electronic system 700 has functional elements, including a processor or arithmetic/logic unit (ALU) 702, a control unit 704, a memory device unit 706 and an input/output (I/O) device 708. Generally, electronic system 700 will have a native set of instructions that specify operations to be performed on data by the processor 702 and other interactions between the processor 702, the memory device unit 706 and the I/O devices 708. The control unit 704 coordinates all operations of the processor 702, the memory device 706 and the I/O devices 708 by continuously cycling through a set of operations that cause instructions to be fetched from the memory device 706 and executed. In various embodiments, the memory device 706 includes, but is not limited to, random access memory (RAM) devices, read-only memory (ROM) devices, and peripheral devices such as a floppy disk drive and a compact disk (CD-ROM) drive. One of ordinary skill in the art will understand, upon reading and comprehending this disclosure, that any of the illustrated electrical components are capable of being fabricated to include memory constructions in accordance with various aspects of the present invention. [0075] Fig. 16 is a simplified block diagram of a high-level organization of various embodiments of an exemplary electronic system 800. The system 800 includes a memory device 802 that has an array of memory cells 804, address decoder 806, row access circuitry 808, column access circuitry 810, read/write control circuitry 812 for controlling operations, and input/output circuitry 814. The memory device 802 further includes power circuitry 816, and sensors 820, such as current sensors for determining whether a memory cell is in a low-threshold conducting state or in a high-threshold nonconducting state. The illustrated power circuitry 816 includes power supply circuitry 880, circuitry 882 for providing a reference voltage, circuitry 884 for providing the first wordline with pulses, circuitry 886 for providing the second wordline with pulses, and circuitry 888 for providing the bitline with pulses. The system 800 also includes a processor 822, or memory controller for memory accessing. [0076] The memory device 802 receives control signals 824 from the processor 822 over wiring or metallization lines. The memory device 802 is used to store data which is accessed via I/O lines. It will be appreciated by those skilled in the art that additional circuitry and control signals can be provided, and that the memory device 802 has been simplified to help focus on the invention. At least one of the processor 822 or memory device 802 can include a memory construction of the type described previously in this disclosure. [0077] The various illustrated systems of this disclosure are intended to provide a general understanding of various applications for the circuitry and structures of the present invention, and are not intended to serve as a complete description of all the elements and features of an electronic system using memory cells in accordance with aspects of the present invention. One of ordinary skill in the art will understand that the various electronic systems can be fabricated in single-package processing units, or even on a single semiconductor chip, in order to reduce the communication time between the processor and the memory device(s).
[0078] Applications for memory cells can include electronic systems for use in memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. Such circuitry can further be a subcomponent of a variety of electronic systems, such as a clock, a television, a cell phone, a personal computer, an automobile, an industrial control system, an aircraft, and others. [0079] It is noted that relative elevational relationships are utilized to describe the locations of various features to one another (e.g., upward, downward, etc. are utilized) within this disclosure. It is to be understood that such terms are used to express relative relations between the components only, and not to indicate a relationship of the components relative to an external frame of reference. Thus, for example, a feature described herein as projecting upwardly relative to another feature may in fact appear to extend downwardly to a viewer in an external frame of reference relative to the feature. |
A packaged semiconductor device (100) includes a semiconductor die (110) including a substrate (112) having a top side (113) including active circuitry (114) and a bottom side (116) with at least one back side metal layer (118) directly attached. A package (130) includes a molding material (132), with a die pad (125) and a plurality of leads (127) encapsulated within the molding material, wherein the leads include an exposed portion 127(a) that includes a bonding portion 127(a)(1). The top side of the semiconductor die is attached to the die pad, and the package includes a gap that exposes the back side metal layer along a bottom surface of the package. Bond wires couple pads on the top side of the semiconductor die to the leads. The bonding portions, the molding material along the bottom surface of the package, and the back side metal layer are all substantially planar to one another. |
CLAIMS What is claimed is: 1. A packaged semiconductor device, comprising: a semiconductor die comprising a substrate having a top side including active circuitry and a bottom side, and at least one back side metal layer on said bottom side of said substrate, wherein said back side metal layer is directly attached to said bottom side of said semiconductor die and an area of said back side metal layer matches an area of said bottom side of said semiconductor die; a package including a molding material having a die pad and a plurality of leads encapsulated within said molding material, wherein said plurality of leads include an exposed portion that includes a bonding portion; wherein said top side of said semiconductor die is attached to said die pad, and wherein said package includes a gap that exposes said back side metal layer along a bottom surface of said package; and bond wires coupling pads on said top side of said semiconductor die to said plurality of leads; wherein said bonding portions, said molding material along said bottom surface of said package, and said back side metal layer are all substantially planar to one another. 2. The device of claim 1, wherein said exposed portions extend laterally beyond said molding material and are bent beyond said molding material; and said bonding portions comprise distal feet. 3. The device of claim 1, wherein said plurality of leads comprises a plurality of perimeter terminating leads that do not extend beyond said molding material; and wherein said plurality of perimeter terminating leads provide said bonding portions. 4. The packaged semiconductor device of claim 1, wherein said back side metal layer comprises copper. 5. The packaged semiconductor device of claim 1, wherein said back side metal layer comprises a first metal layer on said bottom side of said semiconductor die and at least a second metal layer different from said first metal layer on said first metal layer. 6. The packaged semiconductor device of claim 5, wherein said first metal layer comprises titanium. 7. The packaged semiconductor device of claim 5, wherein said first metal layer or said second metal layer comprises nickel. 8. The packaged semiconductor device of claim 5, wherein said first metal layer comprises titanium, said second metal layer comprises nickel, and further comprising a third metal layer on said second metal layer comprising gold or silver. 9. The packaged semiconductor device of claim 1, wherein said molding material along said bottom surface of said package is planar throughout. 10. 
An electronic assembly, comprising: a packaged semiconductor device, comprising: a semiconductor die comprising a substrate having a top side including active circuitry and a bottom side, and at least one back side metal layer on said bottom side of said substrate, wherein said back side metal layer is directly attached to said bottom side of said semiconductor die and an area of said back side metal layer matches an area of said bottom side of said semiconductor die; a package including a molding material having a die pad and a plurality of leads encapsulated within said molding material, wherein said plurality of leads include an exposed portion that includes a bonding portion; wherein said top side of said semiconductor die is attached to said die pad, and wherein said package includes a gap that exposes said back side metal layer along a bottom surface of said package; and bond wires coupling pads on said top side of said semiconductor die to said plurality of leads; and wherein said bonding portions, said molding material along said bottom surface of said package, and said back side metal layer are all substantially planar to one another; a printed circuit board (PCB) including a plurality of surface pads; and direct solder connections from said back side metal layer and said bonding portions of said plurality of leads to ones of said plurality of surface pads on said PCB. 11. The electronic assembly of claim 10, wherein said PCB comprises a multi-layer PCB. 12. A method of forming an electronic assembly, comprising: providing a packaged semiconductor device comprising a semiconductor die including a substrate having a top side including active circuitry and a bottom side, and at least one back side metal layer on said bottom side of said substrate, wherein said back side metal layer is directly attached to said bottom side of said semiconductor die and an area of said back side metal layer matches an area of said bottom side of said semiconductor die; a package including a molding material having a die pad and a plurality of leads encapsulated within said molding material, wherein said plurality of leads include an exposed portion that includes a bonding portion; wherein said top side of said semiconductor die is attached to said die pad, wherein said package includes a gap that exposes said back side metal layer along a bottom surface of said package, and bond wires coupling pads on said top side of said semiconductor die to said plurality of leads, and wherein said bonding portions, said molding material along said bottom surface of said package, and said back side metal layer are all substantially planar to one another; and directly soldering said packaged semiconductor device to a printed circuit board (PCB) including a plurality of surface pads, wherein said back side metal layer and said bonding portions of said plurality of leads are soldered to ones of said plurality of surface pads on said PCB. 13. The method of claim 12, wherein said exposed portions extend laterally beyond said molding material and are bent beyond said molding material, and said bonding portions comprise distal feet. 14. The method of claim 12, wherein said plurality of leads comprises a plurality of perimeter terminating leads that do not extend beyond said molding material, and wherein said plurality of perimeter terminating leads provide said bonding portions. 15.
The method of claim 12, wherein said back side metal layer comprises a first metal layer on said bottom side of said semiconductor die and at least a second metal layer different from said first metal layer on said first metal layer. |
EXPOSED DIE PACKAGE FOR DIRECT SURFACE MOUNTING [0001] Disclosed embodiments relate to packaged semiconductor devices including die with exposed substrates (e.g., silicon) and electronic assemblies including such packaged semiconductor devices. BACKGROUND [0002] For a semiconductor package that includes at least one semiconductor die therein, particularly for power integrated circuits (ICs), the problem of heat dissipation is an important issue. A semiconductor package with poor heat dissipation may not just produce errors, but may also reduce product reliability and greatly increase manufacturing cost. [0003] One known power package that includes enhanced cooling is an exposed heat slug package that comprises a heat slug (e.g., copper slug) that is exposed on the bottom side of the package. The die is bonded face (active top side) up on top of the heat slug with a thermally conductive die attach material. Another known power package is an exposed silicon package that flip chip mounts the semiconductor die on a die pad and exposes the bottom side of the semiconductor die. A heat sink is then thermally coupled to the bottom side of the semiconductor die using a thermal grease. [0004] Both of these known power packages have significant thermal resistance that reduces cooling performance, due to the multiple interfaces in the cooling path that increase the thermal resistance of the package. For example, the exposed heat slug package includes the semiconductor substrate (e.g., silicon), the die attach material, the heat slug and solder in the cooling path from the top side of the semiconductor die to an underlying workpiece, such as a printed circuit board (PCB). Similarly, the exposed silicon package includes the substrate, thermal grease and the heat sink in the cooling path from the top side of the semiconductor die to the atmosphere. SUMMARY [0005] Disclosed embodiments recognize conventional packaged semiconductor devices, particularly high power semiconductor devices, can reach high junction temperatures during their operation due to high thermal resistance resulting from a large thermal resistance drop across multiple interfaces that interferes with heat dissipation from the packaged device to its heat sink. Having the bonding portion of the leads, the bottom surface of the package, and the back side metal of the semiconductor die all be substantially planar to one another allows direct soldering of the packaged semiconductor device to a workpiece such as a printed circuit board (PCB) and, as a result, improved heat dissipation to the workpiece (e.g., PCB) due to a reduction in interfaces in the thermal cooling path to the workpiece. [0006] One disclosed embodiment comprises a packaged semiconductor device that includes a semiconductor die comprising a substrate having a top side including active circuitry and a bottom side, and at least one back side metal layer that is directly attached to the bottom side. A package includes a molding material, with a die pad and a plurality of leads encapsulated within the molding material, wherein the leads include an exposed portion that includes a bonding portion. The top side of the semiconductor die is attached to the die pad, and the package includes a gap that exposes the back side metal layer along a bottom surface of the package. Bond wires couple pads on the top side of the semiconductor die to the leads.
The bonding portions, the bottom surface of the package, and the back side metal layer are all substantially planar to one another. [0007] Another disclosed embodiment comprises an electronic assembly comprising a disclosed packaged semiconductor device and a PCB including a plurality of surface pads. Direct solder connections are provided from the back side metal layer and the bonding portions of the leads to the surface pads on the PCB. Direct solderability provided by the disclosed packaged semiconductor device reduces assembly cost as compared to conventional assembly, such as by eliminating the need for thermal grease and heat sinks, and added processing such as attaching a heat sink. Moreover, direct soldering reduces board space for PCB assemblies, and eases PCB layout by enabling use of surface mount device (SMD) rules. BRIEF DESCRIPTION OF THE DRAWINGS [0008] FIG. 1A is a cross-sectional depiction of an example packaged semiconductor device comprising a leaded package, with back side metal of the semiconductor die exposed along a bottom surface of the package for direct surface mounting, according to an example embodiment. [0009] FIG. 1B is a cross-sectional depiction of an example packaged semiconductor device comprising a leadless package, with back side metal of the semiconductor die exposed along a bottom surface of the package, according to an example embodiment. [0010] FIG. 2 is a cross-sectional depiction of an example electronic assembly comprising the packaged semiconductor device shown in FIG. 1A surface mounted using a direct solder connection to a multi-layer PCB, according to an example embodiment. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0011] FIG. 1A illustrates an example packaged semiconductor device 100 comprising a leaded package, with back side metal of the semiconductor die 110 exposed along a bottom surface of the package for direct surface mounting, according to an example embodiment. The semiconductor die 110 comprises a substrate (e.g., silicon or silicon/germanium) 112 having a top side 113 including active circuitry 114 and a bottom side 116, and at least one back side metal layer 118 on the bottom side 116 of the substrate 112. The active circuitry 114 on top side surface 113 of semiconductor die 110 is configured to provide an IC circuit function. The back side metal layer 118 is directly attached to the bottom side 116 of the semiconductor die 110. [0012] A variety of back side metal layers 118 can be used. In one embodiment, the back side metal layer 118 is a single metal layer, such as a copper layer. The thickness of the copper layer is typically 3 μm to 6 μm, but can be thinner or thicker than this range. One example process involves forming a thin seed layer before forming the copper layer. In another embodiment, the back side metal layer comprises a first metal layer on the bottom side 116 of the semiconductor die 110 and a multi-layer metal stack comprising at least a second metal layer different from the first metal layer on the first metal layer. For example, the first metal layer can comprise titanium. Titanium is known to provide good adhesion with silicon and other semiconductors and thereby to create an effective "adhesion layer". Other embodiments may comprise tantalum, palladium, vanadium or molybdenum as the first layer in contact with the bottom side 116 of the semiconductor die 110.
Like titanium, these metals provide good adhesion to silicon because they can form intermediate metal-silicides with silicon at relatively low temperatures. Some examples of specific multi-layer back side metal stacks include Cu on Ti, Ag on Ti, and stacks including first, second and third metal layers, such as Au on Ni on Ti, and Ag on Ni on Ti. A nickel layer can provide protection for underlying metal layers from mechanical scratching and corrosion. [0013] In other example embodiments the first metal layer or second metal layer can comprise nickel. For example, Ag on Cr on Ni, or Pd on Ni on Au. Chromium can act as a barrier layer to stop metal diffusion into the substrate, provide a stress buffer layer, and also act to prevent fracturing inside the metal stack due to its high fracture strength. Typical thicknesses for the multi-layer metal stack can comprise 1 to 2 kÅ for the first metal layer, 2 to 4 kÅ for the second metal layer and 10 to 20 kÅ for the third metal layer. In the case of Au for the third metal layer, the Au thickness can be significantly thicker than 20 kÅ. However, the respective metal layer thicknesses can be thinner or thicker than these ranges. [0014] An area of the back side metal layer 118 matches an area of the bottom side 116 of the semiconductor die 110. As used herein "directly attached" refers to a connection that does not include any intervening layers. Back side metal layer 118 matching an area of the bottom side 116 of the semiconductor die 110 is provided by the back side metal layer 118 being on the bottom side 116 of the semiconductor die 110 before singulation (e.g., back side metal layer 118 is deposited on the bottom side 116 of the substrate 112 while the semiconductor die 110 are in wafer form), so that the singulation process cuts the wafer into a plurality of semiconductor die each having an area that is constant during the cutting process through both the back side metal 118 and the substrate 112. [0015] The package 130 in FIG. 1A is shown as a leaded package including a molding material 132, such as a standard epoxy-resin package material, having a die pad 125 and a plurality of leads 127 that include a portion encapsulated within the molding material 132 and exposed portions 127(a), on which the shown leads are bent, including a bonding portion 127(a)(1) shown as feet 127(a)(1). [0016] The top side 113 of the semiconductor die 110 is attached to the die pad 125 by a die attach material 126, such as an epoxy. The back side metal layer 118 is exposed by a gap in the molding material 132 along a portion of the bottom surface 130(a) of the package 130. The package can be molded with a gap in the molding material so that the back side metal layer 118 is exposed. Back side metal layer 118 allows packaged semiconductor device 100 to be directly soldered to a package substrate, such as a PCB. [0017] Directly soldering the back side metal layer 118 of packaged semiconductor device 100 to a package substrate (e.g., a PCB) provides good thermal transfer from the semiconductor die 110 to the package substrate. In this directly soldered arrangement, the thermal dissipation path has a minimum number of interfaces, including from the active circuitry 114 on the top side 113 of the semiconductor die 110 through the thickness of the substrate 112 and a tiny contribution across the back side metal 118, so that thermal dissipation for packaged semiconductor device 100 to the underlying workpiece is generally set by the thermal conductivity of the substrate 112 for the semiconductor die 110, or about 140 W/m-K for a silicon substrate. In one embodiment, the semiconductor die 110 is a thinned die, such as 40 to 100 μm in thickness, to further enhance thermal transfer from the packaged semiconductor device to the workpiece.
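The advantage asserted in [0017] can be sized with a one-dimensional estimate, R = t / (kA) per layer. In the sketch below, only the roughly 140 W/m-K silicon conductivity and the 100 μm thinned-die thickness come from the text; the die area and the epoxy die-attach layer used for comparison are assumed, illustrative values:

```python
# One-dimensional series thermal resistance R = t / (k * A) for the cooling
# path of [0017]. The ~140 W/m-K silicon conductivity and the 100-micron
# thinned die come from the text; the die area and the epoxy die-attach
# layer shown for comparison are assumed, illustrative values.

def r_th(thickness_m, k_w_per_m_k, area_m2):
    """Thermal resistance (K/W) of one layer, heat flowing through its thickness."""
    return thickness_m / (k_w_per_m_k * area_m2)

area = 4e-6  # die area: 2 mm x 2 mm (assumed)

r_silicon = r_th(100e-6, 140.0, area)  # thinned substrate 112
r_die_attach = r_th(25e-6, 2.0, area)  # epoxy die attach (assumed properties)

print(f"silicon substrate: {r_silicon:.2f} K/W")    # ~0.18 K/W
print(f"epoxy die attach:  {r_die_attach:.2f} K/W") # ~3.1 K/W
```

The comparison suggests why removing even a single adhesive interface from the path, as the direct-solder arrangement does, can dominate the junction-to-board thermal budget.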
In this directly soldered arrangement, the thermal dissipation path has a minimum number of interfaces, including from the active devices 114 on the top side 113 of the semiconductor die 110 through the thickness of the substrate 112 and a tiny contribution across the back side metal 118, so that thermal dissipation for packaged semiconductor device 100 to the underlying workpiece is generally set by the thermal conductivity of the substrate 112 for the semiconductor die 110, or about 140 W/m-K for a silicon substrate. In one embodiment, the semiconductor die 110 is a thinned die, such as 40 to 100 μm in thickness, to further enhance thermal transfer from the packaged semiconductor device to the workpiece. [0018] In addition, direct solderability provided by packaged semiconductor device 100 reduces assembly cost as compared to conventional assembly, such as by eliminating the need for thermal grease and heat sinks, and added processing such as attaching a heat sink. Moreover, direct soldering reduces board space for PCB assemblies, and eases PCB layout by enabling use of surface mount device (SMD) rules. [0019] Bond wires 136 are shown for coupling bond pads 119 on the top side 113 of the semiconductor die 110 to the plurality of leads 127. The feet 127(a)(1), the bottom surface 130(a) of the package 130, and the back side metal layer 118 are all substantially planar to one another. As used herein, "substantially planar" refers to a maximum range between the lower edges of the bonding portions of the leads for bonding to the workpiece (e.g., PCB), such as the feet 127(a)(1) shown in FIG. 1A, the bottom surface 130(a) of the package 130, and the back side metal layer 118, all being within a range of +/- 0.25 mm (i.e., a maximum 0.5 mm tilt). This disclosed "substantially planar" arrangement facilitates direct surface mounting, such as for the example case where the soldering process comprises applying a solder paste onto a screen (mask) having a thickness of 0.3 to 0.5 mm on a workpiece such as a PCB. Moreover, as shown in FIG. 1A, the molding material 132 along the full length of the bottom surface 130(a) of the package 130 can also be substantially planar throughout (i.e., no indentation regions). [0020] FIG. 1B illustrates an example packaged semiconductor device 150 comprising a leadless package 180, with back side metal 118 of the semiconductor die 110 exposed along a bottom surface 180(a) of the package for direct surface mounting, according to an example embodiment. Package 180 includes a die pad 125 and a plurality of exposed portions shown as perimeter terminating leads 181 (also sometimes referred to as perimeter lands) that do not extend beyond the molding material 132. Perimeter terminating leads 181 are substantially planar to the bottom surface 180(a) of the leadless package 180, and the back side metal 118. Leadless package 180 can comprise a variety of Flat No leads packages such as QFN (Quad Flat No leads) and DFN (Dual Flat No leads). Packaged semiconductor device 150 provides the direct solderability, high level of thermal transfer, reduced assembly cost, reduced board space for PCB assemblies, and the same ease of board layout provided by packaged semiconductor device 100 described above. [0021] FIG. 2 illustrates an example electronic assembly 200, comprising the packaged semiconductor device 100 shown in FIG. 1A surface mounted using a direct solder connection to a multi-layer PCB 210 comprising at least one internal metal (e.g., copper) plane 211 and a plurality of surface pads 215.
Direct solder connections 216 are shown for coupling the back side metal layer 118 and the feet 127(a)(1) of the packaged semiconductor device 100 to ones of the surface pads 215 on the PCB 210, such as copper surface pads. [0022] Another disclosed embodiment is a method of forming an electronic assembly. A disclosed packaged semiconductor device, such as packaged semiconductor devices 100 or 150 described above, is directly soldered to a workpiece such as a PCB including a plurality of surface pads. The back side metal layer and the bonding portions of the plurality of leads are directly soldered to substrate pads on the PCB. [0023] The active circuitry formed on the semiconductor wafers and the semiconductor die therefrom comprise circuit elements that may generally include transistors, diodes, capacitors, and resistors, as well as signal lines and other electrical conductors that interconnect the various circuit elements to provide an IC circuit function. As used herein, "provide an IC circuit function" refers to circuit functions from ICs that, for example, may include an application specific integrated circuit (ASIC), a digital signal processor, a radio frequency chip, a memory, a microcontroller and a system-on-a-chip or a combination thereof. [0024] Disclosed embodiments can be integrated into a variety of assembly flows to form a variety of different IC devices and related products. The IC assembly can comprise single semiconductor die or multiple die, such as PoP configurations comprising a plurality of stacked semiconductor die. A variety of package substrates may be used. The semiconductor die may include various elements therein and/or layers thereon, including barrier layers, dielectric layers, device structures, active elements and passive elements including source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive vias, etc. Moreover, the semiconductor die can be formed from a variety of processes including bipolar, CMOS, BiCMOS and MEMS. [0025] Those skilled in the art to which this disclosure relates will appreciate that modifications of the described embodiments and many other embodiments are possible within the scope of the claimed invention. |
Methods, devices, systems, and non-transitory processor-readable storage media for a multi-processor computing device to schedule multi-versioned tasks on a plurality of processing units. An embodiment method may include processor-executable operations for enqueuing a specialized version of a multi-versioned task in a task queue for each of the plurality of processing units, wherein each specialized version is configured to be executed by a different processing unit of the plurality of processing units, providing ownership over the multi-versioned task to a first processing unit when the first processing unit is available to immediately execute a corresponding specialized version of the multi-versioned task, and discarding other specialized versions of the multi-versioned task in response to providing ownership over the multi-versioned task to the first processing unit. Various operations of the method may be performed via a runtime functionality. |
CLAIMS What is claimed is: 1. A method for a multi-processor computing device to schedule multi-versioned tasks on a plurality of processing units, comprising: enqueuing, via a processor of the plurality of processing units, a specialized version of a multi-versioned task in a task queue for each of the plurality of processing units, wherein each specialized version is configured to be executed by a different processing unit of the plurality of processing units; providing, via the processor, ownership over the multi-versioned task to a first processing unit of the plurality of processing units, wherein the first processing unit is available to immediately execute a first specialized version of the multi-versioned task before other processing units of the plurality of processing units are available to execute other specialized versions of the multi-versioned task; and discarding, via the processor, the other specialized versions of the multi-versioned task in response to providing the ownership over the multi-versioned task to the first processing unit. 2. The method of claim 1, wherein the multi-processor computing device is executing a runtime functionality configured to schedule the plurality of processing units. 3. The method of claim 1, wherein enqueuing, via the processor, the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units comprises: enqueuing, via the processor, a pointer associated with the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units. 4. The method of claim 3, wherein each pointer includes an identification of the multi-versioned task and the specialized version of the multi-versioned task. 5. The method of claim 4, wherein the identification of the specialized version of the multi-versioned task is included in a lowest four bits of each pointer. 6. The method of claim 1, wherein providing, via the processor, the ownership over the multi-versioned task to the first processing unit of the plurality of processing units comprises storing data in association with the multi-versioned task. 7. The method of claim 1, further comprising: determining, via the processor, whether a next task from the task queue of the first processing unit is associated with the multi-versioned task; and requesting, via the processor, the ownership over the multi-versioned task for the first processing unit in response to determining that the next task is associated with the multi-versioned task. 8. The method of claim 7, further comprising: acquiring, via the processor, the ownership over the multi-versioned task for the first processing unit in response to requesting the ownership; and executing, via the first processing unit, the next task in response to acquiring the ownership over the multi-versioned task for the first processing unit. 9. The method of claim 7, wherein determining, via the processor, whether the next task from the task queue of the first processing unit is associated with the multi-versioned task comprises: obtaining, via the processor, an identifier by applying a bit mask to a pointer from the task queue of the first processing unit; and determining, via the processor, that the next task is associated with the multi-versioned task in response to determining that the identifier corresponds to the first specialized version of the multi-versioned task. 10.
The method of claim 7, wherein determining, via the processor, whether the next task from the task queue of the first processing unit is associated with the multi-versioned task comprises: retrieving, via the processor, a pointer from the task queue of the first processing unit, wherein the pointer is a common multi-versioned task pointer placed in the task queue for each of the plurality of processing units; determining, via the processor, whether the pointer is associated with a list of specialized versions of the multi-versioned task; and identifying, via the processor, the first specialized version of the multi-versioned task from the list of specialized versions of the multi-versioned task in response to determining that the pointer is associated with the list of specialized versions of the multi-versioned task, wherein the first specialized version is the next task to be executed by the first processing unit. 11. The method of claim 7, further comprising: executing, via the first processing unit, the next task in response to determining that the next task is not associated with the multi-versioned task. 12. The method of claim 7, further comprising: determining, via the processor, that the ownership over the multi-versioned task has been acquired by a second processing unit in response to requesting the ownership for the first processing unit; and discarding, via the processor, the next task in response to determining that the ownership over the multi-versioned task has been acquired by the second processing unit. 13. The method of claim 7, further comprising: determining, via the processor, whether there is a priority task within the task queue associated with the first processing unit; and executing, via the first processing unit, the priority task, wherein requesting, via the processor, the ownership over the multi-versioned task for the first processing unit in response to determining that the next task is associated with the multi-versioned task comprises requesting, via the processor, the ownership over the multi-versioned task for the first processing unit in response to executing the priority task. 14. The method of claim 1, wherein the processor is the first processing unit. 15. A multi-processor computing device, comprising: a memory; and a plurality of processing units coupled to the memory, wherein a processor of the plurality of processing units is configured with processor-executable instructions to perform operations comprising: enqueuing a specialized version of a multi-versioned task in a task queue for each of the plurality of processing units, wherein each specialized version is configured to be executed by a different processing unit of the plurality of processing units; providing ownership over the multi-versioned task to a first processing unit of the plurality of processing units, wherein the first processing unit is available to immediately execute a first specialized version of the multi-versioned task before other processing units of the plurality of processing units are available to execute other specialized versions of the multi-versioned task; and discarding the other specialized versions of the multi-versioned task in response to providing the ownership over the multi-versioned task to the first processing unit. 16. The multi-processor computing device of claim 15, wherein the processor is configured to perform the operations when the multi-processor computing device is executing a runtime functionality configured to schedule the plurality of processing units. 17.
The multi-processor computing device of claim 15, wherein the processor is configured with processor-executable instructions to perform operations such that enqueuing the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units comprises: enqueuing a pointer associated with the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units. 18. The multi-processor computing device of claim 17, wherein the processor is configured with processor-executable instructions to perform operations such that each pointer includes an identification of the multi-versioned task and the specialized version of the multi-versioned task. 19. The multi-processor computing device of claim 18, wherein the processor is configured with processor-executable instructions to perform operations such that the identification of the specialized version of the multi-versioned task is included in a lowest four bits of each pointer. 20. The multi-processor computing device of claim 15, wherein the processor is configured with processor-executable instructions to perform operations such that providing the ownership over the multi-versioned task to the first processing unit of the plurality of processing units comprises storing data in association with the multi-versioned task. 21. The multi-processor computing device of claim 15, wherein the processor is configured with processor-executable instructions to perform operations further comprising: determining whether a next task from the task queue of the first processing unit is associated with the multi-versioned task; and requesting the ownership over the multi-versioned task for the first processing unit in response to determining that the next task is associated with the multi-versioned task. 22. The multi-processor computing device of claim 21, wherein the processor is configured with processor-executable instructions to perform operations further comprising: acquiring the ownership over the multi-versioned task for the first processing unit in response to requesting the ownership; and executing the next task via the first processing unit in response to acquiring the ownership over the multi-versioned task. 23. The multi-processor computing device of claim 21, wherein the processor is configured with processor-executable instructions to perform operations such that determining whether the next task from the task queue of the first processing unit is associated with the multi-versioned task comprises: obtaining an identifier by applying a bit mask to a pointer from the task queue of the first processing unit; and determining that the next task is associated with the multi-versioned task in response to determining that the identifier corresponds to the first specialized version of the multi-versioned task. 24.
The multi-processor computing device of claim 21, wherein the processor is configured with processor-executable instructions to perform operations such that determining whether the next task from the task queue of the first processing unit is associated with the multi-versioned task comprises: retrieving a pointer from the task queue of the first processing unit, wherein the pointer is a common multi-versioned task pointer placed in the task queue for each of the plurality of processing units; determining whether the pointer is associated with a list of specialized versions of the multi-versioned task; and identifying the first specialized version of the multi-versioned task from the list of specialized versions of the multi-versioned task in response to determining that the pointer is associated with the list of specialized versions of the multi-versioned task, wherein the first specialized version is the next task to be executed by the first processing unit. 25. The multi-processor computing device of claim 21, wherein the processor is configured with processor-executable instructions to perform operations further comprising: executing the next task via the first processing unit in response to determining that the next task is not associated with the multi-versioned task. 26. The multi-processor computing device of claim 21, wherein the processor is configured with processor-executable instructions to perform operations further comprising: determining that the ownership over the multi-versioned task has been acquired by a second processing unit in response to requesting the ownership over the multi-versioned task for the first processing unit; and discarding the next task in response to determining that the ownership over the multi-versioned task has been acquired by the second processing unit. 27. The multi-processor computing device of claim 21, wherein the processor is configured with processor-executable instructions to perform operations further comprising: determining whether there is a priority task within the task queue associated with the first processing unit; and executing the priority task via the first processing unit, wherein requesting the ownership over the multi-versioned task for the first processing unit comprises requesting the ownership over the multi-versioned task for the first processing unit in response to executing the priority task. 28. A non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a multi-processor computing device to perform operations comprising: enqueuing a specialized version of a multi-versioned task in a task queue for each of a plurality of processing units, wherein each specialized version is configured to be executed by a different processing unit of the plurality of processing units; providing ownership over the multi-versioned task to a first processing unit of the plurality of processing units, wherein the first processing unit is available to immediately execute a first specialized version of the multi-versioned task before other processing units of the plurality of processing units are available to execute other specialized versions of the multi-versioned task; and discarding the other specialized versions of the multi-versioned task in response to providing the ownership over the multi-versioned task to the first processing unit. 29.
The non-transitory processor-readable storage medium of claim 28, wherein the stored processor-executable instructions are configured to cause the processor of the multi-processor computing device to perform operations such that enqueuing the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units comprises: enqueuing a pointer associated with the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units. 30. A multi-processor computing device, comprising: means for enqueuing a specialized version of a multi-versioned task in a task queue for each of a plurality of processing units, wherein each specialized version is configured to be executed by a different processing unit of the plurality of processing units; means for providing ownership over the multi-versioned task to a first processing unit of the plurality of processing units, wherein the first processing unit is available to immediately execute a first specialized version of the multi-versioned task before other processing units of the plurality of processing units are available to execute other specialized versions of the multi-versioned task; and means for discarding the other specialized versions of the multi-versioned task in response to providing ownership over the multi-versioned task to the first processing unit. |
TITLE Efficient Scheduling of Multi-Versioned Tasks BACKGROUND[0001] Parallel programming is a technique for computing devices to split computations into small chunks of work (referred to as tasks) in order to provide responsive and high performance software. In a multi-core or multi-processor computing device (e.g., a heterogeneous system-on-chip (SOC)), different tasks may be assigned to (or offloaded to) various processing units of the device, with some tasks being specified to run after others finish due to task dependencies. Typically, a runtime engine (or task scheduler) determines to which processing unit a task may be assigned, and such determinations may typically be based on various device, processing unit, and/or task characteristics or conditions.[0002] Some tasks may be directed to or designed for particular processing units. For example, a first task may be designed for execution by a central processing unit (CPU), a second task may be designed for execution on a graphics processing unit (GPU), and a third task may be designed for execution on a digital signal processor (DSP). Tasks meant for different processing units are often written in different programming languages or using different specifications. For example, the code to implement a vector addition calculation as a CPU task and the code to implement a matrix multiplication calculation as a GPU task may use different languages and/or syntax. To capitalize upon the different processing units in a computing device, different versions of common general-purpose tasks may be concurrently supported. A "multi-versioned" task may be associated with or otherwise include multiple implementations of the same logical function or routine, with each implementation specialized for execution by a particular processing unit. For example, a vector addition calculation may be implemented as a CPU task and a GPU task that both use different languages and/or syntax. SUMMARY[0003] Various embodiments provide methods, devices, systems, and non-transitory processor-readable storage media for a multi-processor computing device to schedule multi-versioned tasks on a plurality of processing units. In some embodiments, a method performed by a processor of a multi-processor computing device may include enqueuing a specialized version of a multi-versioned task in a task queue for each of the plurality of processing units in which each specialized version may be configured to be executed by a different processing unit of the plurality of processing units, providing ownership over the multi-versioned task to a first processing unit of the plurality of processing units in which the first processing unit may be available to immediately execute a first specialized version of the multi-versioned task before other processing units of the plurality of processing units are available to execute other specialized versions of the multi-versioned task, and discarding the other specialized versions of the multi-versioned task in response to providing the ownership over the multi-versioned task to the first processing unit.
In some embodiments, the multi-processor computing device may be executing a runtime functionality configured to schedule the plurality of processing units.[0004] In some embodiments, enqueuing the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units may include enqueuing a pointer associated with the specialized version of the multi-versioned task in the task queue for each of the plurality of processing units. In some embodiments, each pointer may include an identification of the multi-versioned task and the specialized version of the multi-versioned task. In some embodiments, the identification of the specialized version of the multi-versioned task may be included in a lowest four bits of each pointer. In some embodiments, providing the ownership over the multi-versioned task to the first processing unit of the plurality of processing units may include storing data in association with the multi-versioned task. [0005] Some embodiments may further include determining whether a next task from the task queue of the first processing unit is associated with the multi-versioned task, and requesting the ownership over the multi-versioned task for the first processing unit in response to determining that the next task is associated with the multi-versioned task. Some embodiments may further include acquiring the ownership over the multi-versioned task for the first processing unit in response to requesting the ownership, and executing the next task in response to acquiring the ownership over the multi-versioned task for the first processing unit. In some embodiments, determining whether the next task from the task queue of the first processing unit is associated with the multi-versioned task may include obtaining an identifier by applying a bit mask to a pointer from the task queue of the first processing unit, and determining that the next task is associated with the multi-versioned task in response to determining that the identifier corresponds to the first specialized version of the multi-versioned task.[0006] In some embodiments, determining whether the next task from the task queue of the first processing unit is associated with the multi-versioned task may include retrieving a pointer from the task queue of the first processing unit, wherein the pointer is a common multi-versioned task pointer placed in the task queue for each of the plurality of processing units, determining whether the pointer is associated with a list of specialized versions of the multi-versioned task, and identifying the first specialized version of the multi-versioned task from the list of specialized versions of the multi-versioned task in response to determining that the pointer is associated with the list of specialized versions of the multi-versioned task, wherein the first specialized version may be the next task to be executed by the first processing unit.[0007] Some embodiments may further include executing the next task in response to determining that the next task is not associated with the multi-versioned task. Some embodiments may further include determining that the ownership over the multi-versioned task has been acquired by a second processing unit in response to requesting the ownership for the first processing unit, and discarding the next task in response to determining that the ownership over the multi-versioned task has been acquired by the second processing unit.
Some embodiments may further include determining whether there is a priority task within the task queue associated with the first processing unit, and executing, via the first processing unit, the priority task, in which requesting the ownership over the multi-versioned task for the first processing unit in response to determining that the next task is associated with the multi-versioned task may include requesting the ownership over the multi-versioned task for the first processing unit in response to executing the priority task. In some embodiments, the processor may be the first processing unit.[0008] Further embodiments include a computing device configured with processor-executable instructions for performing operations of the methods described above. Further embodiments include a computing device including means for performing functions of the methods described above. Further embodiments include a non-transitory processor-readable medium on which is stored processor-executable instructions configured to cause a computing device to perform operations of the methods described above. BRIEF DESCRIPTION OF THE DRAWINGS[0009] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the claims.[0010] FIG. 1 is a component block diagram illustrating task queues and processing units of an exemplary multi-processor computing device (e.g., a heterogeneous system-on-chip (SoC)) suitable for use in various embodiments.[0011] FIGS. 2A-2B are component block diagrams illustrating conventional early-binding of specialized versions of a multi-versioned task. [0012] FIGS. 3A-3C are component block diagrams illustrating exemplary late-binding of specialized versions of a multi-versioned task by a multi-processor computing device according to various embodiments.[0013] FIG. 4A is a process flow diagram illustrating an embodiment method performed by a multi-processor computing device to schedule specialized versions of multi-versioned tasks.[0014] FIG. 4B is a process flow diagram illustrating an embodiment method performed by a multi-processor computing device to manage the performance of multi-versioned tasks.[0015] FIGS. 5A-5B are diagrams illustrating pseudocode for routines performed by a multi-processor computing device to create and schedule (or dispatch) specialized versions of multi-versioned tasks according to various embodiments.[0016] FIG. 6 is a process flow diagram illustrating an embodiment method performed by a multi-processor computing device to execute specialized versions of multi-versioned tasks according to various embodiments.[0017] FIG. 7 is a diagram illustrating pseudocode for routines performed by a multi-processor computing device to execute specialized versions of multi-versioned tasks according to various embodiments.[0018] FIG. 8 is a process flow diagram illustrating an embodiment method performed by a multi-processor computing device to execute priority tasks and specialized versions of multi-versioned tasks.[0019] FIG. 9 is a component block diagram of a multi-processor computing device suitable for use in some embodiments. DETAILED DESCRIPTION[0020] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the embodiments or the claims.[0021] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations.[0022] The term "computing device" is used herein to refer to an electronic device equipped with at least a processor. Examples of computing devices may include mobile devices (e.g., cellular telephones, wearable devices, smart-phones, web-pads, tablet computers, Internet enabled cellular telephones, Wi-Fi® enabled electronic devices, personal data assistants (PDAs), laptop computers, etc.), personal computers, and server computing devices. In various embodiments, computing devices may be configured with various memory and/or data storage as well as networking capabilities, such as network transceiver(s) and antenna(s) configured to establish a wide area network (WAN) connection (e.g., a cellular network connection, etc.) and/or a local area network (LAN) connection (e.g., a wired/wireless connection to the Internet via a Wi-Fi® router, etc.).[0023] The term "multi-processor computing device" is used herein to refer to computing devices configured with two or more processing units. Multi-processor computing devices may execute various tasks (e.g., routines, functions, calculations, instruction sets, etc.) using two or more processing units. For example, a multi-processor computing device may be a heterogeneous computing device (e.g., a system-on-chip (SoC)) with different processing units each configured to perform specialized and/or general-purpose workloads. Such processing units may include various processor devices, a core, a plurality of cores, etc. For example, processing units of a multi-processor computing device may include an application processor(s) (e.g., a CPU) and/or specialized processing devices, such as a GPU and a DSP, any of which may include one or more internal cores.[0024] Various embodiments provide methods, devices, systems, and non-transitory processor-readable storage media for efficiently scheduling and executing particular (or specialized) versions of multi-versioned tasks by a multi-processor computing device. As an overview of the various embodiments, when a multi-versioned task (e.g., a general-purpose task having more than one supported implementation or version) is to be executed, a specialized version of the multi-versioned task may be enqueued for each supported processing unit (e.g., CPU, GPU, DSP, etc.). For example, if multiple versions of a matrix multiplication general-purpose task are available for use with a CPU and a GPU, the computing device may enqueue the CPU specialized version in the CPU's task queue as well as enqueue the GPU specialized version in the GPU's task queue. Such enqueuing may include placing a pointer in the task queue for each processing unit having a supported version (e.g., a pointer indicating both the multi-versioned task and the specialized version). As the processing units execute tasks within corresponding task queues, eventually a first processing unit may become available to immediately execute a first specialized version of the multi-versioned task.
The first processing unit may acquire ownership over the multi-versioned task, such as by making an application programmer interface (API) call for requesting ownership over the multi-versioned task. Once acquired by the first processing unit, any other processing unit subsequently requesting ownership over the particular multi-versioned task may not receive the ownership and thus may not execute corresponding specialized versions of the multi-versioned task. Instead, these other processing units may simply dequeue and discard specialized version pointers and proceed to process other tasks without waiting. [0025] With the multi-versioned task owned by the first processing unit, the multi-processor computing device may configure the multi-versioned task to finish after the processing unit completes execution of the first specialized version. For example, the first processing unit may execute a "finish_after()" API call. Such a "finish after" operation may guarantee that the multi-versioned task is marked as finished in response to the first processing unit finishing execution of the first specialized version. This may ensure that the task dependencies, waits, and lifetime of all versions of the multi-versioned task are tied together. In other words, the main multi-versioned task may be automatically fulfilled by the completion of one specialized version. In this way, "late-binding" techniques are provided to allow multi-versioned tasks to be completed by the processing units that are the fastest to be available to actually execute specialized versions.[0026] In some embodiments, there may be no need to explicitly place any indication of specialized versions in task queues of processing units. Instead, the multi-processor computing device may implement multi-versioned techniques by placing a common multi-versioned task pointer in the task queues of all processing units supported by the multi-versioned task (i.e., all processing units for which there are specialized versions of the multi-versioned task). For example, a DSP may dequeue a task pointer and acquire ownership over the task. The DSP may check whether the task has alternatives (i.e., whether there is more than one specialized version of the task). If there are alternatives, the task may be identified as a multi-versioned task, and the DSP may scan a list of alternatives (or specialized versions) of the multi-versioned task and execute a corresponding specialized version for DSPs.[0027] In some embodiments, the multi-processor computing device may perform an algorithm that determines whether a task to be executed is a multi-versioned task. If a task is not a multi-versioned task, the task may be enqueued in a processing unit's task queue for typical execution. However, if the task is a multi-versioned task, the multi-processor computing device may create pointers for all supported (or requested) specialized versions of the multi-versioned task (e.g., one for GPU, CPU, DSP, etc.), placing the pointers in task queues corresponding to the appropriate processing units. For example, the pointer for a DSP version of a matrix multiplication task may be entered into the DSP's task queue, etc. In some cases, processing unit identities or other version associations may be stored in the pointers themselves, such as encoding core identifiers (IDs) in the lowest four (4) bits of a pointer.
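To make the pointer-encoding idea concrete, the following is a minimal illustrative sketch in Python; it is not code from the patent. It assumes task objects are aligned to at least 16 bytes so the lowest four bits of a task address are otherwise zero, and the names VERSION_MASK, tag_pointer, and untag_pointer are hypothetical.

    VERSION_MASK = 0xF  # the lowest four (4) bits carry the specialized-version ID

    def tag_pointer(task_addr, version_id):
        # task_addr is assumed to be 16-byte aligned, so its low four bits are
        # free; version_id selects which specialized version this entry names.
        assert task_addr & VERSION_MASK == 0 and 0 <= version_id <= 15
        return task_addr | version_id

    def untag_pointer(tagged_ptr):
        # Recover the base task address and the version ID by applying a bit
        # mask, mirroring the masking step described for dequeuing threads.
        return tagged_ptr & ~VERSION_MASK, tagged_ptr & VERSION_MASK

    # Example: the same task address tagged for two different processing units.
    task_addr = 0x7F3AAB40                 # hypothetical aligned task address
    cpu_entry = tag_pointer(task_addr, 0)  # version 0: CPU implementation
    dsp_entry = tag_pointer(task_addr, 3)  # version 3: DSP implementation
    assert untag_pointer(dsp_entry) == (task_addr, 3)

Because the tag rides in otherwise-unused alignment bits, a single machine word can serve as both the queue entry and the version selector.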
In some embodiments, when a multi-versioned task is launched for execution, the task may be considered for scheduling either when it becomes ready (e.g., when it has no predecessor dependencies) or sooner if the task is subject to scheduling optimizations.[0028] In some embodiments, a multi-versioned task may be created through an API call, indicating the task, arguments, and potential processing units (e.g., cores) that may execute specialized versions of the multi-versioned task. The multi-versioned task may be created by the programmer, compiler, or runtime system. In some embodiments, the order in which the specialized versions are presented in the API call may indicate relative preference or priority. For each specialized version (or implementation) in such an API call, the multi-processor computing device (e.g., via a runtime functionality) may create a task (or task pointer) to be enqueued in a supported processing unit's task queue. For example, specialized versions of a matrix multiplication task tailored for execution by either a CPU or a GPU may be enqueued in appropriate task queues in response to an API call invoked by a programmer within code or by a compiler/runtime system.[0029] In various embodiments, a specific processing unit may be configured to process enqueued tasks (or task pointers) via an associated thread in a thread pool. For example, a DSP thread may dequeue a pointer for a specialized version of a multi-versioned task from the DSP's task queue. The following is a non-limiting illustration of operations by such a thread. A thread associated with a DSP may perform operations to determine whether dequeued tasks (or pointers) are associated with multi-versioned tasks or not, such as by applying a mask to identify any version numbers within the pointer bits. If the dequeued task is not related to a multi-versioned task, the thread for the DSP may simply execute the dequeued task. However, if the dequeued task (or pointer) is related to a multi-versioned task, the DSP thread may request ownership over the multi-versioned task, thus seeing whether any other thread has already acquired ownership for the multi-versioned task. If ownership is acquired, the DSP thread may set the multi-versioned task (or request that the multi-versioned task be set) to finish after execution of the specialized version task associated with the DSP, and may execute that specialized version. If the dequeued task is multi-versioned but ownership cannot be acquired by the thread, the DSP thread may simply discard the dequeued task (e.g., a dequeued task pointer for the task), and process any other tasks in the task queues associated with the DSP thread.[0030] The processing units of the multi-processor computing device may each be natively associated with particular types of tasks. For example, the GPU may primarily be used within an SoC mobile device to perform operations for rendering graphics that may be displayed on a touch screen, etc. Primary functions of the computing device may suffer if the processing units fail to properly prioritize native operations, such as due to parallel processing scheduling. In some embodiments, a processing unit of the multi-processor computing device may be configured to evaluate tasks within a task queue to identify any priority tasks that should be executed before any specialized versions of multi-versioned tasks, as sketched below.
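The dequeue-and-own thread behavior described in paragraph [0029], together with the priority handling just noted, can be rendered as a simplified sketch. The Python below is an assumption-laden illustration rather than the patent's pseudocode (which appears in FIGS. 5A-5B and 7): ownership is modeled as a lock-protected field, queue entries are either plain callables (ordinary or priority tasks) or (task, version) pairs, and the names MultiVersionedTask, try_acquire_ownership, and worker_loop are invented for this example.

    import threading
    from queue import Queue

    class MultiVersionedTask:
        def __init__(self, name):
            self.name = name
            self.owner = None          # ownership data: which unit won the task
            self.finished = False      # completion data: set by the owned version
            self._lock = threading.Lock()

        def try_acquire_ownership(self, unit_id):
            # The first processing unit to request ownership wins; all later
            # requests fail, so exactly one specialized version ever runs.
            with self._lock:
                if self.owner is None:
                    self.owner = unit_id
                    return True
                return False

    def worker_loop(unit_id, task_queue):
        while True:
            entry = task_queue.get()
            if entry is None:              # sentinel marking a drained queue
                return
            if callable(entry):            # ordinary (e.g., priority/native) task:
                entry()                    # execute it directly, without competing
                continue
            task, run_version = entry      # multi-versioned (task, version) entry
            if task.try_acquire_ownership(unit_id):
                run_version()              # execute this unit's specialization
                task.finished = True       # "finish after" the owned version
            # else: another unit already owns the task; discard this entry and
            # continue with the next queued item without waiting

    # Example: a CPU queue and a GPU queue each receive one specialized version
    # of the same task; whichever worker dequeues it first executes it.
    task = MultiVersionedTask("matrix_multiply")
    queues = {"CPU": Queue(), "GPU": Queue()}
    for unit, q in queues.items():
        q.put((task, lambda u=unit: print(f"{u} version ran")))
        q.put(None)
    workers = [threading.Thread(target=worker_loop, args=(unit, q))
               for unit, q in queues.items()]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

Only one worker ever runs a specialization of a given task in this sketch; the other worker finds the task already owned, discards its entry, and moves on without waiting, which is the late-binding behavior the surrounding paragraphs describe.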
For example, although the next task to be dequeued and executed at a GPU may be a version of a matrix multiplication task, the GPU may first execute a high-priority rendering/display task that is behind the version task in the queue. In this way, it may be possible for the processing unit to lose an opportunity to acquire ownership over a specialized version of a multi-versioned task, as another processing unit may become available to execute the multi-versioned task while the processing unit executes the priority task. For example, a DSP may acquire the ownership over a multi-versioned task while a GPU completes a display operation before requesting ownership over the multi-versioned task. By prioritizing native workloads over general-purpose multi-versioned tasks, the processing units may maintain priority native capabilities (e.g., rendering, signal processing, etc.) while still competing to execute multi-versioned tasks.[0031] The "late-binding" scheduling techniques described herein do not assess the relative load on the processing units at assignment time, and thus ensure that the processing unit that is actually the fastest to get to a multi-versioned task gets to execute a corresponding version of the task. For example, even if a DSP's task queue has fewer work items enqueued than a GPU at a first time, the multi-processor computing device implementing embodiment techniques may allow a faster GPU to execute a corresponding specialized version of a multi-versioned task at a second time. Thus, the embodiment techniques are beneficial for improving the overall efficiency of resource consumption (e.g., memory, processing units, etc.), load-balancing, and thermal management of a multi-processor computing device. For example, by providing a late-binding for versions of multi-versioned tasks, the embodiment techniques may decrease hotspots for executions in the multi-processor computing device, allowing for higher overall processing in the computing system as no individual processing unit may be disproportionately overworked.[0032] By enabling processing units to acquire the right to execute specialized versions of general-purpose, multi-versioned tasks only when actually available to begin immediate execution, the embodiment techniques provide just-in-time scheduling of tasks. In this way, the embodiment techniques do not rely upon potentially inaccurate and costly computations and/or measurements to identify the best processing unit to run every multi-versioned task. Instead, processing units are configured to compete to execute specialized versions of multi-versioned tasks, thereby spreading workloads throughout the computing device based on an actual ability to start executing tasks. Such techniques naturally account for unforeseen operating conditions that may or may not be accounted for with typical multi-version scheduling schemes, resulting in greater performance, lower energy consumption, and lower memory consumption.[0033] The embodiment techniques are distinct from conventional schemes, such as conventional techniques that may calculate current workloads or predict likely workloads and/or capabilities of processing units in order to choose to which processing unit a task may be assigned. Such conventional schemes may often fail to account for dynamic events that may be encountered by processing units during execution of tasks, such as data loads from memory that may affect projected execution times.
Some conventional schemes may also use profiles or other precomputed data to perform a priori predictions of best destinations for tasks. Conversely, the embodiment techniques dispatch multiple versions concurrently and allow processing units to compete for ownership of the common multi-versioned task. In other words, the embodiment techniques do not use predefined priority, cost, and/or prediction models for assigning specialized versions of multi-versioned tasks, but instead make execution assignments based on the processing unit that first becomes available to actually execute a multi-versioned task.[0034] Further, unlike some conventional schemes, the embodiment techniques do not require virtual memory address space considerations or special task queues. In particular, some conventional schemes may utilize a special task queue from which all processing units may steal multi-versioned tasks. Such special queue schemes may not be feasible for open-ended scheduling in systems with finite resources, as countless queues may be required to accommodate tasks with different combinations of supported implementations. For example, a first multi-versioned task may have only CPU and GPU implementations, a second multi-versioned task may have CPU, GPU, and DSP implementations, and a third multi-versioned task may contain multiple CPU implementations, each requiring a separate special task queue. Such conventional techniques may require processing units to scan each special task queue to identify any tasks that may be assigned and executed on the processing units. The embodiment techniques are more efficient than such conventional schemes, and require only as many task queues as are necessary for single-version tasks.[0035] In various embodiments, the multi-processor computing device may execute one or more runtime functionalities (e.g., a runtime service, routine, thread, logic, or other software element, etc.) to perform various operations for scheduling or dispatching tasks, multi-versioned or otherwise. Such a runtime functionality may include a dedicated runtime functionality that may be executed by a processor of the computing device, such as a general-purpose or applications processor configured to execute operating systems, services, and/or other system-relevant software. For example, a runtime functionality executing on an application processor may be configured to provide ownership to a processing unit for executing a specialized version of a multi-versioned task. Other runtime functionalities may also be used, such as dedicated functionalities for handling tasks at individual processing units.[0036] FIG. 1 is a diagram 100 illustrating various components of an exemplary multi-processor computing device 101 (e.g., a heterogeneous system-on-chip (SoC) device) suitable for use with various embodiments. The multi-processor computing device 101 may include a plurality of processing units, such as a first CPU 102 (referred to as "CPU A" 102 in FIG. 1), a second CPU 112 (referred to as "CPU B" 112 in FIG. 1), a GPU 122, and a DSP 132. In some embodiments, the multi-processor computing device 101 may utilize an "ARM big.
Little" architecture, and the first CPU 102 may be a "big" processing unit having relatively high performance capabilities but also relatively high power requirements, and the second CPU 112 may be a "little" processing unit having relatively low performance capabilities but also relatively low power requirements than the first CPU 102.[0037] The multi-processor computing device 101 may be configured to support parallel-processing, work sharing, and/or work "stealing" between the various processing units 102, 112, 122, 132. In particular, any combination of the processing units 102, 112, 122, 132 may be configured to create and/or receive discrete work items (e.g., tasks) for execution. Each of the processing units 102, 112, 122, 132 may utilize one or more queues (or task queues) for temporarily storing and organizing tasks (and/or data associated with tasks) to be executed by the processing units 102, 112, 122, 132. For example and with reference to FIG. 1, the first CPU 102 may retrieve tasks from task queues 166, 168, 176 for local execution by the first CPU 102 and may place tasks in queues 170, 172, 174 for execution by other devices; the second CPU 112 may retrieve tasks from queues 174, 178, 180 for local execution by the second CPU 112 and may place tasks in queues 170, 172, 176 for execution by other devices; the GPU 122 may retrieve tasks from queues 172; and the DSP 132 may retrieve tasks from queues 170. In some embodiments, some task queues 170, 172, 174, 176 may be so called multi-producer, multi-consumer queues; some task queues 166, 168, 178, 180 may be so called single-producer, multi-consumer queues; while yet other task queues may be so called single-producer, single-consumer queues. In some cases, tasks may be generated based on indicators within code, such as designations by programmers of workloads that split certain computations. Further, any combination of the processing units 102, 112, 122, 132 may typically be capable of identifying tasks within workloads that may be submitted for distribution by the runtime functionality.[0038] In some embodiments, a runtime functionality (e.g., runtime engine, task scheduler, etc.) may be configured to determine destinations for dispatching tasks to the processing units 102, 112, 122, 132. For example, in response to identifying a new general-purpose task that may be offloaded to any of the processing units 102, 112, 122, 132, the runtime functionality may identify the best processing unit(s) for executing the task and may dispatch the task accordingly. Such a runtimefunctionality may be executed on an application processor or main processor, such as the first CPU 102. In particular, the runtime functionality may be performed via one or more operating system-enabled threads (e.g., "main thread" 150). For example, based on determinations of the runtime functionality, the main thread 150 may provide task data to various task queues 166, 170, 172, 180. [0039] In some embodiments, each processing unit 102, 112, 122, 132 may be capable of providing tasks to one another. For example, the DSP 132 may create tasks for the first CPU 102, the second CPU 112, and/or the GPU 122 and vice versa. As another example, the GPU 122 may create tasks for the first CPU 102, the second CPU 112, and/or the DSP 132 and vice versa,[0040] FIGS. 
2A-2B illustrate a scenario in which a multi-processor computing device (e.g., a heterogeneous system-on-chip (SoC) device) performs a conventional "early-binding" of a specialized version of a multi-versioned task. Typically, conventional task scheduling for a multi-versioned task may identify, a priori, a processing unit predicted to best be available or able to perform a particular version of the multi-versioned task. Such early-binding techniques often evaluate the load present on the applicable processing units (e.g., cores) at the time of scheduling, such as by evaluating the number of tasks in a task queue, the task size, and other factors that affect the speed and/or efficiency of the processing units. In other words, the processing unit with the lightest load at the time of scheduling may be assigned a task. For example, in response to determining that a GPU's current task queue includes fewer work items than the task queues of the CPU and DSP at an evaluation time, a scheduler may assign a version of a multi-versioned task to the GPU instead of to a CPU or a DSP. These evaluations (or predictions) may occur in advance of when the assigned processing unit is actually available to execute the specialized version of a multi-versioned task.[0041] However, due to differences in the execution speeds and various runtime conditions, such as input to the program and the availability of computational resources, a priori assignments of tasks to processing units based on current or predicted load evaluations may be suboptimal. For example, even though a version of a multi-versioned task may be assigned to a little CPU that has fewer tasks in a related task queue than in a GPU's task queue, the GPU may finish more tasks faster than the little CPU due to the latter's relatively weak computational power (e.g., lower frequency, instructions-per-cycle, etc.). Thus, the GPU could have executed the multi-versioned task before the little CPU. Such conventional scheduling techniques may be short-sighted and adversely affect overall system performance.[0042] Conventional early-binding techniques may be additionally problematic with multi-versioned tasks because assignments of specialized versions may not be reassigned for dynamic load balancing. In particular, "work stealing" policies that enable available processing units with no work items in corresponding task queues to take work items from other task queues may not be applicable, as specialized versions in task queues may only be executed on particular types of processing units. For example, a DSP that is available to work may not be able to take a specialized version of a multi-versioned task from the task queue of a CPU, as that specialized task is specially-configured for the CPU.[0043] FIG. 2A includes a diagram 200 illustrating an exemplary early-binding scheduling (or dispatching) of a particular version of a multi-versioned task 201 (referred to as "Task 1" in FIG. 2A) by a multi-processor computing device 101 at a first time. The multi-versioned task 201 may be a general-purpose task, such as a computation that various processing units may be capable of executing (e.g., matrix multiplication calculation, etc.). The multi-versioned task 201 may include or otherwise be associated with a plurality of specialized versions of the multi-versioned task 201 that are each configured to accomplish a common computation, process, and/or function.
For example, the multi-versioned task 201 may include a first specialized version 202a configured for execution on a first CPU 102 (referred to as "CPU A" in FIGS. 2A-2B), a second specialized version 202b configured for execution on a second CPU 112 (referred to as "CPU B" in FIGS. 2A-2B), a third specialized version 202c configured for execution on a GPU 122, and a fourth specialized version 202d configured for execution on a DSP 132.[0044] At an arbitrary first time, a conventional load-based scheduler module 210 configured to schedule tasks based on current workloads of the processing units 102, 112, 122, 132 may determine that the multi-versioned task 201 is ready to be scheduled for execution. The load-based scheduler module 210 may be a scheduler functionality executed via a thread via an application processor that receives and dispatches new tasks for execution by the processing unit having the least workload (e.g., fewest number of queued tasks). For simplicity, the load-based scheduler module 210 is shown in FIGS. 2A-2B as a single module (e.g., software, component(s), instructions, routine(s), logic, etc.) that is configured to be executed via a thread (e.g., main thread 150). However, the load-based scheduler module 210 may represent or otherwise include functionalities that may or may not be enabled by a single (or main) thread. For example, in some embodiments, the load-based scheduler module 210 may be comprised of logic, components, routines, instructions, and/or other functionalities that are executed and enabled across multiple threads in the multi-processor computing device 101.[0045] The load-based scheduler module 210 may evaluate the operating states of the plurality of processing units 102, 112, 122, 132, such as by identifying the processing unit currently having the fewest number of tasks currently queued to be processed. For example, regarding the illustration in FIG. 2A, the load-based scheduler module 210 may identify that at the first time a first task queue 220a associated with the first CPU 102 may have four tasks 230a queued, a second task queue 220b associated with the second CPU 112 may have two tasks 230b queued, a third task queue 220c associated with the GPU 122 may have five tasks 230c queued, and a fourth task queue 220d associated with the DSP 132 may have three tasks 230d queued. Based on the number of currently queued tasks in the task queues 220a-220d, the load-based scheduler module 210 may identify the second CPU 112 as the appropriate processing unit to execute a version of the multi-versioned task 201 due to the second task queue 220b having the fewest currently queued tasks 230b. The load-based scheduler module 210 may assign the multi-versioned task 201 to the second CPU 112 by placing the second specialized version 202b (or a representation of the second specialized version 202b) in the second task queue 220b. [0046] However, as different processing units 102, 112, 122, 132 may perform operations at different rates and/or may be affected by various unforeseen factors or conditions after tasks are assigned by the load-based scheduler module 210, the multi-versioned task 201 may be sub-optimally executed by the multi-processor computing device 101. For example, determinations of the load-based scheduler module 210 may fail to consider dynamic device operating conditions that may occur and subsequently affect the speed at which work items are executed by processing units, the complexity of work items to be performed, etc.
Some conditions that may eventually cause different processing units 102, 112, 122, 132 to become available (or unavailable) before other processing units may or may not be predictable or measured, such as random device failures or inefficiencies. In other words, predicting and/or measuring the processing abilities of the processing units 102, 112, 122, 132 may be inaccurate and/or imprecise and thus may result in sub-optimal task assignments. [0047] FIG. 2B includes a diagram 250 illustrating such an inefficient condition of the computing device 101 at a second time due to the conventional early-binding scheduling shown in FIG. 2A. In particular, after a period of time during which the processing units 102, 112, 122, 132 process the tasks within corresponding task queues 220a-220d, the GPU 122 may become available and thus inactive due to an empty task queue 220c. Further, despite the load determination by the load-based scheduler module 210 at the first time, the second specialized version 202b of the multi-versioned task 201 has yet to begin execution by the second CPU 112 at the second time. In other words, despite having the fewest tasks in a task queue 220b at the first time, the second CPU 112 was not the eventual best choice for executing the multi-versioned task 201, as the GPU 122 became available at the second time while the second CPU 112 was unable to begin execution of the second specialized version 202b. Thus, due to the obvious limitations of conventional a priori "early-binding" policies, there is an opportunity to improve multi-versioned task scheduling within multi-processor computing devices. [0048] FIGS. 3A-3C illustrate an exemplary "late-binding" technique according to some embodiments that may be used to overcome the shortcomings of a priori scheduling policies for multi-versioned tasks, such as illustrated in FIGS. 2A-2B. In particular, embodiment late-binding operations may assign specialized versions of multi-versioned tasks based on relative speed of execution of processing units. This may ensure that, regardless of the relative load on the processing units of a multi-processor computing device at the time tasks are placed within task queues, the fastest processing unit to be ready to execute a specialized version may perform a multi-versioned task. [0049] The components depicted in FIGS. 3A-3C are provided for illustrative purposes only and are not intended to limit the embodiments or claims to any particular structural implementation suitable for implementing methods according to various embodiments. For example, task queues 220a-220d may be represented in various data structures and/or other equivalent structures in multi-processor computing devices. As another example, a concurrent scheduler module 311 may represent one or more components, logic, devices, and/or other functionalities that may be supported by one or more processing units of a multi-processor computing device. [0050] FIG. 3A includes a diagram 300 illustrating a multi-versioned task 301 that may be created for performing general-purpose operations (e.g., a calculation, etc.). The multi-versioned task 301 (referred to as "Task_2" in FIGS. 3A-3C) may include or otherwise be associated with a plurality of specialized versions 302a-302d of the multi-versioned task 301.
For example, the multi-versioned task 301 may include a first specialized version 302a configured for execution on a first CPU 102, a second specialized version 302b configured for execution on a second CPU 112, a third specialized version 302c configured for execution on a GPU 122, and a fourth specialized version 302d configured for execution on a DSP 132. [0051] In some embodiments, the multi-versioned task 301 may also be associated with stored data that may be used by the multi-processor computing device 101 to control how the processing units 102, 112, 122, 132 access or otherwise execute the multi-versioned task 301. For example, the multi-versioned task 301 may include or otherwise be associated with ownership data 310, such as a data field that stores an identifier (or ID) of a processing unit that has acquired the right to execute the multi-versioned task 301 (i.e., the ID of the processing unit having "ownership" over the multi-versioned task 301). In some embodiments, the ownership data 310 may not store an identifier but instead may simply store data indicating whether ownership over the multi-versioned task 301 has or has not been acquired by any processing unit (e.g., a single bit indicating owned/not-owned). [0052] Prior to the multi-versioned task 301 being owned by a processing unit, the ownership data 310 may be null. The stored data associated with the multi-versioned task 301 may also include completion data 312, such as a data field that stores information indicating whether any specialized version 302a-302d of the multi-versioned task 301 has been completed. For example, as a default, the completion data 312 may be set to a negative value (e.g., 0, "N", "no", etc.), but after a processing unit having ownership over the multi-versioned task 301 executes one of the specialized versions 302a-302d, the completion data 312 may be set to a positive value to indicate the task is completed (e.g., 1, "Y", "yes", etc.). In some embodiments, the data fields 310-312 may be stored in a volatile or non-volatile memory, such as in a data array(s), system variable(s), register(s), and/or other structure(s) accessible to the multi-processor computing device 101. [0053] At an arbitrary first time, a runtime functionality configured to schedule tasks (i.e., concurrent scheduler module 311) may determine that the multi-versioned task 301 has been created or is otherwise ready to be scheduled for execution. For example, the concurrent scheduler module 311 may be a scheduler functionality executed via a thread by an application processor that receives and dispatches new tasks. For simplicity, the concurrent scheduler module 311 is shown in FIGS. 3A-3C as a single module (e.g., software, component(s), instructions, routine(s), logic, etc.) that is configured to be executed via a thread (e.g., main thread 150). However, the concurrent scheduler module 311 may represent or otherwise be comprised of functionalities that may or may not be enabled by a single (or main) thread. For example, in some embodiments, the concurrent scheduler module 311 may be comprised of logic, components, routines, instructions, and/or other functionalities that are executed and enabled across multiple threads in the multi-processor computing device 101. In some embodiments, the concurrent scheduler module 311 may not necessarily be an execution entity that is separate from the various processing units of the multi-processor computing device 101 (e.g., GPU 122, DSP 132, etc.).
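One minimal way to model the ownership data 310 and completion data 312 described above is a descriptor shared by all specialized versions of a single multi-versioned task; the following C++ sketch uses hypothetical names, and the atomics merely suggest that the fields may be read and written by multiple processing units:

    #include <atomic>

    enum class UnitId : int { None = -1, CpuA, CpuB, Gpu, Dsp };

    // Shared descriptor for one multi-versioned task.
    struct MultiVersionedTask {
        std::atomic<UnitId> owner{UnitId::None};  // ownership data (cf. 310)
        std::atomic<bool>   done{false};          // completion data (cf. 312)
    };

Such a descriptor may be inspected and updated by whatever entity realizes the concurrent scheduler module 311, which, as noted, need not be separate from the processing units themselves.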
For example, the concurrent scheduler module 311 may represent a logical functionality that may be executed inline by any processor or device (e.g., GPU 122, etc.) as part of operations to execute a multi-versioned task as described. [0054] Regardless of the workloads of the processing units 102, 112, 122, 132 at the first time (e.g., the other tasks 230a-230d), the concurrent scheduler module 311 may concurrently assign each of the specialized versions 302a-302d to the task queues 220a-220d of the processing units 102, 112, 122, 132. For example, the concurrent scheduler module 311 may send the first specialized version 302a that is configured for execution by a CPU to the first task queue 220a associated with the first CPU 102, the second specialized version 302b that is configured for execution by a CPU to the second task queue 220b associated with the second CPU 112, the third specialized version 302c that is configured for execution by a GPU to the third task queue 220c associated with the GPU 122, and the fourth specialized version 302d that is configured for execution by a DSP to the fourth task queue 220d associated with the DSP 132. In some embodiments, sending the specialized versions 302a-302d may include placing task pointers or other data representations of the specialized versions 302a-302d within the appropriate task queues 220a-220d. [0055] FIG. 3B includes a diagram 350 illustrating a processing unit (i.e., GPU 122) acquiring ownership over the multi-versioned task 301. For example, after the first time illustrated in FIG. 3A, the GPU 122 may complete all other tasks 230c in the third task queue 220c and therefore become available to execute the third specialized version 302c of the multi-versioned task 301. The GPU 122 may exchange signals 352 with the concurrent scheduler module 311 (i.e., runtime functionality) to acquire the ownership over the multi-versioned task 301. For example, upon determining that the next task to execute in the third task queue 220c is the third specialized version 302c, the GPU 122 may transmit a request signal to the concurrent scheduler module 311, and in response the concurrent scheduler module 311 may transmit a response signal that indicates whether the GPU 122 acquired the ownership. In some embodiments, the signals 352 may include an API call from the GPU 122 for configuring the multi-versioned task 301 to finish in response to the GPU 122 finishing a specialized version of the multi-versioned task 301. For example, the signals 352 from the GPU 122 may include a "finish_after()" call that causes the concurrent scheduler module 311 to configure the multi-versioned task 301 to complete, cancel, or otherwise end all associated versions of the multi-versioned task 301 (e.g., 302a, 302b, 302d) in response to determining that the third specialized version 302c has completed. In other words, the "finish_after()" may be a mechanism that causes a multi-versioned task to finish after the execution of only one specialized version of the multi-versioned task. [0056] In some embodiments, the concurrent scheduler module 311 may perform operations 354 to check and/or update the ownership data 310 associated with the multi-versioned task 301 in response to receiving signaling 352 from the GPU 122.
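One plausible realization of the check-and-update operations 354, continuing the hypothetical descriptor sketched above, is an atomic compare-and-swap on the ownership field:

    // Returns true if 'requester' became the owner; false if ownership
    // had already been granted to another processing unit.
    bool try_acquire(MultiVersionedTask& t, UnitId requester) {
        UnitId expected = UnitId::None;
        return t.owner.compare_exchange_strong(expected, requester);
    }

In the illustrated embodiments the same effect is obtained by the concurrent scheduler module 311 inspecting and, where appropriate, setting the ownership data 310.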
For example, the concurrent scheduler module 311 may query the ownership data 310 to determine whether a processing unit identifier is already set and, if not, may cause the identifier of the GPU 122 (e.g., "GPU") to be stored in the ownership data 310. [0057] Once the GPU 122 acquires ownership over the multi-versioned task 301 (i.e., GPU 122 has the exclusive right to execute a specialized version of the multi-versioned task 301), the other processing units 102, 112, 132 may eventually complete other tasks and become available to execute specialized versions 302a, 302b, 302d. However, in response to exchanges with the concurrent scheduler module 311, these other processing units 102, 112, 132 may be denied ownership over the multi-versioned task 301 and thus may be precluded from executing the corresponding specialized versions 302a, 302b, 302d. Instead, the other processing units 102, 112, 132 may be configured to simply discard the specialized versions 302a, 302b, 302d without execution due to such ownership denials. For example, the specialized versions 302a, 302b, 302d may be discarded in response to the GPU 122 acquiring ownership over the multi-versioned task 301. In some cases, the GPU 122 may have already completed execution of the third specialized version 302c by the time the other processing units 102, 112, 132 are ready to execute corresponding specialized versions 302a, 302b, 302d, and so the other processing units 102, 112, 132 may dequeue and discard specialized versions 302a, 302b, 302d and simply begin executing other tasks entered into corresponding task queues 220a, 220b, 220d. [0058] FIG. 3C includes a diagram 375 illustrating signaling occurring after the GPU 122 completes execution of the third specialized version 302c, thus completing the multi-versioned task 301. In some embodiments, the GPU 122 may transmit a signal 376 to the concurrent scheduler module 311 indicating completion of the third specialized version 302c, and in turn, the concurrent scheduler module 311 may perform operations 378 to update the completion data 312 (e.g., store a "Y" value). In some embodiments, the concurrent scheduler module 311 may exchange signals 380 with the other processing units 102, 112, 132 that cause the other processing units 102, 112, 132 to dequeue or disable corresponding specialized versions 302a, 302b, 302d of the multi-versioned task 301 and resume execution of other tasks in corresponding task queues 220a, 220b, 220d. For example, when ready to execute specialized versions 302a, 302b, 302d, the other processing units 102, 112, 132 may request ownership. In response, the signals 380 may indicate that ownership of the multi-versioned task 301 is unavailable, and thus the processing units 102, 112, 132 may disable the specialized versions 302a, 302b, 302d and proceed with other tasks unrelated to the multi-versioned task 301. [0059] In some embodiments, the operations 378 may include dequeuing and/or otherwise deleting the multi-versioned task 301, such as by removing data associated with the multi-versioned task 301 from memory. In some embodiments, the GPU may not be required to transmit the signal 376, as the original signals 352 used to acquire the ownership over the multi-versioned task 301 may be adequate for disabling other processing units 102, 112, 132 from executing respective specialized versions 302a, 302b, 302d.
For example, after the signals 352 are sent, the DSP 132 may dequeue the fourth specialized version 302d, determine that the associated multi-versioned task 301 is already owned by the GPU 122 (e.g., via signaling to the concurrent scheduler module 311), and thus simply discard the fourth specialized version 302d. [0060] FIG. 4A illustrates a method 400 performed by a multi-processor computing device to schedule and dispatch specialized versions of multi-versioned tasks to various processing unit task queues. As described, the multi-processor computing device (e.g., multi-processor computing device 101 in FIGS. 1-3C) may be configured to perform late-binding of multi-versioned tasks at runtime by concurrently assigning specialized versions to a plurality of processing units, for instance, some or all processing units supported for the multi-versioned tasks. The method 400 may be performed by the multi-processor computing device to place specialized versions in appropriate task queues of various processing units. In some embodiments, various operations of the method 400 may be performed by a runtime functionality (e.g., concurrent scheduler module 311 of FIGS. 3A-3C) executing via a processing unit of a multi-processor computing device, such as the CPU 102, GPU 122, or DSP 132 of the multi-processor computing device 101 of FIGS. 1-3C. [0061] The multi-processor computing device may identify a multi-versioned task with specialized versions to be executed on one of a plurality of processing units in block 402. Each specialized version may be configured to be executed by a different processing unit of the plurality of processing units (e.g., GPU, CPU, DSP, etc.). The identification may include determining whether a task to be scheduled by the multi-processor computing device is associated with specialized versions (or alternative tasks/identifiers). For example, the multi-processor computing device may perform an operation (e.g., the "has_alternatives()" function call in FIG. 5B) to determine whether a new task to be scheduled has alternative numbers that indicate specialized versions. In some embodiments, the multi-processor computing device may make such identifications in response to the creation of the multi-versioned task by a processing unit. For example, the multi-processor computing device may identify any new task created via an API call that indicates supported specialized versions as a multi-versioned task. Exemplary code for creating a multi-versioned task is illustrated in FIG. 5A. [0062] When launched for execution, a multi-versioned task may be considered for scheduling by the multi-processor computing device, such as at a time when the multi-versioned task has no predecessor dependencies and/or at a time defined by various scheduling optimizations. In some embodiments, the multi-versioned task may be scheduled based on a scheduling policy that evaluates task graphs by various properties (e.g., processing unit type, etc.) to find subgraphs. For example, the multi-processor computing device may identify within a large task graph a subgraph of all tasks for a GPU, and dispatch the entire subgraph at once for execution on the GPU.
Such a dispatch may include a single GPU-to-CPU callback to indicate the finish of all the tasks in the subgraph, reducing the number of round-trips between the CPU and GPU for scheduling purposes. [0063] In block 404, the multi-processor computing device may enqueue each of the specialized versions of the multi-versioned task in a task queue for an appropriate processing unit of the plurality of processing units. For example, if there are specialized versions available for a CPU, a GPU, and a DSP, the multi-processor computing device may enqueue a CPU specialized version in the task queue of the CPU, a GPU specialized version in the GPU task queue, and a DSP specialized version in the DSP task queue. The multi-processor computing device may only enqueue specialized versions in task queues of processing units that are supported by the multi-versioned task. For example, the multi-processor computing device may only enqueue specialized versions in the GPU's task queue and the CPU's task queue and may ignore unsupported processing units (e.g., a DSP, etc.) if only CPU and GPU specialized versions of the multi-versioned task are provided by a programmer. [0064] The multi-processor computing device may continue with the operations in block 402. In some embodiments, the multi-processor computing device may continue with the operations of block 404 in FIG. 4B. [0065] In some embodiments, the multi-processor computing device may enqueue a task pointer (or other similar data) for each specialized version in each appropriate task queue. An exemplary enqueuing of pointers for various specialized versions of a multi-versioned task is illustrated in FIG. 5B (e.g., code subsection 526 and code section 528). In some embodiments, the multi-processor computing device may encode task identification information into the pointers. In particular, the pointers may each include data indicating the multi-versioned task as well as an individual specialized version. Such task information within the pointers may be accessed by the individual processing units by applying different bit masks. For example, a first processing unit (e.g., a GPU) may identify a multi-versioned task by applying a parent mask to the pointer dequeued from an associated task queue and may identify a specialized version by applying a second mask (or an alternative mask) to the same pointer. In some embodiments, data indicating the specialized version may be included within the lowest 4 bits of each pointer. An exemplary encoding of pointers is illustrated in FIG. 5B (e.g., code subsection 526) and an application of such masks by a processing unit to retrieve task identification information is illustrated in FIG. 7 (e.g., code section 702). [0066] FIG. 4B illustrates a method 405 performed by a multi-processor computing device (e.g., multi-processor computing device 101 in FIGS. 1-3C) to manage the performance of multi-versioned tasks. As described, once specialized versions of a multi-versioned task are assigned to all supported processing units, the first of the supported processing units to become ready to immediately execute a specialized version enqueued within an associated task queue may gain ownership over the multi-versioned task. Such ownership acquisition may prevent the other supported processing units from executing corresponding specialized versions of the same multi-versioned task.
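Continuing the hypothetical model above, the concurrent dispatch of block 404 may be sketched as a single enqueue per supported version, performed without regard to current loads:

    #include <deque>
    #include <map>

    struct SpecializedVersion {
        MultiVersionedTask* parent;  // shared descriptor for the task
        UnitId unit;                 // the only unit that may run this version
    };

    // Late-binding dispatch: enqueue every supported version at once.
    void dispatch_all(const std::map<UnitId, SpecializedVersion>& versions,
                      std::map<UnitId, std::deque<SpecializedVersion>>& queues) {
        for (const auto& [unit, version] : versions)
            queues[unit].push_back(version);
    }

A version that is dequeued after another unit has acquired ownership of its parent task is then discarded rather than executed.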
For example, specialized versions in task queues of processing units that fail to get ownership may simply finish without execution, such as in response to determining the first processing unit has already acquired ownership and completed a corresponding specialized version. Acquiring ownership over multi-versioned tasks may ensure that each multi-versioned task is executed by the fastest single processing unit that is ready to execute an appropriate specialized version, thereby providing better performance than potentially inaccurate a priori prediction schemes. [0067] In some embodiments, the method 405 may be performed for each multi-versioned task active within the multi-processor computing device. For example, the multi-processor computing device may concurrently execute one or more instances of the method 405 (e.g., one or more threads for executing method 405) to handle the management of one or more multi-versioned tasks. In some embodiments, various operations of the method 405 may be performed by a runtime functionality (e.g., concurrent scheduler module 311 of FIGS. 3A-3C) executing via a processing unit of a multi-processor computing device, such as the CPU 102, GPU 122, or DSP 132 of the multi-processor computing device 101 of FIGS. 1-3C. In various embodiments, the method 405 may be performed in combination with the method 400 described with reference to FIG. 4A. For example, a runtime functionality may be configured to perform both methods 400, 405 in order to identify, dispatch, and then manage multi-versioned tasks across a plurality of processing units according to various embodiments. [0068] Some time after specialized versions of a multi-versioned task are distributed to task queues of a plurality of processing units (e.g., after execution of block 404 of the method 400), the multi-processor computing device may determine whether a request for ownership of the multi-versioned task was received from a requesting processing unit in determination block 406. Such a request may be sent via an inter-processor signal, interrupt, register, and/or other data that may be shared between processing units of the computing device. For example, a GPU may use an API call to request ownership over the multi-versioned task associated with a specialized version within the GPU's task queue. An exemplary request call or function by a processing unit is illustrated in FIG. 7 (e.g., "t->request_ownership()" in code section 706). [0069] In response to determining that a request for ownership of the multi-versioned task was received from the requesting processing unit (i.e., determination block 406 = "Yes"), the multi-processor computing device may determine whether the ownership of the multi-versioned task was already given to another processing unit in determination block 412.
For example, the multi-processor computing device may evaluate stored data associated with the multi-versioned task to determine whether an "ownership" variable, flag, and/or other indicator has been set indicating that a processing unit has previously acquired ownership over that particular multi-versioned task. [0070] In response to determining that the ownership has already been given to another processing unit (i.e., determination block 412 = "Yes"), the multi-processor computing device may transmit a rejection signal to the requesting processing unit that transmitted the request for ownership over the multi-versioned task in block 414. Such a rejection signal may cause the requesting processing unit to discard a dequeued specialized version of the multi-versioned task. In some embodiments, the operations of block 414 may be optional as the multi-processor computing device may simply fail to respond to requests for ownership in order to indicate that processing units are not provided ownership. [0071] In response to determining that the ownership has not already been given to a processing unit (i.e., determination block 412 = "No"), the requesting processing unit may be the first to access a corresponding specialized version of the multi-versioned task, and so may be identified as the owner. Therefore, in block 416, the multi-processor computing device may transmit an ownership signal for the requested multi-versioned task to the requesting processing unit, such as by transmitting an acknowledgement. [0072] In block 418, the multi-processor computing device may update stored data associated with the multi-versioned task to indicate ownership by the requesting processing unit, such as by storing data representing the identity of the requesting processing unit within a data structure, variable, and/or register that is associated with the multi-versioned task. In some embodiments, the multi-processor computing device may store data for the multi-versioned task that simply indicates the multi-versioned task is currently owned by a processing unit. [0073] In block 420, the multi-processor computing device may configure the multi-versioned task to terminate or otherwise complete upon the subsequent completion of the requesting processing unit's execution of a corresponding specialized version. For example, at some later time when the requesting processing unit completes execution of a corresponding specialized version, the multi-processor computing device may mark the multi-versioned task as completed. In other words, the multi-processor computing device may set the multi-versioned task as "owned" and all associated specialized versions to be discarded (e.g., not executed) in response to a single specialized version being executed by a single processing unit. [0074] In response to performing the operations of block 420, the multi-processor computing device may continue with the determination operations in determination block 406, such as for identifying subsequently received signals indicating completion of a specialized version and/or requesting ownership. [0075] In some embodiments, the configuration may include a call to an API command "finish_after()" that causes the multi-versioned task to experience a delayed finish.
For example, in response to obtaining the ownership over the multi-versioned task, the requesting processing unit and/or another processing unit or functionality of the multi-processor computing device may execute the "finish_after()" command in order to set the lifetime of the multi-versioned task to be connected to the lifetime of the specialized version to be executed by the requesting processing unit that acquired ownership. Any task dependencies, waits, etc. on the multi-versioned task may be automatically fulfilled by the specialized version's execution. An exemplary use of such a "finish_after()" command is illustrated in FIG. 7 (e.g., code subsection 708). Techniques for implementing such a "finish_after()" command may be found in commonly-held U.S. Patent Application No. 14/604,821, filed January 26, 2015, the contents of which are herein incorporated by reference in their entirety. [0076] In response to determining that a request for ownership of the multi-versioned task was not received from a processing unit (i.e., determination block 406 = "No"), the multi-processor computing device may continue listening for subsequent requests for ownership of the multi-versioned task in determination block 406. [0077] In some embodiments, the multi-processor computing device may be configured to periodically determine whether a specialized version of the multi-versioned task has completed after a processing unit has acquired ownership over the multi-versioned task. Such a determination may be based on signals received from a processing unit having ownership over the multi-versioned task, based on evaluations of a bit, variable, register, or other data that may be updated by a processing unit having ownership over the multi-versioned task, and/or based on the expiration of a predefined period of time. For example, the multi-processor computing device may listen for incoming signals from a GPU that already acquired ownership over the multi-versioned task that indicate the GPU has completed execution of a respective specialized version, thus indicating the multi-versioned task may finish as well. In this way, in various embodiments the multi-processor computing device may be configured to use periodic polling and/or event-driven mechanisms for detecting (or being notified of) the completion of a specialized version of a multi-versioned task. [0078] In some embodiments, in response to determining that a specialized version of the multi-versioned task has completed, the multi-processor computing device may perform operations for finishing the multi-versioned task. For example, the multi-processor computing device may terminate (e.g., dequeue, invalidate, etc.) the multi-versioned task. [0079] In some embodiments, the multi-processor computing device may also perform operations to finish related specialized versions of the finished multi-versioned task in response to an identified (e.g., detected or notified) completion of a specialized version of the multi-versioned task. For example, in response to determining that the GPU completed a GPU specialized version, the specialized versions for the CPU and DSP may be set to complete or end without execution.
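Under the same hypothetical model, the delayed-finish behavior attributed to "finish_after()" may be approximated by tying the parent task's completion to the owner's version and letting sibling versions be recognized as moot when dequeued; this is a sketch, not the commonly-held implementation referenced above:

    // Called by the owning unit once its specialized version completes;
    // dependents and waits on the parent task are then released.
    void complete_owned_version(MultiVersionedTask& t) {
        t.done.store(true);  // completion data set to a positive value
    }

    // True if the task is finished or owned by a different unit, in which
    // case a dequeued sibling version should be discarded unexecuted.
    bool owned_by_another(const MultiVersionedTask& t, UnitId me) {
        UnitId o = t.owner.load();
        return t.done.load() || (o != UnitId::None && o != me);
    }

In this sketch the un-executed sibling versions are simply treated as finished.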
Other examples of operations that may be performed by the multi-processor computing device to finish related specialized versions of a multi-versioned task (e.g., those specialized versions that are not executed) may include causing the specialized versions to leave task groups and/or releasing memory allocated to the related un-executed specialized versions. In some embodiments, the multi-processor computing device may transmit signals or interrupts, set registers, and/or perform other operations to actively indicate to various processing units that no other specialized versions of the multi-versioned task should be executed, thereby nullifying any unexecuted but still enqueued specialized versions. The method 405 may then end as the multi-versioned task has been processed via the execution of only one specialized version. [0080] FIGS. 5A-5B illustrate non-limiting exemplary pseudocode 500, 520 representing instructions that may be performed by a multi-processor computing device (e.g., computing device 101) to create and dispatch specialized versions of multi-versioned tasks to task queues according to various embodiments. In some embodiments, the pseudocode 500, 520 may represent operations performed by a runtime scheduler functionality (e.g., a service, thread, application, etc.) executing on the multi-processor computing device. In some embodiments, the pseudocode 500 may be used for various main implementations (e.g., versions[0], ... versions[n]), and thus is not intended to limit the embodiments and/or claims. [0081] Referring to FIG. 5A, the pseudocode 500 represents a non-limiting example of instructions that may be executed by a processing unit (e.g., a CPU 102, applications processor, etc.) in order to create new multi-versioned tasks having one or more specialized versions that are each designed to be performed on particular processing units of the multi-processor computing device. In particular, the pseudocode 500 may represent a function (i.e., the "create_task" function) that may be called with various arguments, including a tuple indicating the specialized versions that are supported for the multi-versioned task to be created. For example, the tuple may indicate GPU, DSP, and CPU specialized versions should be created. In some embodiments, the order in which the specialized versions are indicated in the tuple argument for the function may indicate relative preference or priority. [0082] The pseudocode 500 may include a first code section 502 including a recursive function that may itself be called by instructions within the first code section 502. For example, a first code subsection 504 may be included that creates a task using the 'create_task' function. The first time that the first code subsection 504 is executed regarding a particular task, a "main" multi-versioned task may be created. A second code subsection 506 (e.g., a 'for' loop) may be performed to create specialized versions (or "alternatives") that may be linked to (or referenced by) the main multi-versioned task. For example, pointer data for each specialized version (or "alternative") may be stored in association with the main multi-versioned task. [0083] In some embodiments, task creation may be initiated or otherwise executed based on a programmer (e.g., inserting function calls or API calls within code to create tasks, etc.) and/or a compiler or runtime system.
[0084] Referring to FIG. 5B, the pseudocode 520 represents a non-limiting example of instructions of one or more methods, functions, routines, or other logic that are periodically executed by a processing unit (e.g., a CPU 102, applications processor, etc.) in order to schedule (or dispatch) various tasks, multi-versioned or otherwise, to task queues of processing units of the device (e.g., CPU 102, CPU 112, GPU 122, DSP 132, etc.). Any references to particular processing units within the pseudocode 520 (e.g., big.LITTLE core, GPU, etc.) are for illustration purposes and thus are not intended to limit the embodiments or claims to any particular processing units. [0085] The pseudocode 520 may include a first code section 522 for scheduling a task. The first code section 522 may include a first code subsection 524 (e.g., an 'if' block) that may be performed in response to determining that the task has no alternatives (i.e., the task is not a multi-versioned task), wherein the task is simply enqueued in a task queue of a processing unit. [0086] A second code subsection 526 (e.g., an 'else' block) may be performed in response to determining that the identified task has alternatives (i.e., the task is a multi-versioned task), and may include operations for enqueuing pointers for each of the specialized versions (or "alternatives") of the multi-versioned task in one of the task queues of the various processing units of the computing device. Such pointers may have identifiers encoded in unused bits, such as the lowest 4 bits of a pointer. For example, the second code subsection 526 may be performed to push N task pointers to the appropriate task queues, where N may be the number of specialized versions (or implementations) of a given multi-versioned task, with the individual alternative numbers encoded in each task pointer. A second code section 528 may include operations for enqueuing tasks into appropriate task queues of processing units based on an identifier generated by applying a mask (or bit mask) to a task pointer. Various "push(task)" operations in the code section 528 may be performed that retain alternative encodings for tasks until the tasks are dequeued. [0087] FIG. 6 illustrates a method 600 performed by a processing unit (e.g., any of processing units 102, 112, 122, 132 of FIGS. 1-3C) of a multi-processor computing device (e.g., multi-processor computing device 101 of FIGS. 1-3C) to execute specialized versions of multi-versioned tasks according to various embodiments. In various embodiments, each of the processing units of the multi-processor computing device may be configured to continually execute the method 600 in order to handle specialized versions of multi-versioned tasks as described herein. For example, a first CPU (e.g., processing unit 102 of FIGS. 1-3C), a second CPU (e.g., processing unit 112 of FIGS. 1-3C), a GPU (e.g., processing unit 122 of FIGS. 1-3C), and a DSP (e.g., processing unit 132 of FIGS. 1-3C) may all be configured to independently and concurrently execute independent instances of the method 600 in order to acquire ownership over multi-versioned tasks and execute related specialized versions. Various operations of the method 600 may be performed via various software, threads, routines, instructions, and/or other functionalities configured to control and otherwise operate the processing unit.
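The pointer encoding attributed to code subsection 526, in which an alternative number occupies the lowest bits of a task pointer, may be sketched as follows; the 4-bit width follows the description, while the helper names and the assumption of 16-byte-aligned task objects are hypothetical:

    #include <cstdint>

    constexpr std::uintptr_t kAltMask    = 0xF;        // lowest 4 bits
    constexpr std::uintptr_t kParentMask = ~kAltMask;  // remaining bits

    // Pack an alternative number into a 16-byte-aligned task pointer.
    std::uintptr_t encode(MultiVersionedTask* task, unsigned alt) {
        return reinterpret_cast<std::uintptr_t>(task) | (alt & kAltMask);
    }

    // Recover the multi-versioned task (parent mask) ...
    MultiVersionedTask* parent_of(std::uintptr_t p) {
        return reinterpret_cast<MultiVersionedTask*>(p & kParentMask);
    }

    // ... and the specialized version (alternative mask).
    unsigned alternative_of(std::uintptr_t p) {
        return static_cast<unsigned>(p & kAltMask);
    }

Each processing unit that later dequeues such a pointer while performing the method 600 applies these masks to recover the task and version identities.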
For example, a thread (e.g., a DSP thread in a thread pool associated with a task queue) may perform the method 600 to dequeue task pointers from the task queue and perform operations to acquire ownership of multi-versioned tasks when appropriate. [0088] In block 602, the processing unit may dequeue a next task to be executed by the processing unit. In some embodiments, dequeuing the next task may include removing a pointer from the task queue. In determination block 604, the processing unit may determine whether the dequeued task is a specialized version of a multi-versioned task. For example, the processing unit may apply various bit masks to a pointer to obtain an identifier, and may determine whether the identifier corresponds to any specialized versions (or "alternative" numbers) of the multi-versioned task. Exemplary operations for performing the determination are illustrated in FIG. 7 (e.g., code section 702, command "t->has_alternatives" in code section 704). [0089] In some embodiments, the processing unit may perform a look-up within a table or other storage structure to compare an identifier of the dequeued task with stored data that indicates whether the dequeued task is a multi-versioned task or merely a typical task having only one version. Such implementations may not require bit masks to be applied to individual pointers placed within task queues in order to identify specialized versions. The following is an illustration of such an implementation. The processing unit may retrieve (or dequeue) a pointer from the task queue of the first processing unit, wherein the pointer may be a common multi-versioned task pointer placed in the task queue for each processing unit that has a supported specialized version for the multi-versioned task. Once retrieved, the processing unit may determine whether the pointer is associated with a list of specialized versions of the multi-versioned task (i.e., determine whether there are alternatives associated with the pointer). If there is a list of specialized versions associated with the pointer, the multi-processor computing device may then identify a specialized version of the multi-versioned task from the list of specialized versions that corresponds to the processing unit. For example, if the processing unit is a GPU, the processing unit may identify whether a GPU specialized version is in the list. If a corresponding specialized version is identified in the list, the processing unit may then proceed to obtain ownership over the multi-versioned task and execute the corresponding specialized version. [0090] In response to determining that the dequeued task is not a specialized version of a multi-versioned task (i.e., determination block 604 = "No"), the processing unit may execute the dequeued task in block 612. In some embodiments, the processing unit may be configured to transmit a signal indicating the completion of the dequeued task in optional block 614.
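The list-based look-up described above, which avoids bit-encoded pointers, may be sketched with hypothetical names as a simple search for the version matching the dequeuing unit:

    #include <optional>
    #include <vector>

    struct VersionEntry {
        UnitId unit;     // unit type this version is configured for
        void (*run)();   // entry point of the specialized version
    };

    // Select the specialized version, if any, that matches this unit.
    std::optional<VersionEntry> version_for(
            const std::vector<VersionEntry>& alternatives, UnitId me) {
        for (const auto& v : alternatives)
            if (v.unit == me) return v;
        return std::nullopt;  // no supported version for this unit
    }

If a matching version is found and ownership is then obtained, the unit executes the version and may signal completion as in optional block 614.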
In some embodiments, the completion of the dequeued task may instead be indicated by setting a variable, flag, bit, or other stored data to indicate to the multi-processor computing device (and/or the various processing units) that the dequeued task has been completely executed. [0091] In response to determining that the dequeued task is a specialized version of a multi-versioned task (i.e., determination block 604 = "Yes"), the processing unit may determine whether the multi-versioned task is already owned and/or has been completed (and thus the entire multi-versioned task has been completed) by another processing unit in optional determination block 606. In other words, the processing unit may determine whether the specialized version from the corresponding task queue is still valid and should be executed by the processing unit if ownership can be acquired. If ownership of the multi-versioned task cannot be acquired by the processing unit, the multi-versioned task has been completed or is being completed by another processing unit, and thus the processing unit may simply discard the task and continue to process other tasks. The processing unit may determine whether the multi-versioned task has been completed in various manners, such as by evaluating a bit, variable, register, and/or other stored information indicating the status of the multi-versioned task, transmitting an update request to a runtime functionality, and/or evaluating whether information stored in association with the dequeued specialized version indicates a completed status. [0092] In response to determining that the multi-versioned task is already owned and/or has been completed (i.e., optional determination block 606 = "Yes"), the processing unit may discard data for the unexecuted, dequeued task (representing the specialized version assigned to the processing unit) in block 607 and return to the dequeuing operations in block 602. [0093] In response to determining that no other specialized version of the multi-versioned task has been completed by another processing unit (i.e., optional determination block 606 = "No"), the multi-versioned task is not yet owned, and thus the processing unit may still be able to execute a corresponding specialized version. Thus, in block 608, the processing unit may request ownership of the multi-versioned task, such as by transmitting a request signal to a runtime functionality configured to schedule and/or manage multi-versioned tasks. An exemplary call for requesting ownership is illustrated in FIG. 7 (e.g., code section 706). [0094] In determination block 610, the processing unit may determine whether ownership of the multi-versioned task has been acquired by the processing unit. For example, the processing unit may continually monitor an incoming message buffer and/or a stored bit, variable, register, and/or other information in order to determine whether a response from a runtime functionality configured to schedule and manage multi-versioned tasks has provided ownership of the multi-versioned task to the processing unit. [0095] In response to determining that the ownership has not been acquired by the processing unit (i.e., determination block 610 = "No"), the multi-versioned task (and thus the specialized version assigned to but not executed by the processing unit) may be considered owned by another processing unit, and thus the processor may discard data for the unexecuted, dequeued task in block 607.
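Pulling the preceding hypothetical helpers together, one iteration of the method 600 for a given processing unit may be sketched as:

    #include <deque>

    // One pass of the method 600 for processing unit 'me'.
    void process_next(std::deque<std::uintptr_t>& queue, UnitId me) {
        if (queue.empty()) return;
        std::uintptr_t p = queue.front();     // block 602: dequeue next entry
        queue.pop_front();
        MultiVersionedTask& t = *parent_of(p);
        if (owned_by_another(t, me)) return;  // blocks 606/607: stale, discard
        if (try_acquire(t, me)) {             // blocks 608/610: request ownership
            // block 612: run the version named by alternative_of(p),
            // then finish the multi-versioned task.
            complete_owned_version(t);
        }
        // Ownership denied (block 610 = "No"): the entry is simply dropped.
    }

A unit that is denied ownership thus drops the dequeued entry and proceeds with whatever follows in its queue.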
For example, the processing unit may discard a task pointer for a specialized version of the multi-versioned task. The processing unit may then continue with the dequeuing operations in block 602. [0096] In response to determining that the ownership has been acquired by the processing unit (i.e., determination block 610 = "Yes"), the processing unit may transmit a signal indicating other specialized versions of the multi-versioned task should be disabled when dequeued in block 611. For example, as there is no need to wait until the completion of the processing unit's specialized version to disable the other specialized versions, the signal may cause other processing units to disable enqueued other specialized versions and/or discard dequeued task pointer data for the other specialized versions of the multi-versioned task. [0097] In some embodiments, the processing unit may execute a call to an API command "finish_after()" that causes the multi-versioned task to experience a delayed finish (e.g., the multi-versioned task may not complete until the processing unit fully executes the dequeued task). For example, the processing unit may execute the "finish_after()" command in order to set the lifetime of the multi-versioned task to be connected to the lifetime of the dequeued task and thus cause a runtime functionality to finish the multi-versioned task (and all other specialized versions associated with the multi-versioned task). An exemplary use of such a "finish_after()" command is illustrated in FIG. 7 (e.g., code subsection 708). [0098] The processing unit may continue by executing the dequeued task (i.e., the specialized version of the multi-versioned task) in block 612, transmitting a signal indicating the dequeued task has been completed in optional block 614, and continuing with the dequeuing operations in block 602. [0099] FIG. 7 illustrates exemplary pseudocode 700 representing instructions that may be performed by a processing unit of a multi-processor computing device (e.g., multi-processor computing device 101 of FIGS. 1-3C) to execute specialized versions of multi-versioned tasks according to various embodiments. For example, the pseudocode 700 may represent the instructions of one or more methods, functions, routines, or other logic that are periodically executed by a processing unit (e.g., CPU 102, CPU 112, GPU 122, DSP 132, etc.) in order to process various tasks, multi-versioned or otherwise. [0100] As a non-limiting illustration, the pseudocode 700 may include a first code section 702 for dequeuing pointers from a task queue of the processing unit, wherein masks (e.g., bit masks) may be applied to the pointers to identify a task (e.g., a multi-versioned task identity) and any other identification for the task (e.g., alternative number or version numbers for multi-versioned tasks). A second code section 704 (e.g., an 'if' block) may be performed in response to determining that the identified task has no alternatives (i.e., the task is not a multi-versioned task). A third code section 706 (e.g., an 'else' block) may be performed in response to determining that the identified task has alternatives (i.e., the task is a multi-versioned task), and may include operations for requesting/acquiring ownership over the multi-versioned task. The third code section 706 may include a first code subsection 708 (e.g., an 'if' block) that may be performed by the processing unit in response to determining that ownership of the multi-versioned task has been acquired by the processing unit.
The first code subsection 708 may include operations for configuring the multi-versioned task to end (or conclude) at the completion of a version as well as operations for executing the version. The third code section 706 may also include a second code subsection 710 (e.g., an 'else' block) that may be empty and otherwise not include operations to be performed by the processing unit in response to determining that ownership of the multi-versioned task has not been acquired by the processing unit. For example, when ownership cannot be acquired for a specialized version of a multi-versioned task, the processing unit may not perform a specialized version, but instead discard data for the specialized version and proceed to process other tasks in a task queue. [0101] FIG. 8 illustrates a method 800 performed by a processing unit of a multi-processor computing device to execute priority tasks and specialized versions of multi-versioned tasks. The method 800 may be similar to the method 600 of FIG. 6, except that the method 800 may include additional operations for identifying and processing priority tasks. As described, such priority tasks may be uniquely suited for the processing unit, and thus should be executed by the processing unit prior to expending resources participating in parallel processing of multi-versioned tasks. For example, instead of performing a GPU version of a common multi-versioned task having other versions that can alternatively be executed at a DSP or CPU, a GPU processing unit may execute a rendering task before performing operations to acquire ownership over the multi-versioned task and executing the GPU version. In this manner, the processing unit may participate in parallel-processing policies without undermining native activities of the processing unit. [0102] The operations of blocks 602-614 may be similar to the operations of like-numbered blocks described with reference to FIG. 6. In response to determining that the dequeued task is a specialized version of a multi-versioned task (i.e., determination block 604 = "Yes") and determining that the multi-versioned task is not already owned and/or completed by another processing unit (i.e., optional determination block 606 = "No"), the processing unit may determine whether there are one or more priority tasks in the task queue in determination block 802. For example, the processing unit may analyze the type and function of any other tasks currently enqueued in the task queue to determine whether any correspond to codes, functions, and/or activities predefined as priority for the processing unit. As another example, a GPU processing unit may determine whether any display and/or rendering related tasks are currently enqueued. In some embodiments, priority tasks, multi-versioned tasks, and regular tasks may all be within a same priority queue.
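For that single-queue arrangement, the deferral of blocks 802-806 may be sketched as follows; the priority predicate and the execute callback are hypothetical, device-specific stand-ins (e.g., a GPU might treat rendering work as priority):

    #include <algorithm>
    #include <cstdint>
    #include <deque>

    // Method 800 sketch: run queued priority tasks before requesting
    // ownership for a dequeued specialized version, then push the
    // specialized version back onto the queue (block 806).
    template <typename Pred, typename Exec>
    void defer_for_priority(std::deque<std::uintptr_t>& queue,
                            std::uintptr_t specialized,
                            Pred is_priority, Exec execute) {
        for (auto it = std::find_if(queue.begin(), queue.end(), is_priority);
             it != queue.end();
             it = std::find_if(queue.begin(), queue.end(), is_priority)) {
            std::uintptr_t task = *it;
            queue.erase(it);
            execute(task);  // block 804: execute the priority task
        }
        queue.push_back(specialized);  // block 806
    }

Whether such priority work shares a queue with multi-versioned tasks or resides in its own queue varies by embodiment.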
In some embodiments, priority tasks may be in a different queue than multi-versioned tasks and other tasks, and the processing unit may be configured to process tasks in the priority queue before processing tasks in a non-priority queue. [0103] In response to determining that there are no priority tasks in the task queue (i.e., determination block 802 = "No"), the processing unit may request ownership over the multi-versioned task associated with the dequeued specialized version in block 608. [0104] In response to determining that there is at least one priority task in the task queue (i.e., determination block 802 = "Yes"), the processing unit may postpone processing the dequeued specialized version of the multi-versioned task in order to process the priority task(s) first. In block 804, the processing unit may dequeue and execute the one or more priority tasks. In some embodiments, the processing unit may execute the priority tasks one at a time or alternatively in a batch. In block 806, the processing unit may push the specialized version of the multi-versioned task back onto the task queue. In some embodiments, other tasks that were dequeued to retrieve the priority tasks may also need to be pushed back onto the task queue in order. The multi-processor computing device may dequeue the next task to be executed by the processing unit in block 602. [0105] In some embodiments, the operations of blocks 804-806 may only be performed in response to the processing unit determining that there are no dependencies or restrictions that may require a particular execution order. For example, the processing unit may only dequeue and execute the priority tasks before the specialized version of the multi-versioned task in response to determining that the priority tasks are not dependent upon the multi-versioned task and/or other tasks within the queue that are before the priority tasks. [0106] Various forms of multi-processor computing devices, including personal computers, mobile devices, and laptop computers, may be used to implement the various embodiments. Such computing devices may typically include the components illustrated in FIG. 9, which illustrates an example multi-processor mobile device 900. In various embodiments, the mobile device 900 may include a processor 901 coupled to a touch screen controller 904 and an internal memory 902. The processor 901 may include a plurality of multi-core ICs designated for general or specific processing tasks. In some embodiments, other processing units may also be included and coupled to the processor 901 (e.g., GPU, DSP, etc.). [0107] The internal memory 902 may be volatile and/or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. The touch screen controller 904 and the processor 901 may also be coupled to a touch screen panel 912, such as a resistive-sensing touch screen, capacitive-sensing touch screen, infrared sensing touch screen, etc. The mobile device 900 may have one or more radio signal transceivers 908 (e.g., Bluetooth®, ZigBee®, Wi-Fi®, RF radio) and antennae 910, for sending and receiving, coupled to each other and/or to the processor 901. The transceivers 908 and antennae 910 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile device 900 may include a cellular network wireless modem chip 916 that enables communication via a cellular network and is coupled to the processor 901.
The mobile device 900 may include a peripheral device connection interface 918 coupled to the processor 901. The peripheral device connection interface 918 may be singularly configured to accept one type of connection, or multiply configured to accept various types of physical and communication connections, common or proprietary, such as USB, FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 918 may also be coupled to a similarly configured peripheral device connection port (not shown). The mobile device 900 may also include speakers 914 for providing audio outputs. The mobile device 900 may also include a housing 920, constructed of a plastic, metal, or a combination of materials, for containing all or some of the components discussed herein. The mobile device 900 may include a power source 922 coupled to the processor 901, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile device 900. [0108] Various processors (or processing units) described herein may be any programmable microprocessor, microcomputer or multiple processor chip or chips that may be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described herein. In the various devices, multiple processors may be provided, such as one processor dedicated to wireless communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in internal memory before they are accessed and loaded into the processors. The processors may include internal memory sufficient to store the application software instructions. In many devices the internal memory may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both. For the purposes of this description, a general reference to memory refers to memory accessible by the processors including internal memory or removable memory plugged into the various devices and memory within the processors. [0109] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the order of operations in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular. [0110] The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present claims. [0111] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function. [0112] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a non-transitory processor-readable, computer-readable, or server-readable medium or a non-transitory processor-readable storage medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module or processor-executable software instructions which may reside on a non-transitory computer-readable storage medium, a non-transitory server-readable storage medium, and/or a non-transitory processor-readable storage medium. In various embodiments, such instructions may be stored as processor-executable instructions or stored processor-executable software instructions. Tangible, non-transitory computer-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic data storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc®, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of non-transitory computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a tangible, non-transitory processor-readable storage medium and/or computer-readable medium, which may be incorporated into a computer program product. [0113] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the embodiment techniques of the claims.
Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
Some features pertain to a package that includes an enhanced electromagnetic shield. The package includes a substrate, an electronic component coupled to the substrate, and a mold partially surrounding the electronic component. The package further includes a first shield over the mold, and a second shield over the first shield. One of the first shield or the second shield is a high permeability shield and the remaining first or second shield is a high conductivity shield relative to the high permeability shield. |
1. A package comprising: a substrate; an electronic component coupled to the substrate; a molded part partially surrounding the electronic component and coupled to the substrate; a first shield located over the molded part; and a second shield located over the first shield, wherein one of the first shield or the second shield is a high permeability shield, and the remaining first or second shield is a high conductivity shield relative to the high permeability shield.
2. The package of claim 1, wherein the high permeability shield has a permeability greater than 10 H/m.
3. The package of claim 2, wherein the high permeability shield comprises at least one material selected from the group consisting of ferromagnetic materials, iron, nickel, and manganese, or combinations thereof.
4. The package of claim 1, wherein the electrical conductivity of the high conductivity shield is ten times higher than the electrical conductivity of the high permeability shield.
5. The package of claim 4, wherein the high conductivity shield includes at least one material selected from the group consisting of copper, silver, gold, and aluminum, or a combination thereof.
6. The package of claim 1, further comprising side walls of the molded part, side walls of the first shield, and side walls of the substrate, wherein the second shield is located over the side walls of the molded part, over the side walls of the first shield, and over the side walls of the substrate.
7. The package of claim 6, wherein the side walls of the first shield are located over the side walls of the molded part and over the side walls of the substrate.
8. The package of claim 1, wherein the high permeability shield is configured to have a thickness of about 100 nm to 300 μm.
9. The package of claim 8, wherein the high conductivity shield is configured to have a thickness of about 1 μm to 30 μm.
10. The package of claim 1, wherein the thickness of the high permeability shield and the thickness of the high conductivity shield have a ratio of 1:1.
11. The package of claim 1, wherein a total thickness of the shield comprising the first shield and the second shield is about 1.1 μm to 330 μm.
12. The package of claim 1, wherein the first shield or the second shield is a sputtered shield, a plated shield, or a sprayed shield.
13. The package of claim 1, wherein the high conductivity shield includes at least one material selected from the group consisting of copper, silver, gold, aluminum, and alloys thereof.
14. The package of claim 1, wherein the first shield and the second shield alternate repeatedly to form more than two shields.
15. The package of claim 14, further comprising: a third shield located over the second shield, wherein if the first shield is a high permeability shield, the third shield is a high permeability shield, or if the first shield is a high conductivity shield, the third shield is a high conductivity shield.
16. The package of claim 15, further comprising: a fourth shield located over the second shield and the third shield, wherein if the second shield is a high permeability shield, the fourth shield is a high permeability shield, or if the second shield is a high conductivity shield, the fourth shield is a high conductivity shield.
17. The package of claim 1, wherein the package is incorporated into a device selected from the group consisting of a music player, a video player, an entertainment unit, a navigation device, a communication device, a mobile device, a mobile phone, a smart phone, a personal digital assistant, a fixed location terminal or server, a tablet computer, and a laptop computer.
18. A method of manufacturing an integrated circuit package, comprising: coupling an electronic component to a substrate; applying a molded part to the electronic component and the substrate, the molded part partially surrounding the electronic component and the substrate; coupling a first shield over the molded part; and coupling a second shield over the first shield, wherein one of the first shield or the second shield is a high permeability shield, and the remaining first or second shield is a high conductivity shield relative to the high permeability shield.
19. The method of claim 18, wherein the high permeability shield is made of a material having a permeability greater than 10 H/m.
20. The method of claim 19, wherein the high conductivity shield is made of a material with a conductivity ten times higher than that of the high permeability shield.
21. The method of claim 18, wherein coupling the first shield over the molded part occurs after coupling the electronic component to the molded part.
22. The method of claim 21, wherein the molded part includes molded part side walls, the first shield includes first shield side walls located over the molded part side walls, and the substrate includes substrate side walls, and wherein the second shield is located over the molded part side walls, the first shield side walls, and the substrate side walls.
23. The method of claim 22, wherein the first shield is located over the substrate side walls.
24. The method of claim 18, wherein the high permeability shield is configured to have a thickness of 100 nm to 300 μm.
25. The method of claim 24, wherein the high conductivity shield is configured to have a thickness of 1 μm to 30 μm. |
Integrated circuit package including an enhanced electromagnetic shield

Priority claim
This patent application claims priority to Application No. 16/106,117, filed on August 21, 2018, entitled "INTEGRATED CIRCUIT PACKAGE COMPRISING AN ENHANCED ELECTROMAGNETIC SHIELD," which is assigned to the assignee of this application and is hereby incorporated by reference.

Field
Various features relate to enhanced electromagnetic shields for integrated circuit packages.

Background
Integrated circuits, integrated circuit packages, and electronic devices are continuously being driven toward smaller form factors. A smaller form factor is required so that such devices can be integrated into mobile devices (such as mobile phones, tablet devices, and laptop devices). Integrated circuit packages include several components (such as substrates) and electronic devices (including dies, integrated circuits, and passive devices). These electronic devices require electromagnetic shielding. Electromagnetic shielding protects them from radio frequency interference, electromagnetic fields, and electrostatic fields. Likewise, an electromagnetic shield protects the electronic devices outside the shield from the radio frequency interference, electromagnetic fields, and electrostatic fields generated by the electronic devices in the integrated circuit package. Realizing small form factor electromagnetic shields with improved shielding effectiveness remains a challenge.

Figure 1 illustrates a package that includes a conventional shield. Specifically, FIG. 1 illustrates an integrated circuit (IC) package 100 that includes a substrate 102, electronic components 110 and 112 (e.g., dies or passive components), a molded part 120, and a shield 140. The shield 140 is sputtered onto the molded part 120 so that the thickness of the shield can be kept small. However, one disadvantage is that the sputtering process may result in reduced shielding effectiveness. Another disadvantage is that it may be difficult to realize thin layers where high permeability materials are desired.

Accordingly, there is an industry demand for increased shielding effectiveness while maintaining a small form factor. In other words, there is an industry demand for an electromagnetic shield with increased shielding effectiveness that does not significantly increase the height of the IC package 100.

Overview
Various features relate to enhanced electromagnetic shields for integrated circuit packages.

The first example provides a package that includes a substrate, an electronic component coupled to the substrate, and a molded part partially surrounding the electronic component and coupled to the substrate. The package further includes a first shield located over the molded part and a second shield located over the first shield. One of the first shield or the second shield is a high permeability shield, and the remaining first or second shield is a high conductivity shield relative to the high permeability shield.

The second example provides a method of manufacturing an integrated circuit package. The method includes coupling an electronic component to a substrate, and applying a molded part to the electronic component and the substrate, the molded part partially surrounding the electronic component and the substrate.
The method of manufacturing an integrated circuit package further includes coupling a first shield over the molded part, and coupling a second shield over the first shield. One of the first shield or the second shield is a high permeability shield, and the remaining first or second shield is a high conductivity shield relative to the high permeability shield.

Brief description of the drawings
Various features, natures, and advantages will become apparent from the detailed description set forth below when taken in conjunction with the accompanying drawings, in which like reference characters identify corresponding elements throughout.
Figure 1 illustrates a package that includes a conventional shield.
Figure 2 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield.
Figure 3 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield.
Figure 4 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield.
Figure 5 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield.
Figure 6 illustrates an exemplary flow chart of a method for manufacturing an integrated circuit package including an enhanced electromagnetic shield.
Figure 7 illustrates various electronic devices that may include the various substrates, integrated devices, integrated device packages, semiconductor devices, dies, integrated circuits, and/or packages described herein.

Detailed description
In the following description, specific details are given to provide a thorough understanding of various aspects of the present disclosure. However, those of ordinary skill in the art will understand that these aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams to avoid obscuring these aspects in unnecessary detail. In other instances, well-known circuits, structures, and techniques may not be shown in detail so as not to obscure these aspects of the present disclosure.

Overview
Some features relate to a package that includes an electronic component coupled to a substrate and an enhanced electromagnetic shield. A molded part partially surrounds the electronic component and is coupled to the substrate. A first shield is located over the molded part, and a second shield is located over the first shield. One of the first shield or the second shield is a high permeability shield, and the remaining first or second shield is a high conductivity shield relative to the high permeability shield. The first shield and the second shield are electromagnetic shields configured to reduce electromagnetic interference for the electronic components in the package and for electronic components outside the package.

The molded part includes molded part side walls, the first shield includes first shield side walls, and the substrate includes substrate side walls. The first shield is located over the molded part, including over the molded part side walls and the substrate side walls. The second shield is located over the first shield, including over the first shield side walls.

In a first aspect, the package includes the first shield and the second shield, as described above. In a second aspect, the package includes a third shield located over the second shield (including over the second shield side walls). In a third aspect, the package includes a fourth shield located over the third shield (including over the third shield side walls). In a fourth aspect, the package may include more than four shields, with alternating first and second shielding layers.

In any of the above aspects (i.e., the first to fourth aspects), the first shield may be a high permeability shield. That is, the first shield is made of a material selected to have high magnetic permeability. A high permeability material is a material with a permeability greater than 10 H/m. The first shield may have a higher magnetic permeability relative to the second shield. If the first shield is a high permeability shield, the second shield is a high conductivity shield. That is, the second shield may be made of a highly conductive metal. Electrical conductivity refers to the amount, level, or degree to which a given material conducts electricity; the more readily a material conducts electricity, the higher its conductivity. The second shield is a high conductivity shield relative to the first shield. In one aspect, the conductivity of the second shield is ten times higher than the conductivity of the first shield. In another aspect, the second shield is a high conductivity shield because it has a conductivity greater than 1 × 10^6 S/m. The third shield is optional and is a high permeability shield. The fourth shield is optional and is a high conductivity shield.

Alternatively, in any of the above aspects (i.e., the first to fourth aspects), the first shield may be a high conductivity shield, and the second shield may be a high permeability shield. The optional third shield is then a high conductivity shield, and the optional fourth shield is a high permeability shield.

Integrated circuit package including an enhanced electromagnetic shield
Figure 2 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield. Specifically, FIG. 2 illustrates an integrated circuit (IC) package 200. The IC package 200 includes a substrate 202, electronic components 210 and 212, a molded part 220, a first shield 232, and a second shield 240. The first shield 232 is a high conductivity shield, and the second shield 240 is a high permeability shield. It should be understood that FIG. 2 is a simplified drawing of the IC package 200. The IC package 200 may include additional elements not shown, such as dielectric layers, passivation layers, metal layers, and electronic components embedded in the substrate 202 or in the silicon substrate of one of the electronic components 210.

The substrate 202 may be a package substrate. Alternatively, at least one of the electronic components 210 or 212, together with the substrate 202, may comprise a wafer-level package. The substrate 202 includes substrate side walls, and may be coupled to ground.

The electronic component 210 may be an IC, a die, a passive device, or any other type of electronic component, and likewise for the electronic component 212. The IC package 200 may have only a single electronic component (for example, one of 210 or 212), or may have many electronic components.

The molded part 220 partially surrounds the electronic components 210 and/or 212 and is coupled to the substrate 202.
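Before continuing with the geometry of the molded part 220 and the shields, it is worth pausing on why a high permeability layer and a high conductivity layer complement each other. A common first-order lens is the skin-depth relation δ = 1/√(π·f·μ0·μr·σ). The Python sketch below is a minimal illustration only, not part of this disclosure; the material constants (copper for the conductive layer, a generic constant-μr ferromagnetic alloy for the permeable layer) are assumed values chosen for illustration.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def skin_depth(freq_hz, mu_r, sigma):
    """Skin depth in meters: delta = 1 / sqrt(pi * f * mu0 * mu_r * sigma)."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * mu_r * sigma)

# Assumed illustrative materials (not values taken from this disclosure):
copper = {"mu_r": 1.0, "sigma": 5.8e7}     # high conductivity layer
ferro = {"mu_r": 1000.0, "sigma": 1.0e6}   # high permeability layer

for f in (1e6, 100e6, 3e9, 12e9):  # spans the 1 MHz to 12 GHz range discussed later
    d_cu = skin_depth(f, **copper) * 1e6   # convert to micrometers
    d_fe = skin_depth(f, **ferro) * 1e6
    print(f"{f / 1e6:>8.0f} MHz: copper delta = {d_cu:8.2f} um, ferro delta = {d_fe:8.2f} um")
```

Under these assumptions, the permeable layer's skin depth at 1 MHz (roughly 16 μm) is far smaller than copper's (roughly 66 μm), so a permeable layer in the 100 nm to 300 μm range can absorb low-frequency fields that a 1 μm to 30 μm copper layer barely attenuates; near 3 GHz, copper's skin depth falls to about 1 μm and the thin conductive layer becomes effective, consistent with the frequency split described later in this disclosure.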
The molded part 220 has a top side, and a first molded part side wall, a second molded part side wall, a third molded part side wall, and a fourth molded part side wall (collectively referred to as the molded part side walls). The molded part 220 may include one or more of the following materials: epoxy resin and fused silica filler, or any other organic filler material, but is not limited thereto. For example, the molded part 220 may be any material that can be deposited, formed, or molded over the electronic components 210 and/or 212 and that provides mechanical support and environmental protection for the IC package 200 and the electronic components 210 and/or 212.

The first shield 232 is located over the molded part 220 and, in one aspect, may be directly coupled to the molded part 220. The first shield 232 has a first shield top side 232a, and a first first-shield side wall 232b, a second first-shield side wall 232c, a third first-shield side wall (not shown in this view), and a fourth first-shield side wall (not shown in this view) (collectively referred to as the first shield side walls). In one aspect, the first shield 232 is located over the top side of the molded part 220 and over the molded part side walls. In another aspect, the first shield 232 is located directly on the top side of the molded part 220 and on the molded part side walls. In any of the above aspects, the first shield 232 is over the substrate side walls.

The first shield 232 is a high conductivity shield and may be made of a highly conductive metal. Electrical conductivity refers to the amount, level, or degree to which a given material conducts electricity; the more readily a material conducts electricity, the higher its conductivity. The first shield 232 is a high conductivity shield relative to the second shield 240. In other words, the first shield 232 has a higher conductivity than the second shield 240. In one aspect, the conductivity of the first shield 232 is ten times higher than the conductivity of the second shield 240. In another aspect, the first shield 232 is a high conductivity shield because it has a conductivity greater than 1 × 10^6 S/m.

The first shield 232 includes at least one of the following materials: copper (Cu), silver (Ag), gold (Au), aluminum (Al), an alloy of any of these materials, or any combination of these materials.

The first shield 232 has a length, a width, and a height. The length of the first shield 232 may be measured on the X axis. The width of the first shield 232 may be measured on the Y axis (i.e., coming out of the page). The height of the first shield 232 may be measured on the Z axis (i.e., measured vertically). For example, the height of the first shield 232 may be measured by the height of the first shield side walls (for example, from the bottom of the substrate 202 to the first shield top side 232a). The length, width, and height of the first shield 232 can be determined by those skilled in the art. For example, the length, width, and height of the first shield 232 may be large enough to cover the electronic components 210 and 212, and/or may be large enough to cover the substrate 202.

The first shield 232 has a thickness, which may be defined as the depth of the first shield 232. For example, the first shield top side 232a has a thickness, and each of the first shield side walls (232b, 232c, and the two side walls not shown) has a thickness, which may be the same as or different from one another. To keep the form factor of the IC package 200 small, the thickness of the first shield 232 may be kept small. In one aspect, the thickness of the first shield 232 may be in the range of about 1 μm to 30 μm. In another aspect, the thickness of the first shield 232 may be approximately equal to the thickness of the second shield 240 (for example, the two thicknesses may have a ratio of 1:1). In another aspect, the thickness of the first shield 232 may be greater than the thickness of the second shield 240.

The second shield 240 is located over the first shield 232. The second shield 240 has a second shield top side 240a, a first second-shield side wall 240b, a second second-shield side wall 240c, a third second-shield side wall (not shown in this view), and a fourth second-shield side wall (not shown in this view) (collectively referred to as the second shield side walls). The second shield 240 may encapsulate the first shield 232, the molded electronic components 210 and 212, and the substrate 202.

In one aspect, the second shield 240 may be directly or indirectly coupled to the first shield 232 (for example, with an intervening material). The second shield 240 is located over the first shield side walls and over the molded part side walls, so that the second shield 240 surrounds the IC package 200. Specifically, the second shield top side 240a is located over the first shield top side 232a, and the second shield side walls (for example, 240b and 240c) are located over the first shield side walls (for example, 232b and 232c) and over the side walls of the substrate 202. That is, the second shield 240 is located over the substrate side walls and is coupled to ground via the substrate 202 (i.e., grounded through the substrate 202).

The second shield 240 is a high permeability shield and may be made of a high permeability metal. Permeability refers to the ability of a material to attract and conduct magnetic flux lines; the more strongly a material conducts a magnetic field, the higher its permeability. In one aspect, the material may have a magnetic permeability greater than 10 H/m. The second shield 240 may include a ferromagnetic material. The second shield 240 may include any one of, a combination of, or an alloy of the following materials: ferromagnetic materials, ferromagnetic alloys, iron (Fe), nickel (Ni), or manganese (Mn), but is not limited thereto. The second shield 240 may include copper as a part of a ferromagnetic alloy.

The second shield 240 has a length, a width, and a height. The length of the second shield 240 may be measured on the X axis, and the width on the Y axis (i.e., coming out of the page). The length and width of the second shield 240 can be determined by those skilled in the art. For example, the length and width of the second shield 240 may be large enough to cover the electronic components 210 and 212, or may be large enough to cover the substrate 202 and the first shield 232. The height of the second shield 240 may be measured on the Z axis (i.e., measured vertically), as the distance from the bottom of the substrate 202 to the second shield top side 240a.

The second shield 240 has a thickness, which may be defined as the depth of the second shield 240. For example, the second shield top side 240a has a thickness, and each of the second shield side walls (240b, 240c, and the two side walls not shown) has a thickness, which may be the same as or different from one another. To keep the form factor of the IC package 200 small, the thickness of the second shield 240 may be kept small. In one aspect, the thickness of the second shield 240 may be in the range of about 100 nm to 300 μm. In another aspect, the thickness of the second shield 240 may be about 100 μm. In another aspect, the thickness of the second shield 240 may be equal to the thickness of the first shield 232 (for example, at a ratio of 1:1). In another aspect, the thickness of the second shield 240 may be less than the thickness of the first shield 232.

In one aspect, the first shield 232 and the second shield 240 together may have a total shield thickness of about 1.1 μm to 330 μm.

Figure 3 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield. FIG. 3 is similar to FIG. 2, except that the IC package 300 includes a third shield 344 located over the second shield 340 and a fourth shield 346 located over the third shield 344. The third shield 344 is a high conductivity shield, and the fourth shield 346 is a high permeability shield. The IC package 300 also includes a substrate 302 (for example, a package substrate) and a molded part 320 that partially surrounds electronic components (such as 310 and 312).

The molded part 320 includes molded part side walls, the first shield 332 includes first shield side walls, the second shield 340 includes second shield side walls, the third shield 344 includes third shield side walls, the fourth shield 346 includes fourth shield side walls, and the substrate 302 includes substrate side walls. Similar to FIG. 2, the fourth shield 346 is located over the top of the third shield 344 and over the third shield side walls. The third shield 344 is located over the top of the second shield 340 and over the second shield side walls. The second shield 340 is located over the top of the first shield 332 and over the first shield side walls. The first shield 332 is located over the top of the molded part 320, over the molded part side walls, and over the substrate side walls.

It should be understood that although FIG. 3 illustrates a total of four shields (i.e., the first shield 332, the second shield 340, the third shield 344, and the fourth shield 346), the IC package 300 is not limited thereto. In one aspect, the fourth shield 346 is not included, so the third shield 344 is the outermost shield. In other aspects, there are more than four shields; the alternating arrangement is described below.
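As a quick aside before describing the alternating arrangement, the total shield thickness of about 1.1 μm to 330 μm quoted above is simply the sum of the per-layer ranges (about 100 nm to 300 μm for the high permeability shield, and about 1 μm to 30 μm for the high conductivity shield). The sketch below is only an illustrative check of that arithmetic; the helper name total_thickness_range is hypothetical, and the four-layer figure is an extrapolation rather than a value stated in this disclosure.

```python
# Illustrative check that the per-layer thickness ranges combine to
# the quoted total of about 1.1 um to 330 um for a two-layer shield.
HIGH_PERM_RANGE_UM = (0.1, 300.0)  # 100 nm to 300 um
HIGH_COND_RANGE_UM = (1.0, 30.0)   # 1 um to 30 um

def total_thickness_range(layer_ranges):
    """Sum the minima and the maxima of each (min, max) layer range, in um."""
    low = sum(r[0] for r in layer_ranges)
    high = sum(r[1] for r in layer_ranges)
    return low, high

low, high = total_thickness_range([HIGH_PERM_RANGE_UM, HIGH_COND_RANGE_UM])
print(f"two-shield stack:  {low:.1f} um to {high:.1f} um")    # 1.1 um to 330.0 um

# Hypothetical four-shield stack (the alternating pair repeated once):
low4, high4 = total_thickness_range(2 * [HIGH_PERM_RANGE_UM, HIGH_COND_RANGE_UM])
print(f"four-shield stack: {low4:.1f} um to {high4:.1f} um")  # 2.2 um to 660.0 um
```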
The arrangement of the first shield 332 (i.e., the high conductivity shield) over the second shield 340 (i.e., the high permeability shield) may be repeated alternately. For example, a fifth shield (not shown) may be located over the fourth shield 346, where the fifth shield is a high conductivity shield. Optionally, a sixth shield (not shown) may be located over the fifth shield; the sixth shield may be a high permeability shield.

Figure 4 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield. Specifically, FIG. 4 illustrates an integrated circuit (IC) package 400. The IC package 400 includes a substrate 402, electronic components 410 and 412, a molded part 420, a first shield 432, and a second shield 440. The first shield 432 is a high permeability shield, and the second shield 440 is a high conductivity shield. It should be understood that FIG. 4 is a simplified drawing of the IC package 400. The IC package 400 may include additional elements not shown, such as dielectric layers, passivation layers, metal layers, and electronic components embedded in the substrate 402 or in the silicon substrate of one of the electronic components 410.

The substrate 402 may be a package substrate. Alternatively, at least one of the electronic components 410 or 412, together with the substrate 402, may comprise a wafer-level package. The substrate 402 includes substrate side walls, and may be coupled to ground.

The electronic component 410 may be an IC, a die, a passive device, or any other type of electronic component, and likewise for the electronic component 412. The IC package 400 may have only a single electronic component (for example, one of 410 or 412), or may have many electronic components.

The molded part 420 partially surrounds the electronic components 410 and/or 412 and is coupled to the substrate 402. The molded part 420 has a top side, and a first molded part side wall, a second molded part side wall, a third molded part side wall, and a fourth molded part side wall (collectively referred to as the molded part side walls). The molded part 420 may include one or more of the following materials: epoxy resin and fused silica filler, or any other organic filler material, but is not limited thereto. For example, the molded part 420 may be any material that can be deposited, formed, or molded over the electronic components 410 and/or 412 and that provides mechanical support and environmental protection for the IC package 400 and the electronic components 410 and/or 412.

The first shield 432 is located over the molded part 420 and, in one aspect, may be directly coupled to the molded part 420. The first shield 432 has a first shield top side 432a, and a first first-shield side wall 432b, a second first-shield side wall 432c, a third first-shield side wall (not shown in this view), and a fourth first-shield side wall (not shown in this view) (collectively referred to as the first shield side walls). In one aspect, the first shield 432 is located over the top side of the molded part 420 and over the molded part side walls. In another aspect, the first shield 432 is located directly on the top side of the molded part 420 and on the molded part side walls. In any of the above aspects, the first shield 432 is over the substrate side walls.

The first shield 432 is a high permeability shield.
The first shield 432 may be made of a metal with high magnetic permeability. Permeability refers to the ability of a material to attract and conduct magnetic flux lines; the more strongly a material conducts a magnetic field, the higher its permeability. In one aspect, the material may have a magnetic permeability greater than 10 H/m. The first shield 432 may include a ferromagnetic material. The first shield 432 may include any one of, a combination of, or an alloy of the following materials: ferromagnetic materials, iron (Fe), nickel (Ni), or manganese (Mn), but is not limited thereto. The first shield 432 may include copper as a part of a ferromagnetic alloy.

The first shield 432 has a length, a width, and a height. The length of the first shield 432 may be measured on the X axis, and the width on the Y axis (i.e., coming out of the page). The length and width of the first shield 432 can be determined by those skilled in the art. For example, the length and width of the first shield 432 may be large enough to cover the electronic components 410 and 412, or may be large enough to cover the substrate 402. The height of the first shield 432 may be measured on the Z axis (i.e., measured vertically), as the distance from the bottom of the substrate 402 to the first shield top side 432a.

The first shield 432 has a thickness, which may be defined as the depth of the first shield 432. For example, the first shield top side 432a has a thickness, and each of the first shield side walls (432b, 432c, and the two side walls not shown) has a thickness, which may be the same as or different from one another. To keep the form factor of the IC package 400 small, the thickness of the first shield 432 may be kept small. In one aspect, the thickness of the first shield 432 may be in the range of about 100 nm to 300 μm. In another aspect, the thickness of the first shield 432 may be about 100 μm. In another aspect, the thickness of the first shield 432 may be equal to the thickness of the second shield 440 (for example, at a ratio of 1:1). In another aspect, the thickness of the first shield 432 may be less than the thickness of the second shield 440.

The second shield 440 is located over the first shield 432. The second shield 440 has a second shield top side 440a, a first second-shield side wall 440b, a second second-shield side wall 440c, a third second-shield side wall (not shown in this view), and a fourth second-shield side wall (not shown in this view) (collectively referred to as the second shield side walls). The second shield 440 may encapsulate the first shield 432, the molded electronic components 410 and 412, and the substrate 402.

In one aspect, the second shield 440 may be directly or indirectly coupled to the first shield 432 (for example, with an intervening material). The second shield 440 is located over the first shield side walls and over the molded part side walls, so that the second shield 440 surrounds the IC package 400. Specifically, the second shield top side 440a is located over the first shield top side 432a, and the second shield side walls (for example, 440b and 440c) are located over the first shield side walls (for example, 432b and 432c) and over the side walls of the substrate 402. That is, the second shield 440 is located over the substrate side walls and is coupled to ground via the substrate 402 (i.e., grounded through the substrate 402).

The second shield 440 is a high conductivity shield and may be made of a highly conductive metal. Electrical conductivity refers to the amount, level, or degree to which a given material conducts electricity; the more readily a material conducts electricity, the higher its conductivity. The second shield 440 is a high conductivity shield relative to the first shield 432. In other words, the second shield 440 has a higher conductivity than the first shield 432. In one aspect, the conductivity of the second shield 440 is ten times higher than the conductivity of the first shield 432. In another aspect, the second shield 440 is a high conductivity shield because it has a conductivity greater than 1 × 10^6 S/m.

The second shield 440 includes at least one of the following materials: copper (Cu), silver (Ag), gold (Au), aluminum (Al), an alloy of any of these materials, or any combination of these materials.

The second shield 440 has a length, a width, and a height. The length of the second shield 440 may be measured on the X axis, and the width on the Y axis (i.e., coming out of the page). The height of the second shield 440 may be measured on the Z axis (i.e., measured vertically), as the distance from the bottom of the substrate 402 to the second shield top side 440a. The length, width, and height of the second shield 440 can be determined by those skilled in the art. For example, the length, width, and height of the second shield 440 may be large enough to cover the electronic components 410 and 412, and/or may be large enough to cover the substrate 402.

The second shield 440 has a thickness, which may be defined as the depth of the second shield 440. For example, the second shield top side 440a has a thickness, and each of the second shield side walls (440b, 440c, and the two side walls not shown) has a thickness, which may be the same as or different from one another. To keep the form factor of the IC package 400 small, the thickness of the second shield 440 may be kept small. In one aspect, the thickness of the second shield 440 may be in the range of about 1 μm to 30 μm. In another aspect, the thickness of the second shield 440 may be approximately equal to the thickness of the first shield 432 (for example, the two thicknesses may have a ratio of 1:1). In another aspect, the thickness of the second shield 440 may be greater than the thickness of the first shield 432.

In one aspect, the first shield 432 and the second shield 440 together may have a total shield thickness of about 1.1 μm to 330 μm.

Figure 5 illustrates a side view of an integrated circuit package including an enhanced electromagnetic shield. FIG. 5 is similar to FIG. 4, except that the IC package 500 includes a third shield 544 located over the second shield 540 and a fourth shield 546 located over the third shield 544. The third shield 544 is a high permeability shield, and the fourth shield 546 is a high conductivity shield. The IC package 500 also includes a substrate 502 (for example, a package substrate) and a molded part 520 that partially surrounds electronic components (such as 510 and 512).

The molded part 520 includes molded part side walls, the first shield 532 includes first shield side walls, the second shield 540 includes second shield side walls, the third shield 544 includes third shield side walls, the fourth shield 546 includes fourth shield side walls, and the substrate 502 includes substrate side walls. Similar to FIG. 4, the fourth shield 546 is located over the top of the third shield 544 and over the third shield side walls. The third shield 544 is located over the top of the second shield 540 and over the second shield side walls. The second shield 540 is located over the top of the first shield 532 and over the first shield side walls. The first shield 532 is located over the top of the molded part 520, over the molded part side walls, and over the substrate side walls.

It should be understood that although FIG. 5 illustrates a total of four shields (i.e., the first shield 532, the second shield 540, the third shield 544, and the fourth shield 546), the IC package 500 is not limited thereto. In one aspect, the fourth shield 546 is not included, so the third shield 544 is the outermost shield. In other aspects, there are more than four shields. The arrangement of the first shield 532 (i.e., the high permeability shield) over the second shield 540 (i.e., the high conductivity shield) may be repeated alternately. For example, a fifth shield (not shown) may be located over the fourth shield 546, where the fifth shield is a high permeability shield. Optionally, a sixth shield (not shown) may be located over the fifth shield; the sixth shield may be a high conductivity shield.

Compared with conventional electromagnetic conformal shielding, the disclosed integrated circuit packages 200, 300, 400, and 500 with enhanced electromagnetic shields have high shielding effectiveness over a wide frequency range covering 1 MHz to 12 GHz. For example, the high permeability shields (e.g., 240, 340, 346, 432, 532, and 544) improve shielding effectiveness in the lower frequency range below 3 GHz, while the high conductivity shields (e.g., 232, 332, 344, 440, 540, and 546) contribute at higher frequencies above 3 GHz.

Exemplary flow chart of a method for manufacturing an integrated circuit package including an enhanced electromagnetic shield
FIG. 6 illustrates an exemplary flow chart of a method for manufacturing an integrated circuit package including an enhanced electromagnetic shield. It should be noted that, for clarity and simplicity, the flow chart of FIG. 6 does not necessarily include all the steps of manufacturing a substrate that includes one or more embedded interconnects. In addition, in some instances, several steps may have been combined into a single step to simplify the description of the process.

As shown in FIG. 6, at step 602, the method includes coupling an electronic component to a substrate. The substrate may be a package substrate. Alternatively, the electronic component and the substrate together may comprise a wafer-level package. The substrate includes substrate side walls.
The substrate may be coupled to ground.

At step 604, the method includes applying a molded part to the electronic component and the substrate, the molded part partially surrounding the electronic component and the substrate. The molded part may include one or more of the following materials: epoxy resin and fused silica filler, or any other organic filler material, but is not limited thereto. For example, the molded part may be any material that can be deposited, formed, or molded over the electronic component and that provides mechanical support and environmental protection for the IC package and the electronic component. Applying the molded part may include an overmolding process and, optionally, an under-mold process.

At step 606, the method includes coupling a first shield over the molded part. The coupling of the first shield occurs after step 604. The first shield is coupled over the molded part by any of the following methods: plating, sputtering, or spraying. In other words, the first shield may be a sputtered shield, a plated shield, or a sprayed shield. Compression molding can also be used.

At step 608, the method includes coupling a second shield over the first shield, where the first shield is a high permeability shield. The second shield is coupled over the first shield by any of the following methods: plating, sputtering, or spraying. Compression molding can also be used.

Exemplary electronic equipment
FIG. 7 illustrates various electronic devices that can be integrated with any of the aforementioned integrated circuit packages including enhanced electromagnetic shields. For example, a mobile phone device 702, a laptop computer device 704, a fixed location terminal device 706, and a wearable device 708 may each include an integrated device 700 as described herein. The integrated device 700 may be, for example, any of the substrates, integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, device packages, integrated circuit (IC) packages, and package-on-package devices described herein. The devices 702, 704, 706, and 708 illustrated in FIG. 7 are merely exemplary. Other electronic devices may also feature the integrated device 700, including, but not limited to, mobile devices, handheld personal communication system (PCS) units, portable data units (such as personal digital assistants), GPS-enabled devices, navigation devices, set-top boxes, music players, video players, entertainment units, fixed location data units (such as meter reading equipment), communication devices, smart phones, tablet computers, computers, wearable devices (for example, watches and glasses), Internet of Things (IoT) devices, servers, routers, motor vehicles (for example, autonomous vehicles), and any other device that stores or retrieves data or computer instructions, or any combination thereof.

One or more of the components, processes, features, and/or functions illustrated in FIGS. 2 to 6 may be rearranged and/or combined into a single component, process, feature, or function, or embodied in several components, processes, or functions. Additional elements, components, processes, and/or functions may also be added without departing from the present disclosure.
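The frequency split described above, with the high permeability layers dominating below about 3 GHz and the high conductivity layers above, can be related to the absorption-loss term of classical shielding theory, SE_A ≈ 8.686·t/δ dB, where t is the layer thickness and δ the skin depth. The sketch below is a rough illustration under assumed material constants and layer thicknesses; it is not a characterization of the disclosed packages, and it deliberately ignores the roll-off of ferromagnetic permeability at GHz frequencies as well as the reflection-loss term.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def absorption_loss_db(freq_hz, mu_r, sigma, t_m):
    """Classical absorption-loss estimate SE_A ~= 8.686 * t / delta, in dB."""
    delta = 1.0 / math.sqrt(math.pi * freq_hz * MU0 * mu_r * sigma)
    return 8.686 * t_m / delta

# Assumed illustrative layers (values are not taken from this disclosure):
layers = {
    "permeable 100 um": dict(mu_r=1000.0, sigma=1.0e6, t_m=100e-6),
    "conductive 10 um": dict(mu_r=1.0, sigma=5.8e7, t_m=10e-6),
}

for f in (1e6, 100e6, 3e9):
    row = ", ".join(
        f"{name}: {absorption_loss_db(f, **params):7.1f} dB"
        for name, params in layers.items()
    )
    print(f"{f / 1e6:>8.0f} MHz -> {row}")
```

At 1 MHz the assumed 100 μm permeable layer already contributes tens of dB of absorption while the thin conductive layer contributes little, which mirrors the low-frequency behavior attributed to the high permeability shields above; at GHz frequencies the constant-μr assumption overstates the permeable layer, and in practice the conductive layer's small skin depth and high reflection loss take over.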
In some implementations, a device may include a die, an integrated device, a die package, an integrated circuit (IC), a device package, an integrated circuit (IC) package, a wafer, a semiconductor device, a package-on-package (PoP) device, and/or an interposer.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of this disclosure. Likewise, the term "aspect" does not require that all aspects of the present disclosure include the discussed feature, advantage, or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically contacts object B, and object B contacts object C, then objects A and C may still be considered coupled to each other, even if they do not directly physically contact each other. The term "passing through" as used herein means traversing, and includes traversing the entire object or traversing a portion of the object.

It is also noted that the various disclosures contained herein may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process terminates when its operations are completed.

The various features of the present disclosure described herein can be implemented in different systems without departing from the present disclosure. It should be noted that the above aspects of the present disclosure are only examples and should not be construed as limiting the present disclosure. The description of the various aspects of the present disclosure is intended to be illustrative, not to limit the scope of the appended claims. Thus, the present teachings can be readily applied to other types of devices, and many alternatives, modifications, and variations will be apparent to those skilled in the art. |
A hybrid capacitor associated with an integrated circuit package provides multiple levels of excess, off-chip capacitance to die loads. The hybrid capacitor includes a low inductance, parallel plate capacitor embedded within the package, and electrically connected to a second source of off-chip capacitance. The parallel plate capacitor is disposed underneath a die, and includes a top conductive layer, a bottom conductive layer, and a thin dielectric layer that electrically isolates the top and bottom layers. The second source of off-chip capacitance is a set of self-aligned via capacitors, and/or one or more discrete capacitors, and/or an additional parallel plate capacitor. Each of the self-aligned via capacitors is embedded within the package, and has an inner conductor and an outer conductor. The inner conductor is electrically connected to either the top or bottom conductive layer, and the outer conductor is electrically connected to the other conductive layer. The discrete capacitors are electrically connected to contacts from the conductive layers to the surface of the package. During operation, one of the conductive layers of the low inductance parallel plate capacitor provides a ground plane, while the other conductive layer provides a power plane. |
What is claimed is:
1. A method for fabricating a hybrid capacitor within an integrated circuit package, the method comprising: fabricating a parallel plate capacitor embedded within a first layer of the package, wherein the parallel plate capacitor includes a first conductive layer, a second conductive layer, and a dielectric layer that electrically isolates the first conductive layer from the second conductive layer; and fabricating and electrically connecting a second source of capacitance to the parallel plate capacitor, wherein the second source of capacitance is formed in a second layer of the package, and is formed from a set of self-aligned via capacitors embedded within the second layer, and each of the self-aligned via capacitors includes an inner conductor electrically connected to the first conductive layer, and an outer conductor electrically connected to the second conductive layer.
2. The method as claimed in claim 1, further comprising: forming contacts to a surface of the integrated circuit package, wherein at least one of the contacts electrically connects the first conductive layer to the surface and at least one other of the contacts electrically connects the second conductive layer to the surface; and wherein electrically connecting the second source of capacitance comprises electrically connecting one or more discrete capacitors to the contacts.
3. The method as claimed in claim 2, wherein forming the contacts comprises forming the contacts around a perimeter of an area where an integrated circuit will be electrically connected to the package.
4. The method as claimed in claim 1, wherein fabricating the parallel plate capacitor comprises: forming a bottom conductor from a conductive material, wherein the bottom conductor can act as the first conductive layer or the second conductive layer; forming the dielectric layer on the bottom conductor; and forming a top conductor by plating a top surface of the dielectric layer, wherein the top conductor can act as the second conductive layer or the first conductive layer.
5. The method as claimed in claim 4, wherein forming the dielectric layer comprises forming a dielectric layer having a thickness within a range of about 0.3 to 5 microns, and the dielectric layer has a dielectric constant in a range of about 1 to 1000.
6. The method as claimed in claim 1, further comprising electrically connecting a third capacitor to the parallel plate capacitor, wherein the third capacitor is an off-chip capacitor with a third conductor electrically connected to the first conductive layer, and a fourth conductor electrically connected to the second conductive layer.
7. The method as claimed in claim 1, further comprising: forming contacts to a surface of the integrated circuit package, wherein at least one of the contacts electrically connects the first conductive layer to the surface, and at least one other of the contacts electrically connects the second conductive layer to the surface; and electrically connecting an integrated circuit to the contacts.
8. A method for fabricating a hybrid capacitor within an integrated circuit package, the method comprising: fabricating a parallel plate capacitor embedded within a first layer of the package, wherein the parallel plate capacitor includes a first conductive layer, a second conductive layer, and a dielectric layer that electrically isolates the first conductive layer from the second conductive layer; fabricating a second source of capacitance to the parallel plate capacitor, wherein the second source of capacitance includes a first conductor electrically connected to the first conductive layer, and a second conductor electrically connected to the second conductive layer; and electrically connecting the second source of capacitance to the parallel plate capacitor by fabricating a second layer of the package, wherein the second source of capacitance is formed from a set of self-aligned via capacitors embedded within the second layer, each of the self-aligned via capacitors having an inner conductor and an outer conductor, electrically connecting the inner conductor to the first conductive layer, and electrically connecting the outer conductor to the second conductive layer.
9. The method as claimed in claim 8, wherein fabricating the second layer comprises: forming the outer conductor by forming a layer of conductive material on sidewalls and at least a portion of a top surface of a substrate, wherein one or more holes are formed in the substrate, and the one or more holes are defined by the sidewalls; forming a dielectric layer within the one or more holes on at least a portion of the outer conductor that covers the sidewalls; and forming the inner conductor within the one or more holes by depositing a conductive material within an area within the one or more holes that is not filled by the dielectric layer.
10. A method for fabricating a hybrid capacitor within an integrated circuit package, the method comprising: fabricating a parallel plate capacitor embedded within a first layer of the package, wherein the parallel plate capacitor includes a first conductive layer, a second conductive layer, and a dielectric layer that electrically isolates the first conductive layer from the second conductive layer; fabricating and electrically connecting a second source of capacitance to the parallel plate capacitor, wherein the second source of capacitance includes a first conductor electrically connected to the first conductive layer, and a second conductor electrically connected to the second conductive layer; and electrically connecting a third capacitor to the parallel plate capacitor, wherein the third capacitor is an off-chip capacitor with a third conductor electrically connected to the first conductive layer, and a fourth conductor electrically connected to the second conductive layer, wherein the second capacitor comprises one or more discrete capacitors, and the third capacitor comprises one or more self-aligned via capacitors embedded within a second layer of the package.
11. A method for fabricating a hybrid capacitor configuration within an integrated circuit package, the method comprising: fabricating a parallel plate capacitor embedded within a first layer of the package, wherein the parallel plate capacitor includes a first conductive layer, a second conductive layer, and a dielectric layer that electrically isolates the first conductive layer from the second conductive layer; fabricating a second layer of the package within which a set of self-aligned via capacitors is embedded, each of the self-aligned via capacitors having an inner conductor and an outer conductor; electrically connecting the inner conductor to the first conductive layer; and electrically connecting the outer conductor to the second conductive layer.
12. The method as claimed in claim 11, wherein fabricating the second layer of the package comprises: forming the outer conductor by plating sidewalls and at least a portion of a top surface of a substrate with a layer of conductive material, wherein the substrate includes one or more holes defined by the sidewalls; forming a dielectric layer within the one or more holes on at least a portion of the outer conductor that covers the sidewalls; and forming an inner conductor within the one or more holes by depositing a conductive material within an area within the one or more holes that is not filled by the dielectric layer.
13. The method as claimed in claim 11, further comprising: forming contacts to a surface of the integrated circuit package, wherein at least one of the contacts electrically connects the first conductive layer to the surface, and at least one other of the contacts electrically connects the second conductive layer to the surface; and electrically connecting one or more discrete capacitors to the contacts.
14. A method for fabricating a hybrid capacitor configuration within an integrated circuit package, the method comprising: fabricating a parallel plate capacitor embedded within a first layer of the package, wherein the parallel plate capacitor includes a first conductive layer, a second conductive layer, and a dielectric layer that electrically isolates the first conductive layer from the second conductive layer; forming contacts to a surface of the integrated circuit package, wherein at least one of the contacts electrically connects the first conductive layer to the surface, and at least one other of the contacts electrically connects the second conductive layer to the surface; and electrically connecting one or more discrete capacitors to the contacts.
15. The method as claimed in claim 14, wherein forming the contacts comprises forming the contacts around a perimeter of an area where an integrated circuit will be electrically connected to the package.
16. A method for fabricating a hybrid capacitor configuration within an integrated circuit package, the method comprising: fabricating a parallel plate capacitor embedded within a first layer of the package, wherein the parallel plate capacitor includes a first conductive layer, a second conductive layer, and a dielectric layer that electrically isolates the first conductive layer from the second conductive layer; forming contacts to a surface of the integrated circuit package, wherein at least one of the contacts electrically connects the first conductive layer to the surface, and at least one other of the contacts electrically connects the second conductive layer to the surface; electrically connecting one or more discrete capacitors to the contacts; fabricating a second layer of the package within which a set of self-aligned via capacitors is embedded, each of the self-aligned via capacitors having an inner conductor and an outer conductor; electrically connecting the inner conductor to the first conductive layer; and electrically connecting the outer conductor to the second conductive layer. |
TECHNICAL FIELD OF THE INVENTIONThe present invention relates generally to apparatus for providing capacitance to a microelectronic circuit, and more particularly to capacitors used in conjunction with a parallel plate capacitor to provide capacitance to a microelectronic circuit, apparatus utilizing such capacitors, and methods of their fabrication.BACKGROUND OF THE INVENTIONElectronic circuits, and particularly computer and instrumentation circuits, have in recent years become increasingly powerful and fast. As circuit frequencies delve into the gigahertz (GHz) region, with their associated high frequency transients, noise in the DC power and ground lines increasingly becomes a problem. This noise can arise due to inductive and capacitive parasitics, for example, as is well known. To reduce such noise, capacitors known as decoupling capacitors are often used to provide a stable signal or stable supply of power to the circuitry.Capacitors are further utilized to dampen power overshoot when an electronic device, such as a processor, is powered up, and to dampen power droop when the electronic device begins using power. For example, a processor that begins performing a calculation may rapidly need more current than can be supplied by the on-chip capacitance. In order to provide such capacitance and to dampen the power droop associated with the increased load, off-chip capacitance should be available to respond to the current need within a sufficient amount of time. If insufficient current is available to the processor, or if the response time of the capacitance is too slow, the die voltage may collapse.Decoupling capacitors and capacitors for dampening power overshoot or droop are generally placed as close to the load as practical to increase the capacitors' effectiveness. Often, these capacitors are surface mounted to the electronic device or the package substrate on which the device is mounted. At increasingly reduced device sizes and packing densities, however, available real estate for surface-mounted capacitors becomes a limiting factor.One solution has involved the formation of a parallel plate capacitor integrated on or embedded within a substrate. FIG. 1 illustrates a parallel plate capacitor 102 in accordance with the prior art. Capacitor 102 includes two planar conductors 104. When separated by a non-conducting material (not shown), a charge can be stored across the capacitor 102.FIG. 2 illustrates a cross section of an embedded parallel plate capacitor coupled to a die 204 in accordance with the prior art. The embedded capacitor includes two planar conductors 206, 208 that are separated by a thin dielectric layer 210 (e.g., 1 micron or less). The dielectric material used in layer 210 must have a high dielectric constant in order to provide the amount of capacitance needed. To have a capacitance large enough for the decoupling of CPUs, this dielectric constant has a value in the thousands (e.g., within a range of 2000 to 5000) or many layers must be stacked. Desirably, the planar conductors 206, 208 are located below the die 204, in order to be close to any die load that may require capacitance, or to effectively reduce noise in the DC power and ground lines supplied to the die.One of the two planar conductors 206 or 208 is connected, via conductive paths 212, to ground terminals of one or more die loads (not shown). The other planar conductor 208 or 206 is connected, via conductive paths 212, to power terminals of the one or more die loads. 
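As a point of reference for the discussion that follows, the capacitance available from such a plate pair is governed by the parallel-plate relation C = eps0 * eps_r * A / d. The short sketch below evaluates this relation for assumed, purely illustrative plate dimensions (the area and thickness are not taken from this document); it shows why dielectric constants in the thousands, or stacked layers, are needed for CPU decoupling.

# Hedged illustration of parallel-plate capacitance, C = eps0 * eps_r * A / d.
# The plate area and dielectric thickness are assumed example values.
EPS0 = 8.854e-12  # permittivity of free space, F/m

def plate_capacitance_farads(eps_r, area_m2, gap_m):
    # Ideal parallel-plate capacitance; fringing fields are ignored.
    return EPS0 * eps_r * area_m2 / gap_m

area = 10e-3 * 10e-3  # assumed 10 mm x 10 mm plate pair under a die
gap = 1e-6            # 1 micron dielectric, per the range cited above
for eps_r in (4, 2000, 5000):
    c = plate_capacitance_farads(eps_r, area, gap)
    print(f"eps_r = {eps_r:4d}: C = {c * 1e9:7.1f} nF")

Under these assumed dimensions, a low-k dielectric yields only a few nanofarads from the same geometry, which is the motivation for the very high dielectric constants discussed above.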
These planar conductors 206, 208, coupled with the thin dielectric layer 210, provide capacitance for noise, power overshoot, and power droop dampening, as modeled by capacitor 216.FIG. 3 illustrates an electrical circuit that simulates the electrical characteristics of the parallel plate capacitor illustrated in FIG. 2. The circuit shows a die load 302, which may require capacitance or noise dampening in order to function properly. Some of the capacitance can be supplied by capacitance 304 located on the die. Other capacitance, however, must be provided off chip, as indicated by off-chip capacitor 306. The off-chip capacitor 306 could be, for example, the embedded parallel plate capacitor illustrated in FIG. 2. The off-chip capacitor 306 may more accurately be modeled as a capacitor in series with some resistance and inductance. For ease of illustration, however, off-chip capacitance 306 is modeled as a simple capacitor.Naturally, the off-chip capacitor 306 would be located some distance, however small, from the die load 302, due to manufacturing constraints. Accordingly, some inductance 308 exists between the die load and the off-chip capacitance. Because the inductance 308 tends to slow the response time of the off-chip capacitor 306, it is desirable to minimize the distance between the off-chip capacitance 306 and the die load 302, thus reducing the inductance value 308. This can be achieved by placing the off-chip capacitor 306 as close as possible to the die load.In order to increase the amount of capacitance supplied by the parallel plate capacitor, the surface area of the plates can be increased. Increasing the surface area, however, increases the risk of shorts or leakage between the plates, thus reducing yield and increasing reliability concerns.Besides reliability concerns, the parallel plate capacitor solution may not be sufficient for many higher frequency applications. This is because the parallel plate capacitance is spread out over a large area, resulting in large lateral parasitics that may prevent the timely flow of charge to "hot spots" on the die (i.e., localized portions of the die that require large amounts of current in short periods of time). In addition, the propagation of the charge is a strong function of not only inductance, but also capacitance. Therefore, the lateral parasitics and relatively high capacitance characteristic of an embedded parallel plate capacitor may unacceptably slow the charge response time to the die hot spots, resulting in a collapse of the die voltage supply.As electronic devices continue to advance, there is an increasing need for higher levels of capacitance at reduced inductance levels for decoupling, power dampening, and supplying charge. Accordingly, there is a need in the art for alternative capacitance solutions in the fabrication and operation of electronic and integrated circuit devices.BRIEF DESCRIPTION OF THE DRAWINGFIG. 1 illustrates a parallel plate capacitor in accordance with the prior art;FIG. 2 illustrates a cross section of an embedded parallel plate capacitor coupled to a die in accordance with the prior art;FIG. 3 illustrates an electrical circuit that simulates the electrical characteristics of the parallel plate capacitor illustrated in FIG. 2;FIG. 4 illustrates a three-dimensional cross-section of a self-aligned via capacitor in accordance with one embodiment of the present invention;FIG.
5 illustrates a cross-section of a hybrid capacitor that includes a low inductance capacitor, a high dielectric capacitor, a set of self-aligned via capacitors, and at least one discrete capacitor in accordance with one embodiment of the present invention;FIG. 6 illustrates a top view of the hybrid capacitor shown in FIG. 5 in accordance with one embodiment of the present invention;FIG. 7 illustrates an electrical circuit that simulates the electrical characteristics of the hybrid capacitor illustrated in FIG. 5;FIG. 8 illustrates a flowchart of a method for fabricating a hybrid capacitor in accordance with one embodiment of the present invention;FIGS. 9-22 are schematic cross-sections illustrating various stages of fabricating a hybrid capacitor in accordance with one embodiment of the present invention;FIG. 23 illustrates a cross-section of a hybrid capacitor that includes a low inductance capacitor and at least one discrete capacitor in accordance with one embodiment of the present invention;FIG. 24 illustrates a cross-section of an integrated hybrid capacitor that includes a low inductance capacitor and a set of self-aligned via capacitors in accordance with one embodiment of the present invention;FIG. 25 illustrates an electrical circuit that simulates the electrical characteristics of the hybrid capacitors illustrated in FIGS. 23 and 24;FIG. 26 illustrates an integrated circuit package that includes a hybrid capacitor in accordance with one embodiment of the present invention; andFIG. 27 illustrates a general purpose computer system in accordance with one embodiment of the present invention.DETAILED DESCRIPTION OF THE INVENTIONVarious embodiments of the present invention provide a low-inductance, hybrid capacitor structure that effectively suppresses noise, dampens power overshoot and droop, and supplies charge to die hot spots in a timely manner. In one embodiment, the capacitor structure is implemented in and on a device package, and includes an embedded parallel plate capacitor to which an additional high dielectric parallel plate capacitor, discrete capacitors, and self-aligned, coaxial via capacitors are electrically connected. In other embodiments, the embedded parallel plate capacitor is used in conjunction with either the high dielectric parallel plate capacitor, the discrete capacitors, the self-aligned capacitors or some combination of those capacitors, but not all. Either way, the parallel plate capacitor is a first off-chip capacitor that acts as a first source of capacitance, and the high dielectric parallel plate, self-aligned, and/or discrete capacitors are a second set of capacitors that act as a second source of capacitance.The parallel plate capacitor has a very thin dielectric layer with a relatively low dielectric constant, thus providing an extremely low inductance path between the self-aligned and/or discrete capacitors and the die. In addition, the parallel plate capacitor provides some charge, when needed. Unlike the parallel plate capacitor just mentioned, the additional high dielectric parallel plate capacitor has a relatively high dielectric constant, thus providing more charge, when needed. In order to avoid confusion in the description below, the low dielectric parallel plate capacitor is referred to as the "low inductance" capacitor, while the high dielectric parallel plate capacitor is referred to as the "high dielectric" capacitor.Microelectronic device packages commonly include multiple interconnect levels.
In such a package, patterned conductive material on one interconnect level is electrically insulated from patterned conductive material on another interconnect level by dielectric material layers. Connections between the conductive material at the various interconnect levels are made by forming openings, referred to as vias, in the insulating layers and providing an electrically conductive structure such that the patterned conductive material from different interconnect levels is brought into electrical contact. These structures can extend through one or more of the interconnect levels, and are often referred to as contacts or interconnect structures.In one embodiment of the present invention, the generic via structure is modified in order to convert the via into a small capacitor that is similar in its electrical characteristics to a coaxial capacitor. The modified via structure is referred to herein as a self-aligned via capacitor, or self-aligned capacitor. In one embodiment, a set of these self-aligned capacitors is electrically connected to an embedded low inductance capacitor in order to provide a low inductance path between the self-aligned capacitors and a die.FIG. 4 illustrates a three-dimensional cross-section of a self-aligned via capacitor in accordance with one embodiment of the present invention. The self-aligned capacitor is disposed through one or more layers of a substrate 402.Similar to a coaxial capacitor, the self-aligned capacitor includes an outer conductor 404 and an inner conductor 406. The outer and inner conductors 404, 406 are electrically isolated from each other by a dielectric material 408 disposed between the conductors. Because the outer and inner conductors 404, 406 are separated by a non-conducting material 408, a charge can be stored across the conductors, as modeled by capacitor 410.By electrically connecting the outer and inner conductors 404, 406 to opposite terminals of various die loads, a number of the package vias are converted into coaxial capacitors. In some cases, however, these capacitor structures may be characterized by an unacceptable level of inductance due to the relatively large area that separates the self-aligned capacitors from the die. This high inductance to the self-aligned capacitance may render most of this off-chip capacitance unavailable to the die hot spots.Consequently, in one embodiment, a set of coaxial capacitors is combined with an embedded low inductance capacitor, which effectively reduces the inductance to the self-aligned capacitance, and also provides capacitance of its own. In this embodiment, an additional level of capacitance is supplied by discrete capacitors and a high dielectric capacitor, which are also coupled to the low inductance capacitor. This hybrid capacitor structure is connected to the die loads, thus providing multiple levels of off-chip capacitance.FIG. 5 illustrates a cross-section of a hybrid capacitor that includes a low inductance capacitor, a high dielectric capacitor, a set of self-aligned via capacitors, and at least one discrete capacitor in accordance with one embodiment of the present invention. The low inductance capacitor includes a top conductive layer 502 and a bottom conductive layer 504, separated by a thin dielectric layer 506. The high dielectric capacitor includes a top conductive layer 504 and a bottom conductive layer 505, also separated by a thin dielectric layer 507.
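Because each self-aligned via capacitor described above is, electrically, a short coaxial capacitor, its ideal capacitance follows C = 2*pi*eps0*eps_r*L / ln(b/a), where a and b are the inner- and outer-conductor radii. The sketch below encodes this simplified model; end effects and fringing are ignored, and the example dimensions and permittivity are assumptions for illustration, not values taken from this document.

import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def coaxial_capacitance_farads(eps_r, length_m, inner_radius_m, outer_radius_m):
    # Ideal coaxial capacitor: C = 2*pi*eps0*eps_r*L / ln(b/a).
    return (2 * math.pi * EPS0 * eps_r * length_m
            / math.log(outer_radius_m / inner_radius_m))

# Assumed example: a 100-micron-long via with a 1-micron dielectric
# at roughly a 90-micron radius.
c = coaxial_capacitance_farads(eps_r=25, length_m=100e-6,
                               inner_radius_m=89e-6, outer_radius_m=90e-6)
print(f"C ~ {c * 1e12:.1f} pF for one assumed via")

Because the logarithm of the radius ratio is small when a thin dielectric lines a wide via, even a single via of these assumed dimensions contributes on the order of picofarads.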
In one embodiment, the dielectric layer 506 of the low inductance capacitor has a relatively low dielectric constant (e.g., in a range of about 1 to 1000), and the dielectric layer 507 of the high dielectric capacitor has a relatively high dielectric constant (e.g., in a range of about 1000 to 5000 or more). In this manner, the low inductance capacitor provides a low inductance path between any additional capacitance and a die load, but supplies relatively little charge to that load. In contrast, the high dielectric capacitor provides relatively more charge to that load. In alternate embodiments, the high dielectric capacitor may be situated above the low inductance capacitor, in which case dielectric layers 506 and 507 would be switched.Each self-aligned capacitor includes an outer conductor 508, an inner conductor 510, and a dielectric layer 516, as illustrated by the horizontal cross-section 512 of one self-aligned capacitor along section lines A-A. In the embodiment shown, the outer conductor 508 is electrically connected to the bottom conductive layer 505 of the high dielectric capacitor. In an alternate embodiment, the bottom conductive layer 505 may be an additional conductive plane within the package, which may or may not be electrically connected to outer conductor 508.When the top conductive layer 502 serves as the power plane, the bottom conductive layer 504 serves as the ground plane, and vice versa. In this manner, a charge is stored across the layers 502, 504, and this charge is also stored across the high dielectric, self-aligned, and discrete capacitors by virtue of their electrical connections to the conductive layers 502, 504.The top conductive layer 502 of the low inductance capacitor is shown to be electrically connected to the bottom conductive plane 505 of the high dielectric capacitor, to the outer conductors 508 of the self-aligned capacitors, and also to the one or more discrete capacitors 514. The bottom conductive layer 504 is shown to be electrically connected to the inner conductors 510 of the self-aligned capacitors, and also to the one or more discrete capacitors 514. In another embodiment, the top conductive layer 502 could connect to the inner conductors 510, and the bottom conductive layer 504 could connect to the outer conductors 508. Like the low inductance, high dielectric, and self-aligned capacitors, each discrete capacitor 514 includes a first conductor and a second conductor separated by a dielectric material (not shown). Each of these conductors is electrically connected to one of the conductive layers 502, 504 of the low inductance capacitor.In one embodiment, at least a portion of the low inductance capacitor is disposed underneath the die 516, along with the high dielectric capacitor and the set of self-aligned capacitors. The self-aligned capacitors may be dispersed evenly underneath the die and low inductance capacitor, or concentrations of self-aligned capacitors could be provided to produce additional capacitance for the die hot spots. FIG. 5 illustrates only five self-aligned capacitors being dispersed underneath the die 516. In practice, many more self-aligned capacitors would be dispersed underneath the die in order to provide sufficient capacitance. In addition, the relative dimensions of the parallel plate, discrete, and self-aligned capacitor components are for illustration purposes only. In reality, these dimensions would likely be very different from those shown in FIG.
5, as will be described below.In one embodiment, the discrete capacitors 514 are connected to the package around a perimeter of the die 516. FIG. 6 illustrates a top view of the hybrid capacitor shown in FIG. 5 in accordance with one embodiment of the present invention. This illustrates that a number of discrete capacitors 602 are dispersed around and in close proximity to the die 604. Although sixteen discrete capacitors 602 are illustrated in FIG. 6, more or fewer discrete capacitors 602 may be dispersed around the die in various embodiments.During operation, the hybrid capacitor shown in FIG. 5 provides three levels of off-chip capacitance. This is modeled in FIG. 7, which illustrates an electrical circuit that simulates the electrical characteristics of the hybrid capacitor illustrated in FIG. 5.The circuit shows a die load 702, which may require capacitance or noise dampening in order to function properly. Some of the capacitance can be supplied by capacitance 704 located on the die. Other capacitance, however, is provided off chip in accordance with one embodiment of the present invention, as indicated by off-chip capacitors 706, 708, and 710. Each of the off-chip capacitors 706, 708, 710 would more accurately be modeled as a capacitor in series with some resistance and inductance. For ease of illustration, however, off-chip capacitance is modeled as a simple capacitor.Off-chip capacitor 706 represents the embedded low inductance and high dielectric capacitors formed by conductive layers 502, 504, 505, shown in FIG. 5. Capacitor 706 is located some distance, however small, from the die load 702. Accordingly, some inductance 712 exists between the die load and capacitor 706. In order to minimize this inductance 712, capacitor 706 is placed as close as possible to the die load 702.Off-chip capacitors 708, 710 represent the self-aligned capacitors 508, 510, 516, and the discrete capacitors 514, shown in FIG. 5, where the type of capacitor that is electrically closer to the die load 702 is represented by capacitor 708. In some cases, a single die may have multiple loads dispersed throughout the die, making the self-aligned capacitors closer to some of the loads and the discrete capacitors closer to other loads. Regardless of which type of capacitance is closest to the load, both types exist some distance from the load and from the low inductance capacitor 706, resulting in some inductance 714, 716 between these off-chip capacitors 708, 710 and the die load 702. Again, these inductances 714, 716 are minimized by placing the self-aligned and discrete capacitors as close as possible to the die load 702.FIG. 8 illustrates a flowchart of a method for fabricating a hybrid capacitor in accordance with one embodiment of the present invention.FIG. 8 should be viewed in conjunction with FIGS. 9-22, which are schematic cross-sections illustrating various stages of fabricating a hybrid capacitor in accordance with one embodiment of the present invention.A method for fabricating the hybrid capacitor in accordance with one embodiment includes four main processes: 1) fabricating 802 a first package layer which includes a set of self-aligned via capacitors; 2) fabricating and electrically connecting 804 high dielectric and low inductance capacitors to the self-aligned via capacitors; 3) forming 806 contacts for electrically connecting discrete capacitors and a die; and 4) electrically connecting 838 the discrete capacitors and the die to the contacts. FIG. 
8 shows these processes occurring in the above-listed order. In other embodiments, however, the fabrication of the hybrid capacitor assembly may occur in different orders.Providing the first package layer, which includes a set of self-aligned via capacitors, is described in blocks 810-818 and shown in FIGS. 9-13. The process begins, in block 810, by providing a substrate 902 (FIG. 9), which has a substantially horizontal top surface 904 and bottom surface 906. In one embodiment, substrate 902 is an organic substrate, such as an epoxy material. For example, standard printed circuit board materials such as FR-4 epoxy-glass, polyimide-glass, benzocyclobutene, Teflon, other epoxy resins, or the like could be used in various embodiments. In alternate embodiments, substrate 902 could consist of an inorganic substance, such as ceramic, for example.In various embodiments, the thickness of substrate 902 is within a range of about 10-1000 microns. Substrate 902 could consist of one or multiple layers of substrate material, where each layer is within a range of about 10-40 microns in one embodiment. Substrate 902 and its associated layers could be thicker or thinner than these ranges in other embodiments.Referring back to FIG. 8, in block 812, one or more series of holes or vias 1002 (FIG. 10) are formed through one or more layers of substrate 902. In various embodiments, the diameter of each via 1002 is within a range of about 50-300 microns, with it being approximately 200 microns in one embodiment. In addition, the length of each via 1002 could be in a range of about 10-1000 microns, depending on how many layers of substrate 902 each via extends through. The diameters and lengths of vias could be larger or smaller than these ranges in other embodiments.Although vias 1002 are shown as through holes (i.e., holes through all layers of substrate 902) in FIG. 10, each via 1002 could be bounded above and/or below by various layers of substrate 902. A via bounded on only one end is often termed a blind via, and a via bounded on both ends is often termed a buried via.Vias 1002 are defined by sidewalls 1004, which are substantially vertical, or orthogonal, to the top and bottom surfaces 904, 906 of substrate 902. Vias 1002 are formed in a manner known in the art for forming an opening in a substrate. For example, in one embodiment, vias 1002 are mechanically drilled, although vias 1002 may also be punched, laser drilled, or formed using other technologies in various other embodiments. If substrate 902 is an inorganic substance, such as ceramic, other hole formation techniques known to those of skill in the art would be used. For example, substrate 902 could be created with vias 1002 already existing therein. Either way, blocks 810 and 812 result in the fabrication of a substrate 902 having a top surface 904 through which one or more holes 1002 are formed, where those holes are defined by sidewalls 1004.Referring back to FIG. 8, in block 814, a conductive material layer 1102 (FIG. 11) is formed on the sidewalls of holes 1002. Portions of conductive layer 1102 formed on sidewalls 1004 define plated vias 1104. Each plated via 1104 represents the outer conductor of a self-aligned via capacitor, shown as conductor 508 of FIG. 5.In one embodiment, layer 1102 overlies at least a portion of top and bottom surfaces 904, 906 of substrate 902. In another embodiment, layer 1102 overlies all or a portion of top surface 904, but does not overlie any portion of bottom surface 906.
The portions of conductive layer 1102 overlying the top and bottom surfaces 904, 906 of the substrate form top and bottom surfaces 1106, 1108 of conductive layer 1102. The top surface 1106 of conductive layer 1102 forms the bottom conductor of a high dielectric capacitor, in one embodiment, or the bottom conductor of a low inductance capacitor, in another embodiment.In one embodiment, conductive layer 1102 is a copper layer, although other conductive metals such as tin, lead, nickel, gold, and palladium, or other materials could be used in other embodiments. In various embodiments, the thickness of conductive layer 1102 is within a range of about 5-15 microns, with it being approximately 10 microns in one embodiment. Conductive layer 1102 could be thicker or thinner than that range in other embodiments.In one embodiment, conductive layer 1102 is formed using standard techniques for forming a conductive layer. In one embodiment, layer 1102 is formed by depositing a seed layer, such as sputter-deposited or electroless-deposited copper, on the substrate 902, followed by electrolytically plating a layer of copper on the seed layer. In another embodiment, layer 1102 is formed using standard photolithographic techniques. Other methods of depositing layer 1102 will be apparent to those skilled in the art, such as screen printing or other printing of conductive inks. In still another embodiment, rather than using a substrate without a conductive material in block 810, a clad laminate, such as a copper-clad laminate, could be used, making block 814 unnecessary.Referring back to FIG. 8, in block 816, a dielectric layer 1202 (FIG. 12) is formed over conductive layer 1102. The portions of dielectric layer 1202 overlying the plated via 1104 represent the dielectric layer disposed between the inner and outer conductors of the self-aligned via capacitor, shown as layer 516 in FIG. 5. In addition, the dielectric layer may form the dielectric layer between the plates of a high dielectric capacitor, in one embodiment, or a low inductance capacitor, in another embodiment. When the dielectric layer is associated with a high dielectric capacitor, it has a high dielectric constant and is relatively thin. In one embodiment, the dielectric constant is in a range of 2000 to 5000, although it could be higher or lower in other embodiments. When the dielectric layer is associated with a low inductance capacitor, it has a dielectric constant in a range of 1 to 1000, although it could be higher or lower in other embodiments. In one embodiment, the thickness is in a range of 0.3 to 5 microns, although it could be thicker or thinner in other embodiments.In one embodiment, dielectric layer 1202 contains a metal oxide, such as tantalum oxide (Ta2O5). The metal oxide may be formed by depositing the metal using a physical vapor deposition technique, and anodizing the layer of the metal in a weak acid electrolyte to form the metal oxide. For example, the metal may be sputter deposited from a metal target to form a layer of the metal. In one embodiment, a shadow mask can be placed on or in close proximity to the substrate to block or mask areas where deposition is not desired.Physical vapor deposition techniques can be carried out from both surfaces 1106, 1108 (FIG. 11), so that the dielectric layer 1202 is formed to overlie the plated vias 1104, and the top and bottom surfaces 1106, 1108 of the conductive layer 1102.
In another embodiment, the deposition process can be carried out only from the first surface 1106 of the conductive layer 1102, so that dielectric layer 1202 is formed to overlie plated vias 1104 and a portion of the first surface 1106, but not to overlie a portion of the second surface 1108. Alternatively, a metal layer may be deposited by electrolytic plating or photolithographic techniques, and converted to the metal oxide by anodization in a weak acid electrolyte.The thickness of the oxide can be controlled through the application of a controlled voltage. Remaining non-oxidized metal in the dielectric layer 1202 is not a concern, as it will reside at the interface between the conductive layer 1102 and the dielectric layer 1202, and thus not adversely affect the resulting capacitance given its conductivity.In another embodiment, dielectric layer 1202 can be formed by RF sputtering from a composite target of a dielectric material, or through reactive sputtering from multiple elemental targets, without the need for anodization or other oxidation techniques. Metal organic chemical vapor deposition (MOCVD) and sol-gel techniques have further been utilized to directly form metal oxide dielectrics. Other techniques of forming layers of dielectric material are known in the art and can include chemical vapor deposition (CVD) and plasma-enhanced CVD. Furthermore, other dielectric materials can be utilized with the various embodiments. Examples of other dielectric materials include strontium titanate (SrTiO3), barium titanate (BaTiO3), barium strontium titanate (BaSrTiO3; BST), lead zirconate titanate (PbZrTiO3; PZT), aluminum oxide (Al2O3), or zirconium oxide (ZrO2), often formed by sputtering from a composite target or by MOCVD. Further examples include more conventional dielectric materials, such as silicon dioxide (SiO2), silicon nitride (SiN), and silicon oxynitride (SiOxNy).The dielectric layer 1202 is formed to overlie at least that portion 1104 of conductive layer 1102 formed on sidewalls 1004. Furthermore, the dielectric layer 1202 is formed to leave a portion of the plated via unfilled.
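Since the oxide thickness is set by the applied anodization voltage, the growth is roughly linear in voltage; figures on the order of 1.6 to 1.8 nm/V are commonly quoted for anodic Ta2O5, although that constant, the permittivity, and the via dimensions in the sketch below are all assumptions for illustration rather than values given in this document.

import math

EPS0 = 8.854e-12            # permittivity of free space, F/m
ANODIZATION_NM_PER_V = 1.7  # assumed anodization ratio for Ta2O5, nm/V
EPS_R_TA2O5 = 25            # assumed relative permittivity of Ta2O5

def oxide_thickness_m(formation_voltage):
    # Anodic oxide thickness grows roughly linearly with formation voltage.
    return ANODIZATION_NM_PER_V * formation_voltage * 1e-9

def via_capacitance_farads(formation_voltage, via_radius_m=90e-6, length_m=100e-6):
    # Coaxial model of one plated via: the anodic oxide separates an outer
    # conductor of radius b from an inner conductor of radius a = b - t_ox.
    t_ox = oxide_thickness_m(formation_voltage)
    a = via_radius_m - t_ox
    return 2 * math.pi * EPS0 * EPS_R_TA2O5 * length_m / math.log(via_radius_m / a)

for volts in (60, 150, 300):
    print(f"{volts:3d} V -> t_ox ~ {oxide_thickness_m(volts) * 1e9:4.0f} nm, "
          f"C ~ {via_capacitance_farads(volts) * 1e12:5.1f} pF per via")

The trade-off is visible directly in the output: a higher formation voltage gives a thicker, more robust dielectric but a smaller capacitance per via.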
The second conductive layer can be formed, in various embodiments, using techniques described for forming the first conductive layer 1102.In another embodiment, rather than performing blocks 818 and 820 as separate processes, electroless plating followed by electrolytic plating of metal, such as copper, can be used to form a layer of electroplated metal overlying at least the portion of dielectric layer 1202 formed in the plated via 1104, and also to plate the top surface. In this embodiment, the inner conductor 1302 may have a generally hollow structure defined by the structure of the remaining portion of the via left unfilled, even though the plated metal may completely fill the via. Any portion of the via unfilled by the resulting layer of electroplated metal in this embodiment can be optionally filled (e.g., with a polymer via plug or conductive paste), as is known in the art. Other methods, such as many of those described for the formation of the first conductive layer 1102, can also be used to form a conductive layer as the inner conductor 1302 in the remaining portion of the via left unfilled by dielectric layer 1202.Referring back to FIG. 8, in block 822, portions 1402 (FIG. 14) of the second conductive layer 1304 are selectively removed, exposing portions of the dielectric layer 1202 underneath the removed portions. Removal of the portions of conductive material could be performed, for example, using a common subtractive technology, such as chemical mechanical planarization to physically abrade away the material. Alternatively, a photo or laser imaging and etching process could be used. Other subtractive technologies could be used in other embodiments. In still other embodiments, additive technology could be used to deposit the desired portions of second conductive layer 1304. For example, rather than plating and subtracting portions of second conductive layer 1304, portions of second conductive layer 1304 could be selectively screened or stenciled using a conductive paste.In block 824, a second dielectric layer 1502 (FIG. 15) is formed on the second conductive layer 1304. The second dielectric layer 1502 represents the dielectric layer disposed between the top and bottom conductive layers of the low inductance capacitor, shown as layer 506 in FIG. 5. The second dielectric layer 1502 should be very thin, and be composed of a material that has a very low dielectric constant. In one embodiment, the dielectric layer has a thickness within a range of about 0.3 to 5 microns. In alternate embodiments, the thickness of the dielectric layer could be a value larger or smaller than this range.In addition, the dielectric constant could be in the range of about 1 to 1000, with the materials that produce the fastest charge propagation being at the lower end of the range. In other embodiments, the dielectric constant could have a value outside this range. The attributes of the dielectric layer 1502 should result in a rapid propagation of charge, when needed. The second dielectric layer 1502 can be formed using techniques described above for formation of the first dielectric layer 1202 (FIG. 12). As described previously, in an alternate embodiment, the high dielectric capacitor could be located above the low inductance capacitor, in which case the second dielectric layer may have a dielectric constant in a range of about 2000 to 5000, although the dielectric constant could be higher or lower in other embodiments.Portions 1602 (FIG.
16) of the second dielectric layer 1502 are then selectively removed, in block 826. This provides access to portions of the first conductive layer 1102, while maintaining the electrical isolation of the first conductive layer 1102 and the second conductive layer 1304. Selective removal can be achieved, for example, using chemical mechanical planarization or other techniques known to those of skill in the art.In block 828, a third conductive layer 1702 (FIG. 17) and interconnects 1704 are formed on the top surface of the assembly, and in the removed portions of the second dielectric layer 1502. The third conductive layer 1702 represents the top conductive layer of the low inductance capacitor, such as layer 502 shown in FIG. 5. In one embodiment, the thickness of the third conductive layer is within a range of about 10 to 30 microns, although it could be thicker or thinner than this range in other embodiments.Forming the third conductive layer 1702 can be performed using techniques previously described. In various embodiments, this can be performed in multiple steps (e.g., fill the holes 1602 in the second dielectric layer then form the third conductive layer) or as a single step (e.g., electroless and electrolytic plating).Referring back to FIG. 8, after formation of the embedded low inductance capacitor, electrical connections to these plates are formed as described in blocks 830-836 and shown in FIGS. 18-21. In block 830, portions 1802 (FIG. 18) of the third conductive layer 1702 are selectively removed, using techniques described previously. Next, in block 832, a third dielectric layer 1902 (FIG. 19) is formed over the third conductive layer 1702 and the removed portions, also using techniques previously described.Portions 2002 (FIG. 20) of the additional dielectric layer 1902 are then selectively removed, in block 834. These removed portions 2002 expose the conductive material on portions of the bottom and top conductive layers 1304, 1702 of the low inductance capacitor. In addition, the removed portions 2002 coincide with the desired locations for electrical connections to a die and discrete capacitors. Selective removal of portions 2002 of the additional dielectric layer 1902 can be performed using techniques previously described, such as chemical mechanical planarization, for example, or other techniques known to those of skill in the art.Contacts 2102, 2104 (FIG. 21) are then formed, in block 836, to couple the exposed portions of the bottom and top conductive layers 1304, 1702, respectively, to the top of the package. One set of contacts 2102 is electrically connected to the bottom conductive layer 1304, and the other set of contacts 2104 is electrically connected to the top conductive layer 1702. These contacts can be formed using techniques previously described, such as filling the selectively removed portions of dielectric with conductive paste, electrolytic plating, photolithography, and/or screen printing, for example. This results in a package that includes an embedded low inductance capacitor, comprised of top and bottom conductive layers 1702, 1304, and a set of self-aligned via capacitors, composed of outer and inner conductors 1104, 1302 (FIGS. 11, 13).During operation, at least one contact from the first set of contacts 2102 is coupled to a first potential source, and at least one contact from the second set of contacts 2104 is coupled to a second potential source. For example, the first and second potential sources can be a ground potential and a supply potential, Vcc. 
Which set of contacts 2102, 2104 is coupled to which potential source is a matter of design, as either set can be connected to either source. In an alternate embodiment, the potential sources could be coupled directly to the bottom and top conductive layers 1304, 1702, or they could be coupled to the outer and inner conductors 1104, 1302 (FIGS. 11, 13) of the self-aligned via capacitors.Referring back to FIG. 8, in block 838, one or more discrete capacitors 2202 (FIG. 22) and a die 2204 are provided and electrically connected to some or all of contacts 2102, 2104. The discrete capacitors 2202 and die 2204 can be connected, for example, by depositing solder bumps on the contacts 2102, 2104, and/or pads (not shown) on die 2204 and capacitors 2202, and reflowing the solder once the capacitors 2202 and die 2204 are arranged over the corresponding contacts.In alternate embodiments, different techniques can be used to interconnect and isolate the various conducting portions of the self-aligned, parallel plate, and discrete capacitors. For example, rather than forming and selectively removing portions of the various conducting and non-conducting layers, openings between the various layers could be included by selectively adding the desired portions of the conducting and non-conducting layers. In other embodiments, removal techniques, such as chemical mechanical planarization, can be used to physically abrade away multiple layers of different types of conducting and non-conducting materials, resulting in the desired openings for various interconnects. In addition, additional layers (not shown) of patterned conductive material and dielectric layers could be disposed between the layers of material shown in the Figures. These additional layers could carry signals, power, and ground to the die.Although five self-aligned capacitors and two discrete capacitors are shown in FIG. 22, the number of each type of capacitor can be varied during the design process to adjust the capacitance values and other electrical characteristics. FIG. 22 also shows various holes in the top and bottom conductive layers of the low inductance capacitor, where these holes enable insulated contacts from the self-aligned capacitors or from the bottom conductive layer to extend upward. In practice, the top and bottom conductive layers would likely include many other holes (not shown) for allowing various inputs, outputs, power, and ground signals to reach the die or other components. In addition, although various conducting and nonconducting material layer sizes and locations are illustrated with specific relative dimensions, the relative dimensions and locations of the layers can also be varied during the design process to adjust the capacitance and other electrical characteristics of the structure.In the embodiment shown in FIG. 22, various loads on the die 2204 are provided with three sources of off-chip capacitance (i.e., the parallel plate capacitors, the self-aligned capacitors, and the discrete capacitors). When a portion of the die, referred to as a die "hot spot," needs a very large amount of current, the first charge that will respond to the current need will come from the capacitance on the die. Next, charge will be provided by the low inductance capacitor; however, since it has a low dielectric constant, the main job of this capacitor is to provide a fast propagation path for the charge. 
The low inductance capacitor also provides a very low inductance path to the capacitance supplied by the self-aligned capacitors, which release all of the integrated capacitance to provide the next level of charge. Finally, the low inductance capacitor also provides a low inductance path to the discrete capacitors, which provide an additional level of charge. Depending on the location of the die hot spot, the discrete capacitors may respond before the self-aligned capacitors.In alternate embodiments, either the self-aligned or the discrete capacitors can be left out of the design, and thus the various die loads would be provided with two sources of off-chip capacitance. These embodiments are shown in FIGS. 23 and 24.FIG. 23 illustrates a cross-section of a hybrid capacitor that includes a low inductance capacitor and at least one discrete capacitor in accordance with one embodiment of the present invention. The low inductance capacitor includes a top conductor 2302 and a bottom conductor 2304 separated by a very thin, low dielectric material layer 2306. Discrete capacitors 2308 and die 2310 are electrically connected to the top and bottom conductors 2302, 2304 by contacts 2312. In this manner, two levels of off-chip capacitance (i.e., low inductance capacitor and discrete capacitors) are supplied to various loads (not shown) on die 2310. This off-chip capacitance has a low inductance path to the die loads via the top and bottom conductors 2302, 2304.FIG. 24 illustrates a cross-section of an integrated hybrid capacitor that includes a low inductance capacitor and a set of self-aligned via capacitors in accordance with one embodiment of the present invention. Similar to the above-described hybrid capacitor, the low inductance capacitor includes a top conductor 2402 and a bottom conductor 2404 separated by a very thin, low dielectric material layer 2406.A set of self-aligned capacitors, each composed of an inner conductor 2408, a thin dielectric layer 2410, and an outer conductor 2412, are electrically connected to the top and bottom conductors 2402, 2404. In addition, a die 2414 is electrically connected to the top and bottom conductors 2402, 2404 via contacts 2416. In this manner, two levels of off-chip capacitance (i.e., low inductance capacitor and self-aligned capacitors) are supplied to various loads (not shown) on die 2414. This off-chip capacitance has a low inductance path to the die loads via the top and bottom conductors 2402, 2404.The self-aligned capacitors shown in FIGS. 4, 5, 22, and 24 can exist within one layer of the package substrate, in one embodiment. In alternate embodiments, one or more of the self-aligned capacitors can exist through multiple layers of the substrate. For example, a particular self-aligned capacitor could be formed using multiple vias that are stacked on top of each other. Stacking vias of equal diameters directly on top of one another may be difficult using current technologies. Therefore, in one embodiment, the multi-level, self-aligned capacitor is formed from micro-vias, which are vias that are stacked on top of each other, with each higher via having a smaller diameter than the via below it. In another embodiment, the vias that form a self-aligned capacitor are stacked, but each via is slightly offset from the via below it. 
This would result in a jagged or crankshaft-like structure from the bottom of the stacked via structure to the top.In other embodiments, some or all of the self-aligned vias may exist in substrate levels that are not directly below the bottom conductive layer of the low inductance capacitor. In addition, in various embodiments, the low inductance capacitor may be separated from the top of the package by one or more substrate layers.FIG. 25 illustrates an electrical circuit that simulates the electrical characteristics of the hybrid capacitors illustrated in FIGS. 23 and 24. The circuit shows a die load 2502, which may require capacitance or noise dampening in order to function properly. Some of the capacitance can be supplied by capacitance 2504 located on the die. Other capacitance, however, is provided off chip in accordance with various embodiments of the present invention, as indicated by off-chip capacitors 2506 and 2510.Off-chip capacitor 2506 represents the embedded low inductance capacitor formed by conductive and dielectric layers 2302, 2304, 2306 (FIG. 23) or 2402, 2404, 2406 (FIG. 24). Capacitor 2506 is located some distance, however small, from the die load 2502. Accordingly, some inductance 2508 exists between the die load and capacitor 2506. In order to minimize this inductance 2508, capacitor 2506 is placed as close as possible to the die load 2502.Off-chip capacitor 2510 represents either the discrete capacitors 2308 shown in FIG. 23, or the self-aligned capacitors shown in FIG. 24. This capacitance 2510 exists some distance from the load and from the low inductance capacitor 2506, resulting in some inductance 2512 between off-chip capacitor 2510 and the die load 2502. Again, this inductance 2512 is minimized by placing the self-aligned or discrete capacitors as close as possible to the low inductance capacitor 2506 and the die load 2502.FIG. 26 illustrates an integrated circuit package that includes a hybrid capacitor in accordance with one embodiment of the present invention. Starting from the top of FIG. 26, an integrated circuit 2602 and one or more discrete capacitors 2604 are housed by IC package 2606. Integrated circuit 2602 contains one or more circuits which are electrically connected to IC package 2606 by connectors (not shown). One or more of the IC circuits act as a load, which may require capacitance, noise suppression, and/or power dampening. Some of this capacitance is provided, in one embodiment of the present invention, by discrete capacitors 2604.Integrated circuit 2602 could be any of a number of types of integrated circuits. In one embodiment of the present invention, integrated circuit 2602 is a microprocessor, although integrated circuit 2602 could be other types of devices in other embodiments. In the example shown, integrated circuit 2602 is a "flip chip" type of integrated circuit, meaning that the input/output terminations on the chip can occur at any point on its surface. After the chip has been readied for attachment to IC package 2606, it is flipped over and attached, via solder bumps or balls, to matching pads on the top surface of IC package 2606. Alternatively, integrated circuit 2602 could be a surface mount chip, where input/output terminations are connected to IC package 2606 using bond wires to pads on the top surface of IC package 2606.Embedded within package 2606 is a low inductance capacitor, as described previously. In addition, in one embodiment, a set of self-aligned via capacitors is embedded within package 2606.
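The staged behavior modeled in FIGS. 7 and 25 can be illustrated numerically by treating each off-chip source as a series LC branch and comparing self-resonant frequencies, f = 1 / (2*pi*sqrt(L*C)); a branch is most effective near and below its resonance. All inductance and capacitance values in the sketch below are assumed for illustration only, not taken from this document.

import math

# Hedged sketch of the staged decoupling hierarchy: compare the self-resonant
# frequency of each assumed off-chip stage. Values are illustrative only.
stages = {
    "embedded plate capacitors": (20e-12, 100e-9),  # (L in henries, C in farads)
    "self-aligned via bank":     (100e-12, 500e-12),
    "discrete capacitors":       (1e-9, 10e-6),
}

for name, (inductance, capacitance) in stages.items():
    f_res = 1 / (2 * math.pi * math.sqrt(inductance * capacitance))
    print(f"{name:26s}: f_res ~ {f_res / 1e6:8.1f} MHz")

Under these assumed values, the low-inductance embedded stages serve the fast, high-frequency transients, while the discrete capacitors supply bulk charge at lower frequencies, matching the qualitative description above.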
The low inductance capacitor and the discrete and/or self-aligned capacitors function together to provide multiple levels of additional capacitance to integrated circuit 2602, and also to provide power dampening and noise suppression, when needed. The close proximity of these off-chip sources of capacitance means that each source has a relatively low inductance path to the die. In other embodiments, the embedded low inductance capacitor, and/or the self-aligned capacitors, and/or the discrete capacitors could be embedded within or mounted on the PC board 2610, or on an interposer (i.e., a board that provides a dimensional interface between a package and a printed circuit board).IC package 2606 is coupled to a socket 2608 on a PC board 2610. In the example shown, IC package 2606 includes pins 2612 that mate with complementary pin holes in socket 2608. Alternatively, IC package 2606 could be electrically and physically connected to PC board 2610 using solder connections, such as ball grid array connections, for example.PC board 2610 could be, for example, a motherboard of a computer system. As such, it acts as a vehicle to supply power, ground, and other types of signals to integrated circuit 2602. These power, ground, and other signals are supplied through traces (not shown) on PC board 2610, socket 2608, pins 2612, and traces (not shown) on IC package 2606.The IC package described above in conjunction with various embodiments could be connected to a PC board forming part of a general purpose computer system. FIG. 27 illustrates a general purpose computer system 2702, which includes a hybrid capacitor in accordance with various embodiments of the present invention.Computer system 2702 is housed on a PC board and includes microprocessor 2704, package 2706, bus 2708, power supply signal generator 2710, and memory 2712. Package 2706 includes a hybrid capacitor in accordance with various embodiments of the present invention, described above. Package 2706 couples microprocessor 2704 to bus 2708 in order to communicate power supply signals and non-power supply signals between microprocessor 2704 and devices coupled to bus 2708. For the embodiment of the present invention shown in FIG. 27, bus 2708 couples microprocessor 2704 to memory 2712 and power supply signal generator 2710. However, it is to be understood that in alternative embodiments of the present invention, microprocessor 2704 can be coupled to memory 2712 and power supply signal generator 2710 through two different busses.ConclusionThus, various embodiments of a hybrid capacitor and methods of fabricating that capacitor have been described, along with a description of the incorporation of a package and/or interposer that includes that capacitor on a PC board within a general purpose computer system. Embodiments of the present invention provide a hybrid capacitor that can be used in place of various discrete components on an integrated circuit package, interposer or printed circuit board.While the foregoing examples of dimensions and ranges are considered typical, the various embodiments of the invention are not limited to such dimensions or ranges.
It is recognized that the trend within industry is to generally reduce device dimensions for the associated cost and performance benefits.In the foregoing detailed description of the preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific preferred embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention.It will be appreciated by those of ordinary skill in the art that any arrangement which is calculated to achieve the same purpose may be substituted for the specific embodiment shown. For example, illustrative embodiments describe vias between two levels of interconnect. However, those skilled in the art will recognize that many interconnect levels may be connected by vias in accordance with the present invention. In addition, additional layers of patterned conductive materials and interconnects for carrying signals, power, and ground may exist between, above, or below the layers shown in the figures.The various embodiments, above, have been described in the context of providing excess, off-chip capacitance to a die. One of ordinary skill in the art would understand, based on the description herein, that the method and apparatus of the present invention could also be applied in many other applications where a hybrid capacitor having a low inductance path to a circuit load is desired. Therefore, all such applications are intended to fall within the spirit and scope of the present invention.This application is intended to cover any adaptations or variations of the present invention. The foregoing detailed description is, therefore, not to be taken in a limiting sense, and it will be readily understood by those skilled in the art that various other changes in the details, materials, and arrangements of the parts and steps which have been described and illustrated in order to explain the nature of this invention may be made without departing from the spirit and scope of the invention as expressed in the appended claims. |
A method for manufacturing fully silicided (FUSI) gates and devices, in particular MOSFET devices, is described. In a method of the invention, the silicide phase can be effectively controlled. |
A method of manufacturing a fully-silicided-gate electrode in a semiconductor device, comprising the steps of:- depositing a metal layer over a semiconductor layer of a gate stack,- providing a first thermal budget to allow a partial silicidation of said semiconductor layer, whereby the silicide layer obtained has a metal-to-semiconductor ratio higher than 1,- removing selectively the remaining, unreacted metal layer, and- providing a second thermal budget to allow a full silicidation of said semiconductor layer.A method according to claim 1, wherein said semiconductor layer comprises silicon and/or germanium.A method according to claim 1 or 2, wherein said semiconductor layer comprises (or consists of) poly-silicon (poly-Si).A method according to any of claims 1 to 3, wherein said metal layer comprises (or consists of) any suitable refractory metal, noble metal, transition metal, or any combination thereof.A method according to claim 4, wherein said metal layer comprises (or consists of) Ni.A method according to any of claims 1 to 5, wherein said first thermal budget is determined by a silicidation kinetics graph drawn up for each silicide phase, MxSy, envisaged in the partially silicided gate, wherein M represents said metal and S said semiconductor used, and wherein x and y are real numbers different from 0 and higher than 0.A method according to claim 6, wherein said silicidation kinetics graph is represented by figure 6.A method according to any of claims 1 to 7, wherein the step of providing said first thermal budget consists of a Rapid Thermal Processing (RTP) step.A method according to any of claims 1 to 8, wherein the step of providing said second thermal budget consists of an RTP step.A method according to any of claims 1 to 9, wherein the step of removing said remaining metal layer consists of a selective etching.A method according to any of claims 1 to 9, wherein said metal layer consists of Ni and said semiconductor layer consists of poly-Si.A method according to claim 11, wherein said first thermal budget is such that a Ni2Si layer is grown, with a thickness between 0.9 and 1.5 times the poly-Si thickness, whereby a NiSi FUSI gate is obtained.A method according to claim 12, further comprising the step of drawing up a silicidation kinetics graph for Ni2Si, whereby the temperature and time period to apply as said first thermal budget are determined.A method according to claim 13, wherein said silicidation kinetics graph is represented by figure 6.A method of manufacturing a fully-silicided-gate electrode in a MOSFET device, comprising the steps of:- depositing a nickel layer over a poly-Si layer of a gate stack,- providing a first thermal budget to allow a partial silicidation of said poly-Si layer, whereby the silicide layer obtained has a Ni/Si ratio higher than 1,- removing selectively the remaining, unreacted metal layer, and- providing a second thermal budget to allow a full silicidation of said poly-Si layer.A method according to claim 15, wherein said first thermal budget is determined by a silicidation kinetics graph drawn up for each silicide phase, NixSiy, envisaged in the partially silicided gate, wherein x and y are real numbers different from 0 and higher than 0 and wherein 1<x/y≤3.A method according to claim 15 or 16, wherein the step of providing said first thermal budget consists of an RTP step.A method according to any of claims 15 to 17, wherein the step of providing said second thermal budget consists of an RTP step.A method according to any of claims 15 to
18, wherein the step of removing said remaining metal layer consists of a selective etching.A method according to any of claims 16 to 19, wherein said first thermal budget is such that a Ni2 Si layer is grown, with a thickness comprised between 0,9 and 1,5 of the poly-Si thickness, whereby a NiSi FUSI gate is obtained.A method according to claim 20, further comprising the step of drawing up a silicidation kinetics graph for Ni2 Si, whereby the temperature and time period to apply as said first thermal budget is determined.A method according to claim 21, wherein said silicidation kinetics graph is represented by figure 6.A method according to any of claims 15 to 22, wherein said poly-Si layer is upon a SiON or upon a HfSiON layer. |
Field of the invention

The present invention relates to semiconductor process technology and devices. In particular, the present invention relates to semiconductor devices with metallic gate electrodes formed by a reaction between a metal and a semiconductor material.

Background

Metal gates are expected to replace partially silicided poly-Si gates in future complementary metal-oxide-semiconductor (CMOS) technology nodes, in order to eliminate poly-Si depletion issues. For this application, the work function (WF) is one of the most critical properties to be considered. Recently, there has been significant interest in the application of silicides as metal gate electrodes, and in particular, in NiSi fully-silicided (FUSI) gates. From a processing point of view, a FUSI gate can be implemented as a variation of the Ni self-aligned silicidation process used in previous technology nodes, in which the silicide is formed in the gate down to the dielectric interface, fully consuming the poly-Si film. Ni-silicide appears as an attractive metal gate candidate that allows maintaining several aspects of the flows from previous generations (such as Si gate pattern and etch, and self-aligned silicide processes). A key property that has attracted attention to NiSi FUSI gates is the modulation of their effective work function on SiO2 by dopants, which may allow tuning of the threshold voltage (Vt) of nMOS and pMOS devices without the need for two different metals. The integration and properties of Ni FUSI gates on high-k dielectrics are also of interest for advanced CMOS applications. Good control of the WF and Vt of devices is an essential requirement for gate electrode applications. In order to assess the ability to control Vt for Ni FUSI gate processes, and given the large number of silicide phases in the Ni-Si system, it is important to address (a) the ability to control the Ni-silicide phase at the dielectric interface, and (b) the work functions of the different silicide phases (both for conventional and for high-K dielectrics). Hence, there is a need for a method for manufacturing metal gate CMOS devices in which the work function and the threshold voltage of the metal gate electrode of each transistor type can be controlled in an easy and efficient way, independent of the geometry and/or dimensions of the transistor or of the gate dielectric used.

Summary

The present invention relates to a method of manufacturing a fully-silicided-gate electrode in a semiconductor device, comprising the steps of:
- depositing a metal layer over a semiconductor layer of a gate stack,
- providing a first thermal budget to allow a partial silicidation of said semiconductor layer, whereby the silicide layer obtained has a metal-to-semiconductor ratio higher than 1,
- removing selectively the remaining, unreacted metal layer, and
- providing a second thermal budget to allow a full silicidation of said semiconductor layer.
Preferably, said semiconductor layer comprises (or consists of) silicon and/or germanium, more preferably comprises (or consists of) poly-silicon (poly-Si). Preferably, said metal layer comprises (or consists of) any suitable refractory metal, noble metal, transition metal, or any combination thereof.
More preferably, said metal layer comprises (or consists of) Ni. In a method according to the invention, said first thermal budget can be determined by a silicidation kinetics graph (such as the silicidation kinetics graph represented by figure 6) drawn up for each silicide phase, MxSy, envisaged in the partially silicided gate, wherein M represents said metal and S said semiconductor used, and wherein x and y are real numbers higher than 0. Preferably, the step of providing said first thermal budget consists of a Rapid Thermal Processing (RTP) step. Preferably, the step of providing said second thermal budget consists of an RTP step. Preferably, the step of removing said remaining metal layer consists of a selective etching. In a preferred method according to the invention, said metal layer consists of Ni and said semiconductor layer consists of poly-Si. In particular, said first thermal budget is such that a Ni2Si layer is grown, with a thickness of between 0.9 and 1.5 times the poly-Si thickness, whereby a NiSi FUSI gate is obtained. By drawing up a silicidation kinetics graph (such as the silicidation kinetics graph represented by figure 6) for Ni2Si, the temperature and time period to apply as said first thermal budget can be determined.

More particularly, a method according to the invention for manufacturing a fully-silicided-gate electrode in a MOSFET device comprises the steps of:
- depositing a nickel layer over a poly-Si layer of a gate stack,
- providing a first thermal budget to allow a partial silicidation of said poly-Si layer, whereby the silicide layer obtained has a Ni/Si ratio higher than 1,
- removing selectively the remaining, unreacted metal layer, and
- providing a second thermal budget to allow a full silicidation of said poly-Si layer.
Said first thermal budget can be determined by a silicidation kinetics graph drawn up for each silicide phase, NixSiy, envisaged in the partially silicided gate, wherein x and y are real numbers higher than 0 and wherein 1<x/y≤3. Preferably, said first thermal budget is such that a Ni2Si layer is grown, with a thickness of between 0.9 and 1.5 times the poly-Si thickness, whereby a NiSi FUSI gate is obtained. Preferably, in a method according to the invention, said poly-Si layer is upon a SiON or a HfSiON layer.

Brief description of the drawings

Exemplary embodiments are illustrated in referenced figures of the drawings. It is intended that the embodiments and figures disclosed herein be considered illustrative rather than restrictive. The same numerals are used to refer to corresponding features in the drawings.
Figure 1 represents XRD spectra showing formation of NiSi, Ni2Si and Ni3Si by adjusting the Ni to Si thickness ratio.
Figure 2 represents the increase in resistivity and silicide thickness with increasing Ni to Si thickness ratio.
Figure 3 represents CV curves for FUSI devices, showing larger VFB shifts from NiSi to Ni3Si on HfSiON (330 mV) than on SiON (100 mV).
Figure 4 represents the WF for the main Ni silicide phases. The large difference for NiSi between SiO2 and HfSiON disappears for larger Ni contents, indicating unpinning of the Fermi level (FL).
Figure 5 represents schematic processes showing 1-step and 2-step FUSI processes on wide and narrow gates, for varying Ni and Si thicknesses.
Due to Ni diffusion from the top of spacers, the effective Ni/Si ratio can be larger for narrow devices than for large structures.
Figure 6 represents Ni2Si silicidation kinetics showing diffusion-limited growth.
Figure 7 represents silicide growth rates for NiSi and Ni2Si.
Figure 8 represents Rs vs. L showing the linewidth effect for 60 nm Ni 1-step FUSI, eliminated with 2-step FUSI.
Figure 9 represents TEM cross sections of narrow FUSI gates for 1-step and 2-step FUSI processes.
Figure 10 represents Vt roll-off for 1-step and 2-step Ni FUSI/HfO2 processes. The kink seen for the 1-step process is due to the transition from NiSi for long gate lengths to Ni-rich silicide on short ones.
Figure 11 represents Vt roll-off for Ni FUSI/SiON processes. For tNi/tSi=0.6 (targeting NiSi), the 1-step process shows a kink corresponding to the transition from NiSi at long gate lengths to Ni-rich silicide at short gate lengths. Ni3Si and 2-step NiSi FUSI processes show smooth Vt roll-off down to 30 nm gate lengths.
Figure 12 represents Vt roll-off for Ni FUSI/HfSiON showing scalability, with smooth roll-off for Ni3Si and 2-step NiSi FUSI processes.
Figure 13 represents Rs vs. RTP1 temperature for the 2-step Ni FUSI process. The increase in Rs with increasing temperature for 50 nm gates is due to the transition from NiSi to Ni-rich silicide.
Figure 14 (a) represents an RTP1 process window for the 2-step NiSi FUSI process. Process margins need to be added to account for process variations and silicide reaction non-uniformity (b) and (c).
Figure 15 represents XRD patterns of Ni-silicide on SiO2 films for deposited Ni to poly-Si thickness ratios (tNi/tSi) between 0.3 and 0.9.
Figure 16 represents RBS spectra of Ni-silicide on SiO2 films for deposited Ni to poly-Si thickness ratios (tNi/tSi) between 0.6 and 0.9.
Figure 17 represents a cross-section TEM of a Ni FUSI gate stack showing a bi-layer structure. NiSi was identified in the lower layer by Fourier-transformed high-resolution images. EDX showed a higher Ni/Si composition ratio for the top layer.
Figure 18 represents RBS spectra of Ni-silicide on SiO2 films for deposited Ni to poly-Si thickness ratios (tNi/tSi) of 0.6 and 1.1 (1 MeV 4He++, 160°).
Figure 19 represents XRD patterns of Ni-silicide on SiO2 films for deposited Ni to poly-Si thickness ratios (tNi/tSi) between 0.6 and 1.7. Results for different silicidation processes are shown for selected thickness ratios (LT and HT indicate lower and higher temperature processes respectively).
Figure 20 represents RBS spectra of Ni-silicide on SiO2 films for deposited Ni to poly-Si thickness ratios (tNi/tSi) of 0.6 to 1.7 (2 MeV 4He++, 160°).
Figure 21 represents flat band voltage vs. EOT for (a) NiSi/SiO2, (b) NiSi/HfSiON/SiO2 and (c) Ni2Si/SiO2 capacitors, showing the effect of dopants.
Figure 22 represents CV curves comparing NiSi and Ni3Si FUSI gates for (a) SiON and (b) HfSiON dielectrics.
Figure 23 schematically illustrates a method according to the present invention.
Figure 24 shows XRD (Cu Kα radiation) characterization of Ni silicide films as a function of RTP1 temperature according to an example.
Figure 25 shows the reacted nickel to silicon ratio of Ni silicide films as a function of the RTP1 temperature according to an example.
Detailed description

In patterned devices, in particular for narrow lines, typically less than 100 nm, the metal/semiconductor ratio is not well defined: the metal from the top of spacers and surrounding areas can diffuse and react with the semiconductor in the gate, increasing the effective metal/semiconductor ratio. A new method for manufacturing a fully-silicided-gate device is described that can eliminate the linewidth dependence existing in previous silicidation methods.

A method according to the invention for manufacturing a fully-silicided gate in a semiconductor device comprises the steps of:
- depositing a metal layer over a semiconductor layer of a gate stack,
- providing a first thermal budget (selecting temperature and/or time) to allow a partial silicidation of said semiconductor layer,
- removing selectively the remaining, unreacted metal layer, and
- providing a second thermal budget to allow a full silicidation of said semiconductor layer.
In a method according to the invention, said metal layer can be of any metal(s) preferably capable of diffusing into the underlying semiconductor material and suitable for metal gate electrodes. More particularly, said metal layer can comprise or consist of a refractory metal such as tantalum or tungsten, a noble metal such as Pt, a near-noble metal such as Ni, a transition metal such as Ti, or any combination of two or more of these metals. Said semiconductor layer can be of any material(s) suitable for metal gate electrodes. More particularly, said semiconductor layer can comprise or consist of Si, Ge or a mixture thereof.

The first thermal step consists of providing a temperature (also referred to as thermal energy), T°1, within a determined period of time. T°1 is preferably smaller than the temperature applied in the second thermal step, T°2. Preferably, said first thermal step consists of a Rapid Thermal Processing (RTP1) step. Preferably, said temperature is applied within a period of time varying between about 30 seconds and about 60 seconds. The second thermal step consists of providing a temperature, T°2, preferably higher than T°1, within a determined period of time, preferably within a period of time varying between about 30 seconds and about 60 seconds. Preferably, said second thermal step consists of a Rapid Thermal Processing (RTP2) step. Limiting T°1 and the period of time aims at controlling the reaction between said metal and said semiconductor such that a metal-rich silicide layer is grown while a certain thickness of said semiconductor layer remains unreacted.

In the framework of the present invention, the terms "silicide", "silicided" and "silicidation" can refer to the reaction between a metal and silicon, but are not intended to be limited to silicon. For instance, the reaction of a metal with Ge, or any other suitable semiconductor material, may still be referred to as silicidation. In the framework of the present invention, the term "metal-rich silicide" refers to the material resulting from the reaction between said metal and said semiconductor, wherein the metal-to-semiconductor ratio is larger than 1. The silicide phase (also referred to as metal-semiconductor phase) can be represented by the formula MxSy, wherein M represents the metal, S represents the semiconductor, and wherein x and y are integers or real numbers different from 0. In a metal-rich silicide (phase), x/y is larger than 1. For selected metal-semiconductor alloys, i.e. silicides, the work function thereof may depend on the specific phase in which the alloy is formed.
Hence, the suitability of such metal-semiconductor combinations as a gate electrode for one type of transistor depends on which phase of this combination can be formed for this type of transistor. Said specific phase is to be formed at least at the bottom part of the gate electrode, i.e. in the last few nanometers of the gate electrode (e.g. the last nanometer, or the last 2, 3, 4, 5, 10 nanometers or even more), the part which is the nearest to the gate dielectric, also referred to in the present invention as the "interface". In other words, in the context of the present invention, the term "interface", when referring to the silicide phase of the gate electrode, refers to the bottom part of the gate electrode (which is the nearest to the gate dielectric), of a few nanometers in thickness, e.g. between about 1 nm and about 10 nm, preferably between about 1 nm and about 5 nm. For example, when the metal deposited is Ni and the semiconductor is poly-Si, several phases can result from their reaction, such as NiSi2, NiSi, Ni2Si, Ni31Si12, Ni3Si, etc. For instance, Ni2Si, Ni31Si12 and Ni3Si are Ni-rich silicides. More particularly, with regard to nickel silicide, for metal-rich phases, such as Ni2Si, Ni3Si2, Ni31Si12 or Ni3Si, the ratio x/y is higher than 1 and preferably less than or equal to 3 (i.e. 1<x/y≤3), while for metal-poor phases, such as NiSi or NiSi2, the ratio x/y is higher than 0 and less than or equal to 1 (i.e. 0<x/y≤1). Indeed, said metal-rich silicide layer obtained after said first thermal step has a metal-to-semiconductor ratio (x/y) larger than 1. After the step of removing the remaining (unreacted) metal layer, preferably in a selective etching step, said metal-rich silicide layer can act as the only source of metal during said second thermal step to fully silicide the semiconductor layer. In other words, the total amount of metal in the fully silicided gate is the amount of metal that is stored in said metal-rich silicide layer after the step of removing the remaining metal. Thus the only metal available for the reaction in the second thermal step is the amount of metal incorporated in said metal-rich silicide layer.
In the second thermal step, only a redistribution of this fixed amount of metal occurs. By selecting the metal-semiconductor phase (MxSy) envisaged for the fully silicided gate to be manufactured, and the dimensions of that fully silicided gate, the total amount of metal present in said fully silicided gate to be manufactured can thus be determined. Said determined total amount of metal is also the amount of metal that is to be incorporated in the partially silicided semiconductor layer (obtained after said first thermal step), which is the only source of metal after the removal of the remaining, unreacted metal layer. In order to obtain the desired amount of metal in said partially silicided semiconductor layer, the metal diffusion rate in the metal/semiconductor reaction is controlled, in a method of the invention, by providing a thermal budget (T°1 and time) based on a silicidation kinetics graph pre-established for each silicide phase. In other words, the T°1 and time parameters can be determined for each silicide phase by establishing a silicidation kinetics graph, such as the Ni2Si silicidation kinetics graph drawn up and represented in figure 6. A method of the invention can thus further comprise the step of establishing a silicidation kinetics graph for determining T°1 and the time to be applied in said first thermal step. The present invention will be described hereinafter with respect to particular embodiments and with reference to certain drawings, but the invention is not intended to be limited thereto. In a preferred embodiment, the metal layer consists of Ni and the semiconductor material consists of poly-Si. In a method according to the invention, the effective Ni/Si ratio is controlled by limiting the first thermal budget, growing a Ni-rich silicide layer without fully consuming the poly-Si layer. The unreacted Ni on top of the Ni-rich silicide layer (if any), on the spacers and in surrounding areas is then removed, preferably in a selective etching step.
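The kinetics-graph determination of T°1 and time described above can be sketched numerically. The following is a minimal illustration only: diffusion-limited growth is modeled as d² = K(T)·t with an Arrhenius rate constant, and the prefactor and activation energy below are assumed placeholder values, not values taken from figure 6; in practice both would be extracted from the measured kinetics curves.

```python
import math

# Diffusion-limited silicide growth: d^2 = K(T) * t, with
# K(T) = K0 * exp(-EA / (kB * T)). K0 and EA below are assumed
# placeholders, NOT values from this document; in practice they are
# extracted from a measured kinetics graph such as figure 6.
K0 = 1.0e12    # nm^2/s, assumed prefactor
EA = 1.5       # eV, assumed activation energy
KB = 8.617e-5  # eV/K, Boltzmann constant

def ni2si_thickness(temp_c, time_s):
    """Grown Ni2Si thickness (nm) for a given RTP1 temperature and time."""
    k = K0 * math.exp(-EA / (KB * (temp_c + 273.15)))
    return math.sqrt(k * time_s)

def rtp1_window(t_poly_nm, time_s=30.0):
    """RTP1 temperatures (deg C) for which the grown Ni2Si thickness falls
    between 0.9 and 1.5 times the poly-Si thickness, the range the text
    prescribes for obtaining a NiSi FUSI gate after RTP2."""
    return [t for t in range(240, 701, 5)
            if 0.9 * t_poly_nm <= ni2si_thickness(t, time_s) <= 1.5 * t_poly_nm]

print(rtp1_window(100.0))  # candidate RTP1 temperatures for 100 nm poly-Si, 30 s
```

With parameters fitted to real kinetics data, the same scan makes visible how narrow the usable RTP1 window is (estimated later in the text at about 20°C or less for a 30 s anneal).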
A second thermal budget is applied to fully silicide the gate. The first thermal step consists of providing a temperature T°1 during a period of time, both determined on the basis of a silicidation kinetics graph established for each silicide phase. The second thermal step consists of providing a temperature T°2 during a period of time, also determined on the basis of a silicidation kinetics graph. For instance, when a NiSi fully silicided gate is envisaged, the Ni-rich phase in the partially silicided layer can be any NixSiy phase(s), wherein x/y is larger than 1, preferably equal to or larger than 2. For example, when the silicide envisaged is a NixSiy silicide with 0<x/y≤1, preferably with x/y=1 (or with x/y substantially equal to 1), the metal-rich silicide in the partially silicided layer can be a NixSiy silicide with x/y ≥ 2 (more particularly with 2≤x/y≤3). More particularly, when a NiSi fully silicided gate is envisaged, the Ni-rich phase in the partially silicided layer can be Ni2Si and/or Ni3Si2. In a particular embodiment where the Ni-rich phase is Ni2Si, T°1 and the time period of the first thermal step are determined on the basis of figure 6. T°1 can be between about 240°C and about 700°C, is preferably smaller than 500°C, and is more preferably between about 240°C and 350°C. Preferably, T°1 is applied during a period of time varying between about 30 seconds and about 60 seconds. Preferably, said first thermal step consists of a Rapid Thermal Processing (RTP1) step. T°2 can be between about 350°C and about 900°C, is preferably higher than 500°C, and is more preferably between 500°C and about 850°C. Preferably, T°2 is applied during a period of time varying between about 30 seconds and about 60 seconds. Preferably, said second thermal step consists of a Rapid Thermal Processing (RTP2) step. In a method according to the invention, the effective (reacted) Ni/Si ratio is controlled by limiting the RTP1 thermal budget, growing a Ni-rich silicide that does not fully consume the poly-Si thickness. Excess Ni and Ni films on top of spacers/surrounding areas are then removed in the selective etch step. A second RTP step at a higher temperature is then used to grow NiSi, fully siliciding the gates.

The present invention can also be described as follows. We demonstrate, for the first time, the scalability of NiSi and Ni3Si FUSI gate processes down to 30 nm gate lengths, with linewidth-independent phase and Vt control. We show that 1-step FUSI is inadequate for NiSi FUSI gates, because it results in incomplete silicidation at low thermal budgets, or in a linewidth-dependent Ni silicide phase, inducing Vt shifts, at higher thermal budgets. We show that Vt and WF shifts are larger on high-K (HfO2 (250 mV) or HfSiON (330 mV)) than on SiON (110 mV) and report Fermi level unpinning for Ni-rich FUSI on high-K. In contrast, we demonstrate the scalability of Ni3Si FUSI, with no phase control issues, and report HfSiON Ni3Si FUSI PMOS devices with Vt = -0.33 V. Lastly, we show that, for NiSi, phase control down to narrow gate lengths can be obtained with a 2-step FUSI process. Ni FUSI gates have recently attracted attention as metal gate candidates for scaled CMOS technologies (Cf. B. Tavel et al., IEDM Tech. Dig., 825 (2001); J. Kedzierski et al., IEDM Tech. Dig., 247 (2002), 441 (2003); K. G. Anil et al., Symp. VLSI Tech., 190 (2004); A. Veloso et al., IEDM Tech. Dig., 855 (2004); K. Takahashi et al., IEDM Tech. Dig., 91 (2004)). NiSi, NiSi2 and Ni3Si have been studied as possible gate materials.
Due to its high nucleation temperature, NiSi2 is less attractive for integration into self-aligned FUSI gate processes. The scalability of Ni FUSI gate processes to the small gate lengths critical for advanced CMOS applications has not yet been addressed in detail, and is the focus of this work. MOSFET devices with Ni FUSI gates (SiON, HfSiON and HfO2) were fabricated using a self-aligned process with independent silicidation of the source/drain (S/D) and the poly-Si gate using a CMP approach, as described in K. G. Anil et al., Symp. VLSI Tech., 190 (2004) and in A. Veloso et al., IEDM Tech. Dig., 855 (2004). Different Ni/Si ratios were used to obtain the different Ni silicide phases and study their formation as a function of gate length. Physical characterization included TEM, SEM, RBS and XRD (RBS and XRD only for blanket films). For blanket Ni films on poly-Si/dielectric stacks, the silicide phase can be effectively controlled by the Ni/Si thickness ratio (tNi/tSi) when sufficient thermal budgets are used to drive the reaction to completion (Figs. 1, 2). NiSi, Ni2Si and Ni3Si phases were found for tNi/tSi = 0.6, 1.2 and 1.7, respectively (Fig. 1). Since Ni silicides have a limited composition range, mixed-phase films are formed between the stoichiometric ratios (with Ni3Si2 and Ni31Si12 also able to grow at low temperatures). NiSi2 grows by a nucleation-controlled process and does not form uniformly below 600°C, so that incomplete silicidation is seen for tNi/tSi < 0.5. For 0.6 < tNi/tSi < 1, stacks with NiSi at the bottom and Ni-rich silicide layers on top are formed. For tNi/tSi > 1.7, Ni3Si is the stable phase, and the excess Ni is removed in the selective etch step. The resistivity and thickness of the silicide films increase with increasing Ni/Si ratio (Fig. 2). CV measurements performed on FUSI devices showed larger VFB shifts with change in Ni/Si composition ratio for HfSiON than for SiON (330 and 100 mV respectively, between NiSi and Ni3Si, Fig. 3). WFs for the most relevant Ni silicide phases are shown in Fig. 4. A significant increase in WF with increasing Ni/Si ratio is observed for HfSiON devices, with only a milder change seen for SiO2. The difference in NiSi WF observed between HfSiON and SiON is attributed to Fermi level pinning on high-K devices. This difference disappears for Ni-rich silicides, suggesting unpinning of the FL. On patterned devices, the situation is quite different. For narrow lines, the Ni/Si ratio is not well defined: Ni from the top of spacers and surrounding areas can diffuse and react with poly-Si in the gate to increase the effective Ni/Si ratio (Fig. 5). To understand silicidation of narrow gates, we consider the Ni silicide phase sequence. Ni2Si grows at low temperatures by Ni diffusion-limited kinetics (Fig. 6), while NiSi growth follows at higher temperatures with the same type of kinetics (Fig. 7), but only if the available Ni is fully consumed and the poly-Si is not. If the Ni supply is not limited, the reaction reaches completion with full Ni3Si silicidation. As a consequence, a FUSI linewidth effect is found for conventional 1-step FUSI processes. Using conditions developed for blanket films (60 nm Ni/100 nm poly-Si, 520°C 30 s RTP), a transition from full silicidation with NiSi at large gate lengths to full silicidation with Ni-rich silicide at 50 nm gate lengths was found, with the corresponding increase in sheet resistance and silicide thickness (Figs. 8 and 9). The sheet resistance of small gate lengths corresponds to Ni3Si (Fig. 8).
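The blanket-film results above amount to a map from the deposited thickness ratio to the expected silicide outcome. A toy transcription of just the reported ranges (a sketch, not a process model; the boundaries between the stoichiometric points are mixed-phase regions in reality, and the outcome also depends on the thermal budget) could read:

```python
def blanket_phase(t_ni_over_t_si):
    """Expected dominant outcome for a blanket Ni/poly-Si film, encoding
    only the ranges reported in the text (full thermal budget assumed)."""
    r = t_ni_over_t_si
    if r < 0.5:
        return "incomplete silicidation (NiSi2 does not form uniformly below 600 C)"
    if r < 0.6:
        return "NiSi (marginal; near the ~0.55 minimum for full silicidation)"
    if r < 1.0:
        return "bi-layer: NiSi at the interface, Ni-rich silicide on top"
    if r < 1.4:
        return "Ni2Si-dominated (Ni31Si12 possible at higher temperatures)"
    if r < 1.7:
        return "mixed Ni31Si12 / Ni3Si"
    return "Ni3Si (excess Ni removed in the selective etch)"

for ratio in (0.4, 0.6, 0.8, 1.2, 1.7):
    print(ratio, "->", blanket_phase(ratio))
```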
The key negative implication of this linewidth effect is that devices fabricated with this process show kinks in the Vt roll-off characteristics (Figs. 10, 11), consistent with a transition from NiSi to Ni3Si with decreasing gate length. The kink is of ~250 mV on HfO2 and of ~110 mV on SiON, consistent with the difference in WF between NiSi and Ni3Si. The gate length at which the transition occurs depends on the thermal budget used (it is on the order of the Ni-rich silicide thickness grown at that thermal budget) and can also depend on details of the geometry (spacer height, etc.). A split of Vt showing a bi-modal distribution that correlates well with RS values (low Vt-high RS on PMOS) was observed at the transition gate length. In contrast, for a Ni/Si ratio targeting Ni3Si (tNi/tSi=1.7), no phase control issues were observed and the Vt roll-off characteristics are smooth (Figs. 11, 12). Scalability with good phase and Vt control down to 30 nm gate lengths for Ni3Si is demonstrated (Fig. 11). PMOS Vt = -0.33 V was obtained for Ni3Si on HfSiON, making it an attractive system. Vt values for the tNi/tSi=0.6 1-step process (NiSi on large structures) and the tNi/tSi=1.7 (Ni3Si) process are seen to merge at small gate lengths (Fig. 11), further confirming the formation of Ni-rich silicide at small gate lengths in the 1-step FUSI process.

To solve the linewidth dependence of NiSi FUSI, a 2-step NiSi FUSI process (Fig. 5) was developed. The effective (reacted) Ni/Si ratio is controlled by limiting the RTP1 thermal budget, growing a Ni-rich silicide that does not fully consume the poly-Si thickness. Excess Ni and Ni films on top of spacers/surrounding areas are then removed in the selective etch step. A second RTP step at a higher temperature is then used to grow NiSi, fully siliciding the gates. Fig. 13 shows the effect of RTP1 temperature on the sheet resistance of 50 nm and 1000 nm gates for 60 nm Ni/100 nm poly-Si, showing the convergence of RS values with decreasing RTP1 temperature, corresponding to the transition from Ni-rich silicide to NiSi on 50 nm gates. In the 2-step FUSI process, the RTP1 thermal budget needs to be controlled such that the grown Ni2Si layer has a thickness between 0.9 and 1.5 times the poly-Si thickness, to avoid incomplete silicidation and full silicidation with Ni-rich silicide, respectively. The RTP1 process window estimated from Ni2Si kinetic data (Figs. 6 and 7) is shown in Fig. 14. Margins for process variations and the intrinsic non-uniformity of silicidation need to be taken into account, making the process window ≤ 20°C. Figs. 8 and 9 show that a 2-step NiSi FUSI process can eliminate the linewidth dependence, allowing the growth of NiSi on large and small structures. The smooth Vt roll-off for the 2-step NiSi FUSI process further confirms that NiSi can be maintained down to small gate lengths (Figs. 11, 12).

In this work, for the first time, the scalability of NiSi and Ni3Si FUSI gate processes was demonstrated down to 30 nm gate lengths and its underlying mechanisms discussed in detail. For Ni-rich silicides (Ni3Si), the same WF values (4.8 eV) are observed on SiON and HfSiON, suggesting unpinning of the Fermi level for HfSiON devices. A very attractive Vt = -0.33 V is thus obtained for those devices with a scalable process. Smooth Vt roll-off characteristics and elimination of the narrow-line effect were also shown for a 2-step NiSi FUSI process. The Ni-silicide phases and morphology in Ni fully silicided gates were investigated for varying deposited Ni to Si thickness ratios and rapid thermal processing conditions.
The presence of NiSi2, NiSi, Ni3Si2, Ni2Si, Ni31Si12 and Ni3Si as predominant phases was observed for increasing Ni to Si thickness ratios. In most samples, typically two of these phases were detected by X-ray diffraction. No secondary phases were detected on Ni3Si samples (Ni/Si thickness ratio ~1.7). For samples targeting NiSi as gate electrode, RBS and TEM analysis confirmed a layered structure with NiSi at the interface and a Ni-rich silicide layer (Ni2Si, Ni3Si2) on top. Process conditions were determined for the formation of gate electrodes of NiSi, Ni2Si and Ni3Si. Only small changes in flat-band voltage or work function were found between these phases on SiO2 or SiON for undoped samples. While significant changes in work function with dopants were observed for NiSi on SiO2, little or no effect was found for NiSi on HfSiON (suggesting Fermi-level pinning) and for Ni2Si on SiO2. An increase of > 300 mV was found from NiSi to Ni3Si on HfSiON, suggesting unpinning of the Fermi level with the Ni-rich silicide.

Metal gates are expected to replace partially silicided poly-Si gates in future complementary metal-oxide-semiconductor (CMOS) technology nodes, in order to eliminate poly-Si depletion issues. For this application, the work function (WF) is one of the most critical properties to be considered. Recently, there has been significant interest in the application of silicides as metal gate electrodes, and in particular, in NiSi fully-silicided (FUSI) gates (Cf. M. Qin, V. M. C. Poon and S. C. H. Ho, J. Electrochem. Soc., 148, (2001) 271; J. Kedzierski, D. Boyd, P. Ronsheim, S. Zafar, J. Newbury, J. Ott, C. Cabral Jr., M. Ieong and W. Haensch, IEDM Tech. Dig., (2003) 315; J. A. Kittl, A. Lauwers, O. Chamirian, M. A. Pawlak, M. Van Dal, A. Akheyar, M. De Potter, A. Kottantharayil, G. Pourtois, R. Lindsay and K. Maex, Mater. Res. Soc. Symp. Proc., 810, (2004) 31; K. G. Anil, A. Veloso, S. Kubicek, T. Schram, E. Augendre, J.-F. de Marneffe, K. Devriendt, A. Lauwers, S. Brus, K. Henson and S. Biesemans, Symp. VLSI Tech. Dig., (2004) 190). From a processing point of view, it can be implemented as a variation of the Ni self-aligned silicidation process used in previous nodes, in which the silicide is formed in the gate down to the dielectric interface, fully consuming the poly-Si film. Ni-silicide appears as an attractive metal gate candidate that allows maintaining several aspects of the flows from previous generations (such as Si gate pattern and etch, and self-aligned silicide processes). A key property that has attracted attention to NiSi FUSI gates is the modulation of their effective work function on SiO2 by dopants, which may allow tuning of the threshold voltage (Vt) of NMOS and PMOS devices without the need for two different metals. The integration and properties of Ni FUSI gates on high-k dielectrics are also of interest for advanced CMOS applications. Good control of the WF and Vt of devices is an essential requirement for gate electrode applications. In order to assess the ability to control Vt for Ni FUSI gate processes, and given the large number of silicide phases in the Ni-Si system (Cf. A. Nicolet, S. S. Lau, in N. G. Einspruch and G. B. Larrabee (eds.), VLSI Electronics: Microstructure Science, Vol. 6, Ch. 6, Academic Press, New York (1983)), it is important to address (a) the ability to control the Ni-silicide phase at the dielectric interface, and (b) the work functions of different silicide phases (both for conventional and for high-K dielectrics).
A study of these key materials issues is presented in this work. Ni/poly-Si/dielectric stacks were deposited on (100) Si wafers, for varying film thicknesses, in the 30-170 nm and 60-100 nm ranges for the Ni and Si films respectively. Dielectric films used in this study included SiO2, SiON, HfSiON and HfSiON/SiO2 stacks of varying thicknesses, with equivalent oxide thickness (EOT) in the 1-20 nm range. Samples were reacted by rapid thermal processing (RTP) to form silicide films at temperatures in the 280-850°C range, typically for 30 to 60 s. A wet etch used in self-aligned Ni-silicide processes (diluted sulfuric-peroxide solution) was subsequently performed. In some samples a second RTP anneal step was performed after the selective etch. Samples were characterized by X-ray diffraction (XRD) using Cu-Kα radiation, transmission electron microscopy (TEM), scanning electron microscopy (SEM) and Rutherford backscattering spectrometry (RBS). Patterned fully silicided gate devices were also fabricated for electrical characterization, using a chemical-mechanical polishing (CMP) flow as described in reference [4] or a conventional flow (the latter used only for fabrication of capacitors overlapping isolation). Ion implantation was performed on selected samples after poly-Si deposition, with some of these samples receiving an activation anneal.

Ni-silicide phases in fully silicided gates. Several silicide phases can be formed in the Ni-Si system. For the reaction of a thin Ni film with a Si substrate, Ni-rich phases form first at low temperatures (Cf. A. Nicolet, S. S. Lau, in N. G. Einspruch and G. B. Larrabee (eds.), VLSI Electronics: Microstructure Science, Vol. 6, Ch. 6, Academic Press, New York (1983); and C. Lavoie, F. M. d'Heurle, C. Detavernier and C. Cabral Jr., Microelectronic Engineering, 70, (2003) 144). The presence of Ni31Si12 has been reported at early stages of the reaction, followed by formation of Ni2Si. Ni2Si is generally the predominant phase at low temperatures and early stages of the reaction, forming a layer that grows by diffusion-limited kinetics. At higher temperatures, and as Ni is consumed, NiSi nucleates and grows, also by diffusion-limited kinetics. The presence of Ni3Si2 has also been reported during early stages of the reaction, before nucleation of NiSi. The formation of the different Ni-rich silicide phases can also depend on film thickness and thermal history (ramp rates, etc.) during the reaction. As the reaction proceeds, for the case of a Ni film on a Si substrate, NiSi grows, fully consuming the Ni-rich silicides. NiSi2 nucleates and grows at higher temperatures. For Ni FUSI gate applications, deposited Ni films are reacted with either amorphous or polycrystalline Si films of limited thickness, deposited on top of a dielectric. The deposited Ni thickness to Si thickness ratio (tNi/tSi) controls (in combination with the thermal history) the reacted Ni/Si ratio and the phases obtained. It is essential for gate electrode applications that the silicide phase at the dielectric interface be well controlled, in order to ensure good control of the Vt of devices. The phases and morphology after full silicidation for varying tNi/tSi ratios and thermal processes were investigated, in order to assess and identify conditions for formation of gates with a controlled silicide phase at the dielectric interface. NiSi2 films with NiSi as a secondary phase (as determined by XRD) were obtained for tNi/tSi ~ 0.30-0.35 at 800°C (Fig. 15).
For applications in self-aligned silicide processes, the nucleation-controlled growth mechanism of NiSi2 and its high nucleation temperature make it a less appealing candidate. If processing temperatures are kept below the nucleation temperature of NiSi2, a minimum tNi/tSi ratio of ~0.55 is required to allow full silicidation of the gate with NiSi. A larger tNi/tSi ratio (e.g. 0.6) is desirable, however, to ensure full silicidation and prevent the presence of Si grains at the dielectric interface, allowing for possible process variations in deposited film thickness. As a result, when targeting NiSi as the gate electrode material, bi-layer silicide films are typically obtained with NiSi at the bottom and a Ni-rich silicide on top (Figs. 15 to 17). The thickness of each layer depends on the Ni/Si ratio, with a larger proportion of Ni-rich silicide as the ratio is increased (Figs. 15 and 16). The phases present in the upper Ni-rich silicide layer can depend on the Ni/Si ratio chosen and on the thermal history. For samples with a deposited Ni thickness of ~50-70 nm and varying poly-Si thickness, with tNi/tSi ratios in the ~0.6 to 0.9 range and reacted at 450°C, the main phases observed by XRD are NiSi and Ni2Si (Fig. 15), with NiSi and Ni2Si as the bottom and top layers respectively, as indicated by analysis of the RBS spectra (Fig. 16). Characterization of bi-layer samples with deposited tNi/tSi ~ 0.6 by scanning TEM (STEM) energy dispersive X-ray (EDX) analysis was performed for varying processing conditions. The ratio of Ni content (x in NixSi) from the top layer to the bottom layer (xtop layer/xbottom layer) was found to be in the ~1.3 to 2 range, suggesting that Ni3Si2 and/or Ni2Si may be present in the top layer, depending on processing conditions. RBS analysis also suggested that a bi-layer structure with Ni3Si2 as the top layer and NiSi as the bottom layer can be obtained (Fig. 18). We note, however, that RBS analysis can only provide information on the average composition vs. depth, and cannot distinguish between pure phases and phase mixtures. The presence of Ni3Si2 as a secondary phase was confirmed by XRD analysis of NiSi samples formed from deposited tNi/tSi ~ 0.6 by reaction at higher temperatures (HT); see Fig. 19. XRD patterns and RBS spectra after silicidation (and selective etch) for samples with deposited tNi/tSi in the 0.6 to 1.7 range (100 nm poly-Si) are shown in Figs. 19 and 20 respectively. As for samples targeting NiSi, a slightly Ni-richer ratio than the exact stoichiometric ratio was used for samples targeting the different silicide phases. Fig. 19 shows that Ni-silicide phases with increasing Ni content are observed by XRD as the Ni/Si ratio is increased. As tNi/tSi is increased above ~0.9, poly-Si is consumed with formation of Ni-rich silicide phases and NiSi does not form. For tNi/tSi ~ 0.9 reacted at higher temperatures (HT), the presence of Ni3Si2 and Ni2Si is observed by XRD (Fig. 19). Ni2Si films were formed at tNi/tSi ~ 1.2 (Figs. 19 and 20). The XRD patterns also indicate the presence of Ni31Si12 for this thickness ratio, particularly on samples reacted at higher temperatures (Fig. 19). The RBS spectra shown in Fig. 20 for tNi/tSi ~ 1.2 indicate that the silicide film has a higher Ni content in the top portion and a composition of ~ Ni2Si at the interface, suggesting a layered structure with the Ni-richer phase at the top in this case as well. Samples with a more uniform Ni2Si composition could also be obtained (Fig. 18).
For tNi/tSi ~ 1.4, the main phases present, as determined from the XRD spectra, are Ni31Si12 and Ni3Si. At tNi/tSi ~ 1.7, Ni3Si films are formed (Figs. 19 and 20), with no indication of second phases in the XRD pattern. For self-aligned FUSI applications, phase control for Ni3Si is not a significant problem, since this is the Ni-silicide phase with the highest Ni content and in consequence it is stable in contact with Ni. Thus, the reaction reaches completion with formation of a uniform Ni3Si layer, and any excess Ni is then removed in the selective etch. Ni3Si is the only phase obtained for reacted Ni to Si thickness ratios > 1.6.

Electrical characterization of Ni fully silicided gates. The WF of Ni FUSI gates was extracted from capacitance-voltage (CV) measurements performed on devices for several dielectric EOT values. A dual-thickness series with HfSiON/SiO2 stacks of varying SiO2 and HfSiON thicknesses was used to evaluate the WF of NiSi on HfSiON, accounting for the effect of bulk charges in HfSiON. Figure 21 shows the dependence of the flat band voltage (Vfb) on EOT for NiSi/SiO2, NiSi/HfSiON/SiO2 and Ni2Si/SiO2 gate stacks, and the effect of dopants. Shifts in WF with dopants (-230 mV for As and +160 mV for B) are seen for NiSi on SiO2. In contrast, dopant effects on the WF are much smaller for NiSi on HfSiON (Fig. 21 (b)). The effective WF values extracted for undoped NiSi were ~4.72 eV on SiO2 and ~4.5 eV on HfSiON. The lack of significant dopant effects and the WF value observed in this study for NiSi on HfSiON suggest that Fermi level pinning, previously reported for poly-Si gates on Hf-containing high-K dielectrics, is still present for NiSi FUSI gates. Fig. 21 (c) shows that the WF of undoped Ni2Si on SiO2 (~4.7 eV) is quite similar to that of NiSi on SiO2. However, in contrast to the case of NiSi, the WF of Ni2Si on SiO2 appears not to be affected significantly by the addition of dopants. Fig. 22 (a) shows that for Ni3Si/SiON, an increase in Vfb of ~100 mV from the value for NiSi/SiON is obtained. The change in Vfb with silicide phase is considerably larger for the case of HfSiON, with an increase of >300 mV from NiSi to Ni3Si (Fig. 22 (b)), and suggests unpinning of the Fermi level with Ni3Si.

The phases and morphology of Ni FUSI gates were studied for varying Ni to Si thickness ratios. The presence of NiSi2, NiSi, Ni3Si2, Ni2Si, Ni31Si12 and Ni3Si as predominant phases was obtained for increasing Ni to Si ratios. A slightly Ni-richer thickness ratio than that corresponding to stoichiometric NiSi was found suitable for NiSi FUSI gate applications, resulting in a layered structure with NiSi at the interface and a Ni-rich silicide layer on top. No secondary phases were detected on Ni3Si samples (Ni to Si thickness ratio ~ 1.7). Electrical characterization of NiSi, Ni2Si and Ni3Si devices on SiO2, SiON and high-K dielectrics was performed. Only small changes in flat-band voltage or work function were found between these phases on SiO2 or SiON for undoped samples. While significant changes in work function with dopants were observed for NiSi on SiO2, little or no effect was found for NiSi on HfSiON (suggesting Fermi level pinning) and for Ni2Si on SiO2. An increase of > 300 mV was found from NiSi to Ni3Si on HfSiON, suggesting unpinning of the Fermi level with the Ni-rich silicide.
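The work-function values above are extracted from the dependence of the flat-band voltage on EOT (Fig. 21): for a fixed oxide charge Qf, Vfb is linear in EOT, and the intercept at EOT = 0 gives the gate-to-substrate work-function difference. A minimal sketch of that extraction with invented (EOT, Vfb) points follows; the numbers are illustrative, not digitized from the figures, and the assumed substrate work function is a placeholder.

```python
import numpy as np

# Illustrative (EOT, Vfb) pairs -- NOT digitized from Fig. 21.
eot_nm = np.array([2.0, 5.0, 10.0, 20.0])
vfb_v = np.array([-0.36, -0.42, -0.52, -0.72])

# Vfb = phi_ms - (Qf / eps_ox) * EOT: a straight line whose intercept is
# the gate-to-substrate work-function difference phi_ms and whose slope
# carries the fixed oxide charge Qf.
slope, phi_ms = np.polyfit(eot_nm, vfb_v, 1)

phi_substrate = 5.0  # eV, assumed substrate work function (doping-dependent)
print("phi_ms  = %.2f V" % phi_ms)
print("gate WF = %.2f eV" % (phi_ms + phi_substrate))
```

The dual-thickness HfSiON/SiO2 series mentioned above plays the same role while separating the bulk-charge contribution of the high-k layer from the intercept.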
A method according to the present invention comprises selecting the parameters of the first thermal step so as to control the diffusion of metal into the semiconductor such that the (total) amount of metal present in the silicided portion of the semiconductor gate electrode is well controlled and the metal-to-semiconductor ratio is larger than 1. Preferably, the metal is the moving species in the silicidation process, and the pile-up of metal in the semiconductor is controlled. Preferably, abundant metal is present during the first thermal step, so that no metal shortage is likely to occur which would impact the amount of metal incorporated in the semiconductor gate electrode during the first thermal step. Preferably, the semiconductor gate electrode has limited dimensions, i.e. it is not a complete substrate. Consequently, all semiconductor material of the gate electrode will be consumed during complete silicidation. In the case of a gate electrode, the semiconductor gate constitutes a container for accepting the metal, and the whole container can take part in the overall silicidation process. The process parameters of the silicidation process can be determined as described herein and/or as illustrated in figure 23. Figure 23(i) shows the stack of unreacted metal M and semiconductor gate electrode S. For simplicity, the sidewall spacers adjacent to the gate electrode and the source/drain regions present in the substrate on which the gate is formed are not shown. Figure 23(ii) shows the partially silicided gate electrode after the first thermal step and after having selectively removed the unreacted metal M. Figure 23(iii) shows the fully silicided gate electrode after the second thermal step.

A method according to the invention can further comprise:
- selecting the metal phase Mx3Sy3 in the fully silicided semiconductor gate (x3, y3; in other words, the total amount of metal present in the semiconductor gate electrode) with a given thickness;
- optionally selecting the total thickness of the fully silicided gate t3, and hence the thickness of the unsilicided semiconductor gate.
The correlation between the thickness of the fully silicided gate electrode (t3) and the thickness of the unsilicided semiconductor gate (t1) can be made once the metal phase Mx3Sy3 is determined. As shown, each silicide phase is characterized by a certain volume expansion coefficient, as is known from M.-A. Nicolet et al. in "VLSI Electronics: Microstructure Science Vol. 6", editors N. G. Einspruch and G. B. Larrabee, Academic Press, New York 1983, pages 455 to 459. If there is abundant metal available, one can to a first approximation say: t3 = expansion coefficient * t1. Consequently, the thickness t1 is determined. The total amount of metal in the fully silicided gate is the amount of metal that has to be stored in the partially silicided semiconductor gate during the first thermal step. Hence, after the first thermal step a metal-rich silicide phase is to be formed. After this first thermal step, the only metal available for use in the second thermal step is the amount of metal incorporated in the semiconductor gate during the first thermal step. In the second thermal step, only a redistribution of this incorporated amount of metal will occur. The parameters of the first thermal step can be determined by establishing a graph indicating how much metal diffuses into the semiconductor gate electrode for a given time and temperature.
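The metal bookkeeping just described can be made concrete with a few lines of arithmetic. In this sketch the volume-expansion coefficient is an assumed placeholder (real coefficients are tabulated, e.g., in the Nicolet reference above); only the structure of the calculation, fixing the target phase, deriving the metal budget, and deriving the required metal-rich thickness, is the point.

```python
# Target: full silicidation to phase M(x3)S(y3) of a gate whose unsilicided
# semiconductor thickness is t1. Metal is conserved between the two RTP
# steps, so t2 * x2 (metal stored in the partially silicided layer, phase
# M(x2)S(y2)) must match t3 * x3 (metal in the final, fully silicided gate).

T1 = 100.0        # nm, as-deposited poly-Si thickness
EXPANSION = 2.2   # assumed volume-expansion coefficient for the target
                  # phase -- a placeholder, not a tabulated value

x3, y3 = 1, 1     # target interface phase: NiSi
x2, y2 = 2, 1     # metal-rich intermediate: Ni2Si

t3 = EXPANSION * T1       # final fully silicided gate thickness
metal_budget = t3 * x3    # total metal to store during the first thermal step
t2 = metal_budget / x2    # required partially silicided (Ni2Si) thickness

print("t3 = %.0f nm, required Ni2Si thickness t2 = %.0f nm" % (t3, t2))
# t2 * x2 > t3 * x3  -> Ni-rich final gate (first step too long / too hot)
# t2 * x2 < t3 * x3  -> incomplete silicidation after redistribution
```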
The suitable time and temperature settings are thus selected such as to have the necessary amount of metal incorporated in a part of the semiconductor gate. To a first approximation, this amount of metal can be regarded as proportional to t3*x3 and to t2*x2. By selecting x2 and t2 with x2 > x3 and t2 < t3, the phase Mx2Sy2 and the thickness of the silicided portion t2 can be determined. To be more accurate, the total number of metal atoms available before and after the first thermal step can be compared. Figures such as figure 6 can be used for each silicide phase Mx2Sy2, since such figures are representative of the amount of metal (x2*t2) that can be stored for a given time-temperature combination, with only the thickness t2 as parameter. In a method according to the invention, for every metal and for every metal-semiconductor phase envisaged for forming a fully silicided gate electrode, curves (silicidation kinetics graphs) similar to figure 6 can be established. Such curves can be made by any person skilled in the art using known experimental techniques: select a metal, deposit the selected metal on a semiconductor substrate, vary the time and temperature of the first thermal step, and measure the phase x2, y2 and the thickness t2 of the thus-formed silicide phase. Figure 14 shows the result of the second thermal step. In case a fully silicided gate electrode with Mx3Sy3 with x3=y3=1 is envisaged, e.g. NiSi, at least at the interface between the fully silicided gate electrode and the gate dielectric:
- if the first thermal step lasts too long, too much metal will diffuse into the semiconductor gate electrode and will be incorporated such that t2*x2 > t3*x3; hence a metal-rich silicide will be formed;
- if the first thermal step is not long enough, an insufficient amount of metal (t2*x2 < t3*x3) is incorporated in the partially silicided semiconductor gate electrode; consequently, after redistribution, the semiconductor gate electrode might not be completely silicided.

Figure 24 shows the crystallographic characterization of silicide films manufactured using a manufacturing process as illustrated by figure 5. The silicide films are characterized using X-ray diffraction (XRD). These fully silicided gates are obtained by depositing 170 nm of nickel on 100 nm polycrystalline silicon, such that the silicidation reaction is not limited by the supply of the metal, in this example nickel. The gate dielectric is a hafnium-silicon-oxide-nitride dielectric. The two-step thermal process is performed using an ASM Levitor RTP system. The temperature of the first thermal step (RTP1) is varied from about 340°C to about 675°C. The time of this first thermal step was set at about 30 s. A selective etch is performed to remove unreacted nickel after the first thermal step. Thereafter, a second thermal step (RTP2) is executed at 480°C for about 30 s. It is found that, when the reaction is not limited by the availability of nickel, the resulting silicide phase of a fully silicided polysilicon gate can be effectively controlled by the thermal budget of the first thermal step (RTP1). Within the conditions set for figure 24 in terms of thicknesses (170 nm Ni / 100 nm poly-Si) and time (30 seconds), for RTP1 temperatures less than or equal to 350°C the polysilicon gates are not fully silicided (even if abundant nickel is present). Poly-Si (+Si) X-ray diffraction (XRD) peaks are observed, as shown in figure 24.
From 350°C onwards, complete silicidation of the 100 nm polysilicon gate can occur. For RTP1 temperatures in the range of 355°C to 375°C, XRD shows the presence of various nickel silicide phases in the fully silicided gate. Figure 24 illustrates the nickel silicide phases present in the fully silicided gate electrode for first thermal step temperatures of respectively 360°C, 370°C and 375°C: NiSi (open circles), Ni3Si2 (stars) and Ni2Si (open diamonds) phases are formed. Rutherford backscattering spectrometry (RBS) and transmission electron microscopy (TEM) analysis showed that the thus-obtained fully silicided films have a layered structure with the metal-poor phase, in this case NiSi, at the bottom of the fully silicided gate electrode (7), in contact with the underlying gate dielectric (6) (i.e. at the interface), while the metal-rich silicides, in this case Ni3Si2 and Ni2Si, are in the upper part of the fully silicided gate electrode. The work function of these gate electrodes will thus be determined by the NiSi phase at the interface. Within the conditions set for figure 24, the RTP1 temperature process window for forming NiSi, at least at the interface, is about 20°C or less. Within the conditions set for figure 24, the process window of the first thermal step (RTP1) for forming Ni2Si at least adjacent to the gate dielectric (6) (i.e. at least at the interface) is 25°C or less. If the temperature of the first thermal step (RTP1) is above 400°C, Ni31Si12 will start to grow. In the temperature range from about 400°C to about 600°C, a fully silicided gate electrode will be formed during the first thermal step in which essentially only a Ni31Si12 phase can be detected. Figure 24 shows fully silicided gate electrodes formed respectively at RTP1 temperatures of 400°C and 575°C exhibiting only Ni31Si12 XRD peaks (open triangles). If the temperature of the first thermal step (RTP1) is above 625°C, Ni3Si will start to grow. In the temperature range above about 625°C, a fully silicided gate electrode will be formed during the first thermal step in which essentially only a Ni3Si phase can be detected. Figure 24 shows fully silicided gate electrodes formed respectively at RTP1 temperatures of 625°C and 675°C exhibiting only Ni3Si XRD peaks (open squares). The RTP1 temperature process window to form Ni31Si12, at least at the interface, is about 200°C.

The method of controlling the phase formation for a fully silicided gate electrode is illustrated by figure 25, using the experimental results in relation with Fig. 24 for the example of nickel silicide. The Ni to Si reacted ratio is controlled by the thermal budget of the first RTP step (RTP1), which thermal budget is determined by its time and temperature. In figure 25, the time is kept constant at 30 s, while the temperature of the first thermal step is varied to vary the thermal budget thereof. At low RTP1 thermal budgets, i.e. below 350°C, insufficient Ni has reacted, at least for transistors having large gate lengths, e.g. 100 nm or above, and the polycrystalline silicon gate electrode remains incompletely silicided even after performing a second thermal step RTP2.
In the thermal budget range associated with the RTP1 temperature range 350°C to 375°C, sufficient Ni has reacted to result in a full silicidation of the gate electrode after RTP2, with NiSi in contact with the gate dielectric (6) (i.e. at the interface), for gate lengths above and below 100 nm. In the thermal budget range associated with the RTP1 temperature range 375°C to 400°C, sufficient Ni has reacted to result in a full silicidation of the gate electrode after RTP2, with Ni2Si in contact with the gate dielectric (6) (i.e. at the interface), for gate lengths above and below 100 nm. In the thermal budget range associated with the RTP1 temperature range 400°C to 600°C, sufficient Ni has reacted to result in a full silicidation of the gate electrode after RTP2, with Ni31Si12 in contact with the gate dielectric (6) (i.e. at the interface), for gate lengths above and below 100 nm. In the thermal budget range associated with the RTP1 temperature range above 600°C, sufficient Ni has reacted to result in a full silicidation of the gate electrode after RTP2, with Ni3Si in contact with the gate dielectric (6) (i.e. at the interface), for gate lengths above and below 100 nm. |
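Read together, figures 24 and 25 define a step-like map from RTP1 temperature to the phase at the interface after RTP2, for the specific conditions used there (170 nm Ni on 100 nm poly-Si, 30 s RTP1, abundant Ni). A direct transcription of the reported ranges follows; the boundaries are approximate and hold only for those conditions.

```python
def interface_phase_after_rtp2(rtp1_temp_c):
    """Interface silicide phase for the figure 24/25 conditions
    (170 nm Ni / 100 nm poly-Si, 30 s RTP1, abundant Ni).
    Ranges transcribed from the text; boundaries are approximate."""
    t = rtp1_temp_c
    if t <= 350:
        return "incomplete silicidation (poly-Si remains)"
    if t <= 375:
        return "NiSi at the interface (Ni-rich phases on top)"
    if t <= 400:
        return "Ni2Si at the interface"
    if t <= 600:
        return "Ni31Si12"
    return "Ni3Si"

for temp in (340, 360, 390, 500, 650):
    print(temp, "C ->", interface_phase_after_rtp2(temp))
```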
The present disclosure describes a process and apparatus for improving insertion of entries into a hash table. A large number of smaller virtual buckets may be combined and associated with the buckets used for hash table entry lookup and/or entry insertion. On insertion of an entry, hash table entries associated with the hashed-to virtual bucket may be moved between the buckets of the group associated with that virtual bucket, to better distribute entries across the available buckets, reducing the number of entries in the largest buckets and the standard deviation of bucket sizes across the entire hash table. |
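A minimal sketch of the insertion scheme summarized in this abstract and detailed in the claims that follow. All names and the fixed sizes (NUM_BUCKETS, NUM_VBUCKETS, GROUP_SIZE) are illustrative assumptions, not from the disclosure, and the hash choice is likewise arbitrary. The counter bookkeeping (decrement the first bucket by the number of entries moved, increment the second by that number plus one) mirrors claims 3 and 14.

```python
import hashlib

NUM_BUCKETS = 8    # illustrative sizes, not prescribed by the disclosure
NUM_VBUCKETS = 64  # many small virtual buckets shared across the buckets
GROUP_SIZE = 4     # buckets in the group associated with a virtual bucket

buckets = [[] for _ in range(NUM_BUCKETS)]  # entry = (key, value, vbucket id)
counters = [0] * NUM_BUCKETS                # entries currently in each bucket
# Mapping array (cf. claims 4-5): virtual bucket -> bucket now holding it.
mapping = [(vb * GROUP_SIZE) % NUM_BUCKETS for vb in range(NUM_VBUCKETS)]

def vbucket_of(key):
    """Hash the packet/key to a virtual bucket."""
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_VBUCKETS

def group_of(vb):
    """The fixed group of buckets associated with virtual bucket vb."""
    base = (vb * GROUP_SIZE) % NUM_BUCKETS
    return [(base + i) % NUM_BUCKETS for i in range(GROUP_SIZE)]

def insert(key, value):
    vb = vbucket_of(key)
    first = mapping[vb]                       # bucket currently holding vb
    candidate = min(group_of(vb), key=lambda b: counters[b])
    # Move only to a bucket with a strictly lower counter value (claim 1).
    second = candidate if counters[candidate] < counters[first] else first
    if second != first:
        moved = [e for e in buckets[first] if e[2] == vb]
        buckets[first] = [e for e in buckets[first] if e[2] != vb]
        buckets[second].extend(moved)
        counters[first] -= len(moved)         # decrement by the number moved
        counters[second] += len(moved)        # increment by the number moved...
        mapping[vb] = second
    buckets[second].append((key, value, vb))
    counters[second] += 1                     # ...plus one for the new entry

def lookup(key):
    vb = vbucket_of(key)
    for k, v, _ in buckets[mapping[vb]]:
        if k == key:
            return v
    return None
```

Because a virtual bucket's entries always live together in one bucket, a lookup still probes a single bucket; moving the handful of entries of one small virtual bucket is what keeps the largest buckets, and the standard deviation of bucket sizes, down.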
Claims

What is claimed is:
1. An apparatus for computing, comprising:
one or more computer processors;
a storage device coupled to the one or more computer processors;
an insertion module communicatively coupled to the one or more processors, to manage insertion of entries into a hash table residing on the storage device, wherein the hash table has a plurality of buckets divided into groups and the groups of buckets are correspondingly associated with a plurality of virtual buckets; and wherein the insertion module is to:
receive a data packet for insertion as an entry into the hash table;
apply a hash function to the data packet to determine a virtual bucket associated with the entry;
select a first bucket of the group of buckets associated with the virtual bucket;
compare respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets;
determine, based on the comparison, a second bucket of the group of buckets having a lower counter value;
move one or more entries associated with the virtual bucket from the first bucket to the second bucket; and
insert the entry into the second bucket.
2. The apparatus of claim 1, wherein an entry includes a key, a value, and a virtual bucket identifier.
3. The apparatus of claim 1, wherein to move one or more entries associated with the virtual bucket from the first bucket to the second bucket further includes to:
determine a number of entries to be moved;
decrement a counter of the first bucket by the number of entries to be moved; and
increment a counter of the second bucket by the number of entries to be moved plus 1.
4. The apparatus of any one of claims 1-3, wherein to select the first bucket further includes to select the first bucket based upon entries in a mapping array.
5. The apparatus of any one of claims 1-3, wherein the groups of buckets correspondingly associated with the plurality of virtual buckets are identified in a mapping array.
6. The apparatus of claim 5, wherein the mapping array is on the storage device.
7. The apparatus of any one of claims 1-3, wherein a data packet includes a network packet address.
8. The apparatus of any one of claims 1-3, wherein to insert the entry comprises to manage a standard deviation of a plurality of counters of a respective plurality of buckets.
9. The apparatus of any one of claims 1-3, wherein the plurality of buckets is a fixed number.
10. The apparatus of any one of claims 1-3, wherein the number of entries in a bucket is limited to a fixed number.
11. The apparatus of any one of claims 1-3, wherein to insert the entry is to be completed in O(1) operations.
12. The apparatus of any one of claims 1-3, wherein to insert an entry comprises to manage the number of entries respectively in the plurality of buckets.
A method for computing, comprising: receiving, by a computing system, a data packet for insertion as an entry into a hash table having a plurality of buckets divided into groups and the groups of buckets correspondingly associated with a plurality of virtual buckets; applying, by the computing system, a hash function to the data packet to determine a virtual bucket associated with the entry; selecting, by the computing system, a first bucket of a group of buckets associated with the virtual bucket; comparing, by the computing system, respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets; determining, by the computing system, based on the comparison, a second bucket of the group of buckets having a lower counter value; moving, by the computing system, one or more entries associated with the virtual bucket from the first bucket to the second bucket; and inserting, by the computing system, the entry into the second bucket. 14. The method of claim 13, wherein moving one or more entries associated with the virtual bucket from the first bucket to the second bucket further includes: determining, by the computing system, a number of entries to be moved; decrementing, by the computing system, a counter of the first bucket by the number of entries to be moved; incrementing, by the computing system, a counter of the second bucket by the number of entries to be moved plus 1. 15. The method of claim 13, wherein selecting the first bucket further includes selecting the first bucket based upon entries in a mapping array. 16. The method of claim 13, wherein the groups of buckets correspondingly associated with the plurality of virtual buckets are identified in a mapping array. 17. The method of claim 16, wherein the mapping array is on a storage device. 18. The method of claim 13, wherein a data packet includes a network packet address. 19. The method of claim 13, wherein inserting the entry comprises managing, by the computing system, a standard deviation of a plurality of counters of a respective plurality of buckets. 20. The method of claim 13, wherein the plurality of buckets is a fixed number. 21. The method of claim 13, wherein the number of entries in a bucket is limited to a fixed number. 22. The method of claim 13, wherein inserting the entry comprises managing, by the computing system, the number of entries respectively in the plurality of buckets. 23. One or more computer-readable media comprising instructions that cause a computing device, in response to execution of the instructions by the computing device, to perform any one of the methods of claims 13-22. 24.
An apparatus for computing, comprising: means for receiving a data packet for insertion as an entry into a hash table having a plurality of buckets divided into groups and the groups of buckets correspondingly associated with a plurality of virtual buckets; means for applying a hash function to the data packet to determine a virtual bucket associated with the entry; means for selecting a first bucket of a group of buckets associated with the virtual bucket; means for comparing respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets; means for determining, based on the comparison, a second bucket of the group of buckets having a lower counter value; means for moving one or more entries associated with the virtual bucket from the first bucket to the second bucket; and means for inserting the entry into the second bucket. 25. The apparatus of claim 24, wherein moving one or more entries associated with the virtual bucket from the first bucket to the second bucket further includes: means for determining a number of entries to be moved; means for decrementing a counter of the first bucket by the number of entries to be moved; means for incrementing a counter of the second bucket by the number of entries to be moved plus 1. |
HASH TABLE ENTRIES INSERTION METHOD AND APPARATUS USING VIRTUAL BUCKETS. Incorporation by Reference: This application claims priority to U.S. Application No. 15/473,413 entitled "HASH TABLE ENTRIES INSERTION METHOD AND APPARATUS USING VIRTUAL BUCKETS," filed March 29, 2017, which claims priority to U.S. provisional application 62/410,284 entitled "Hash Table Entries Rebalancing Method and Apparatus Using Virtual Buckets" filed October 19, 2016. Technical Field: The present disclosure relates to the fields of computing and networking. In particular, the present disclosure is related to processes and apparatuses for rebalancing hash table entries using virtual buckets. Background: The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section. Various computing/networking applications involve flow classifications and/or hash table lookups. For example, as the telecommunication industry transitions to software defined network (SDN) or network function virtualization (NFV) to support legacy and upcoming usage models such as 5G on standard high volume servers, or the cloud computing industry, scalable and distributed software routing/switching has become one of the key requirements of the packet processing architecture. A two-level distributed hashing architecture has been developed to achieve high performance packet switching/routing using either a cluster of standard high-volume servers or a single system composed of multiple CPU cores. In such architecture, network packets are identified in part by lookup in various hash tables. Whether it is two-level or one-level hashing, the performance of a hash table degrades as the number of elements in each hash "bucket" increases. Brief Description of the Drawings: Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings. FIG. 1 illustrates an example computing/networking device having the hash table entries rebalancing technology of the present disclosure, in accordance with various embodiments. FIGS. 2-4 illustrate an example implementation of the hash table entries rebalancing mechanism of the present disclosure, according to some embodiments. FIG. 5 illustrates an example process for rebalancing entries in a hash table using virtual buckets, in accordance with various embodiments. FIG. 6 illustrates a block diagram of an example architecture of a computing/networking device suitable for use to practice the present disclosure, in accordance with various embodiments. FIG. 7 illustrates an example computer-readable storage medium with instructions configured to enable a computing/networking device to practice aspects of the present disclosure, in accordance with various embodiments. Detailed Description: In legacy hash table implementations, as more entries fill up each bucket of the hash table, it takes longer to find an available bucket to insert a new entry into the table. This results in decreased insertion performance.
In legacy implementations, when entries are inserted randomly into a fixed number of buckets (as is often the case with real workloads), the number of entries in those buckets tends to form a normal distribution, with some buckets containing far more entries than the average, some almost empty, and the rest clustered around the average. This disclosure addresses these problems by providing a mechanism by which the entries in the buckets can be rebalanced at insertion time by adding only O(1) operations. This may have the effect of transferring entries from "more full" to "less full" buckets, reducing the maximum number of entries in any one bucket. This may significantly improve performance, particularly when the hash table is highly sensitive to bucket size. In the following description, various aspects of the illustrative implementations are described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations. In the following description, reference is made to the accompanying drawings that form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments in which the subject matter of the present disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents. For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C). The description may use perspective-based descriptions such as top/bottom, in/out, over/under, and the like. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of embodiments described herein to any particular orientation. The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. The terms "coupled with" and "coupled to" and the like may be used herein. "Coupled" may mean one or more of the following. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements indirectly contact each other, but yet still cooperate or interact with each other, and may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.
By way of example and not limitation, "coupled" may mean two or more elements or devices are coupled by electrical connections on a printed circuit board such as a motherboard, for example. By way of example and not limitation, "coupled" may mean two or more elements/devices cooperate and/or interact through one or more network linkages such as wired and/or wireless networks. By way of example and not limitation, a computing apparatus may include two or more computing devices "coupled" on a motherboard or by one or more network linkages. The term "module" in general, and "insertion module" in particular, may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a combinatorial circuit such as a field programmable gate array (FPGA) programmed with the implementation logic, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs with implementation logic, and/or other suitable components that provide the described functionality with reference to FIGS. 1-5 below. The term "computer-readable storage media" may refer to, be a part of, or otherwise include media on which data, including instructions of a module (e.g., the insertion module described below) that may be executed, may reside. Computer-readable storage media may be either transitory or non-transitory. Various operations are described as multiple discrete operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent. To improve hash table performance, a large number of smaller virtual buckets may be combined together and associated with buckets used for hash table entry lookups and/or entry insertion. In this way, entries may be moved between groups of buckets at entry insertion time while substantially maintaining the performance of the underlying hash table. For example, a hash table may be sized to hold 2^21 entries (a little over 2 million). The table may be made up of 2^16 (65,536) buckets that may each store 2^5 (32) items. In addition, 2^20 virtual buckets may be used, with 2^4 (16) virtual buckets associated with each real bucket. Thus, when randomly inserting entries, each virtual bucket may have on average 2 entries, with some being larger and smaller, again following a statistical normal distribution. As (non-virtual) buckets start to get too full, some virtual buckets may be subtracted from a too-full bucket and added to another bucket that is less full to balance the number of entries per bucket. This is in contrast to legacy hash table implementations that do not use virtual buckets, for example where an entry may be hashed and then 16 bits of the hash may be used to address one of 2^16 buckets. If random keys are inserted in this fashion, it is statistically likely that only about 30% of the 2 million entries may be inserted before one of those buckets exceeds 32 items and the insert fails. In embodiments, implementation of the systems and processes herein will better distribute entries across the available buckets and reduce the number of entries in the largest buckets and the standard deviation of the bucket sizes across the entire hash table.
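The sizing arithmetic in the example above is easy to restate concretely. The following sketch merely encodes the figures from the preceding paragraph; the constant names are our own, not the disclosure's.

```python
# Illustrative sizing for the example table described above. The names are
# ours; the numbers simply restate the arithmetic from the text.
NUM_BUCKETS = 2 ** 16                              # 65,536 real buckets
SLOTS_PER_BUCKET = 2 ** 5                          # 32 entries per bucket
TABLE_CAPACITY = NUM_BUCKETS * SLOTS_PER_BUCKET    # 2**21 entries total
NUM_VIRTUAL_BUCKETS = 2 ** 20                      # virtual buckets
VBUCKETS_PER_BUCKET = NUM_VIRTUAL_BUCKETS // NUM_BUCKETS

assert TABLE_CAPACITY == 2 ** 21                   # a little over 2 million
assert VBUCKETS_PER_BUCKET == 2 ** 4               # 16 virtual buckets per real bucket
assert TABLE_CAPACITY / NUM_VIRTUAL_BUCKETS == 2   # avg. entries per virtual bucket at full load
```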
FIG. 1 illustrates an example computing/networking device having the hash table entries rebalancing technology of the present disclosure, in accordance with various embodiments. A computing/networking device 100 may include an application or networking function 102. In embodiments, this may include an elastic flow distributor (EFD) of an SDN/NFV. In embodiments, the application or networking function 102 may include a number of hash tables 104. In embodiments, the hash tables 104 may include an array of M buckets 106, and virtual buckets 108. Additionally, the application or networking function 102 may further include a mapping array 109 and an insertion module 110 associated with the hash table 104 for inserting and rebalancing the hash table 104. An insertion module 110 may also include a hash function. The insertion module 110 may take an entry as input, compute deterministically the virtual bucket index where that entry may be stored, and adjust the distribution of virtual buckets among buckets to balance the number of entries stored in the buckets. In embodiments, buckets 106 may contain an array of entries, which may also be referred to as an array of keys. In embodiments, the array associated with the buckets 106 may contain a fixed number of entries (N). In embodiments, an entry may include a network packet, an identification for a network packet, or data associated with the network packet. In embodiments, an entry may be any other item of information that one may wish to store in and/or retrieve from a hash table. In embodiments, each of the buckets 106 may include a counter indicating the number of keys, or entries, that are stored in that bucket. In embodiments, one or more virtual buckets 108 may be mapped to one or more of the buckets 106, which may also be referred to as a group of buckets. Virtual buckets 108 may contain a choice that may indicate with which group of buckets 106 a particular virtual bucket may be associated. In embodiments, the mapping array 109 may be an array, or other data structure, that identifies the choice of each virtual bucket and maps the relationship between virtual buckets 108 and groups of buckets 106. In embodiments, computing or networking device 100 may be any one of such devices known in the art, e.g., servers, routers, switches, gateways, and so forth. In particular, in embodiments, computing or networking device 100 may be one of a plurality of networking devices implementing the modular forwarding table scalability technology of U.S. Patent Application Number 14/750,918, filed on June 25, 2015, entitled "TECHNOLOGIES FOR MODULAR FORWARDING TABLE SCALABILITY," which specification is hereby fully incorporated by reference. FIGS. 2-4 illustrate an example implementation of the hash table entries rebalancing mechanism of the present disclosure, according to some embodiments. FIG. 2 illustrates an example implementation of a hash table that uses 256 (0-255) virtual buckets 215, each having a virtual bucket identifier 216 and a number of entries (keys) 217 associated with the virtual bucket. In embodiments, a hash function applied to an entry may produce an identifier 216 for a virtual bucket (e.g., 215a) to which that entry may be assigned. In embodiments, a virtual bucket may have no entries (e.g., 215d) or multiple entries (e.g., 215a or 215b), as reflected by the number 217 for the virtual bucket. In the example, there are 64 (0-63) buckets 218. Each respective virtual bucket 215a, 215b, and so forth may have four choices of the 64 buckets with which it may be associated, as may be indicated by the multiple arrows shown.
In embodiments, there may be a different number of choices for each of the respective virtual buckets 215. A mapping array, such as mapping array 109 of FIG. 1, may be used to identify the buckets associated with each virtual bucket. In embodiments, a function (interpolation) may be used to identify the buckets associated with each virtual bucket. This may include, for example, using an offset and multiplication based on an initial number, using a fixed pseudorandom shuffle array, or some other suitable algorithm. In the example implementation, a bucket identifier may be indicated in the upper left corner, for example, bucket "0" 222a and bucket "1" 222b. For each virtual bucket 215 there may be a group of buckets 218 with which the virtual bucket may be associated. This information may be recorded in a mapping array as discussed above. This example implementation includes, for virtual bucket "0" 215a, a mapping to bucket "0" 218a and, for virtual bucket "2" 215b, a mapping to buckets "1" 218b, "2" 218c, "4" 218d, and "6" 218e. Consequently, bucket "1" 218b is associated with virtual buckets "2" 215b and "7" 215c. The total number of entries associated with a bucket 218a may be indicated in the lower left corner of the bucket, for example the number "4" 224a. This number for a particular bucket may be the sum of the entries associated with each of the virtual buckets 215 associated with the bucket. For example, "5" 224b is the sum of the number of entries of virtual bucket 2 (3) and virtual bucket 7 (2). FIG. 3 illustrates an example implementation of an insertion of an entry into the hash table. For example, an entry to insert 326 may have part of its data used as input to a hash function 328 that may result in a hash value 330. In this example, the hash value 330 corresponds to virtual bucket "2" 215b, which already has 3 entries associated with it (217b). Virtual bucket "2" 215b is associated with four buckets 218b, 218c, 218d, 218e. The four buckets 218b, 218c, 218d, 218e associated with virtual bucket "2" 215b are examined to determine whether rebalancing may be needed. The current entry counts 224b, 224c, 224d, 224e for each of the four buckets 218b, 218c, 218d, 218e are compared to determine which bucket has the lowest number of entries. In this example, bucket "4" 218d has an entry count of "1" 224d, which is lower than "5" 224b, "9" 224c, or "5" 224e. Therefore, rebalancing of the hash table may proceed by moving the entries associated with virtual bucket "2" 215b from bucket "1" 218b to bucket "4" 218d. After rebalancing, the entry 326 may be inserted into bucket "4" 218d. In embodiments, the insertion may happen first, for example into bucket "1" 218b, and then rebalancing performed after insertion. FIG. 4 illustrates an example implementation of rebalancing based on an insertion of an entry into the hash table. The four entries of virtual bucket "2" 415b (which may be similar to 215b of FIG. 3, but now incremented by one to account for the inserted entry), previously associated with bucket "1" 418b (which may be similar to bucket "1" 218b of FIG. 3), have been moved to bucket "4" 418d (which may be similar to bucket "4" 218d of FIG. 3). As a result, the bucket "1" count 424b, which may be similar to 224b of FIG. 3, is now 2 (5-3), and the bucket "4" count 424d, which may be similar to 224d of FIG. 3, is now 5 (the original 1 entry plus the 3 entries moved plus the 1 new entry inserted).
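Before turning to FIG. 5, the bucket choice in the FIG. 3 walkthrough can be checked numerically: selecting the second bucket is just a minimum-counter scan over the virtual bucket's group. A toy sketch using the figure's numbers (the dictionary layout is our own illustration, not the disclosure's data structure):

```python
# Reconstruct the FIG. 3 scenario: virtual bucket "2" maps to a group of
# four buckets, whose current per-bucket entry counters are compared.
group_of = {2: [1, 2, 4, 6]}             # virtual bucket -> its group of buckets
bucket_count = {1: 5, 2: 9, 4: 1, 6: 5}  # per-bucket entry counters

group = group_of[2]
second_bucket = min(group, key=lambda b: bucket_count[b])
assert second_bucket == 4  # bucket "4" has the lowest count, as in FIG. 3
```

Because each group holds a fixed number of candidate buckets (four here), the scan costs a constant number of comparisons, which is the source of the O(1) figures cited in the process description that follows.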
FIG. 5 illustrates an example process for rebalancing entries in a hash table using virtual buckets, in accordance with various embodiments. The process 500 may be performed, for example, by the device 100 of FIG. 1 or the system 600 (e.g., computing device) configured to implement the insertion module 650 (similar to insertion module 110) and/or the hashing function module 652 (similar to hashing function 328), described in reference to FIGS. 1-5. The process 500 may begin at block 502, and may include receiving a data packet for insertion as an entry into a hash table having a plurality of buckets divided into groups and the groups of buckets correspondingly associated with the plurality of virtual buckets. In embodiments, this may include a plurality of buckets such as buckets 218 of FIG. 2. In embodiments, the buckets may include a number of entries, for example entries of data packets. There may be a fixed number of buckets (M). The buckets may include a fixed number of slots for entries (N). The plurality of virtual buckets, such as virtual buckets 215 of FIG. 2, may each be mapped to a group of one or more buckets. In embodiments, because the number of virtual buckets 215 exceeds the number M of buckets 218, a hashing function, for example as implemented by the hashing function module 652 of FIG. 6 or 328 of FIG. 3, may distribute hashed entries over the virtual buckets with a smaller maximum bucket size than may be achieved if the hashing function mapped entries directly into the limited number of buckets. In embodiments, an entry may include all or a portion of the data packet, or may be some other data item that is to be stored in a hash table and later retrieved. At block 504, the process 500 may include applying a hash function to the data packet to determine a virtual bucket associated with the entry. In embodiments, the hash function may be implemented by the hash function module 652 of FIG. 6 or 328 of FIG. 3. In embodiments, the hash function may be any suitable hash function, where the results of the hash function identify a virtual bucket with which the entry may be associated. At block 506, the process 500 may include selecting a first bucket of a group of buckets associated with the virtual bucket. In embodiments, this may include the functionality described with respect to FIG. 3 or the insertion module 650 of FIG. 6. In embodiments, there may be a default bucket within the group of buckets for a virtual bucket that is identified as the first bucket. For example, this may be the bucket having the lowest identification number, such as bucket 218b of FIG. 3. In embodiments, some other suitable algorithm may be used to select the first bucket. In embodiments, a mapping array, such as mapping array 109 of FIG. 1, may be used to identify the virtual bucket and to identify one or more buckets, or a group of buckets, that are associated with the virtual bucket. In addition, the mapping array 109 may be used to determine the default bucket or first bucket associated with the virtual bucket. In embodiments, the computational complexity of the actions of this block may involve O(1) additional time for the indirect action of hashing to a virtual bucket in order to determine a bucket. At block 508, the process 500 may include comparing respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets. In embodiments, the counters may be similar to a counter 224a associated with bucket 218a of FIG. 2.
In embodiments, the counter for a bucket may count entries associated with multiple virtual buckets that are associated with the bucket. At block 510, the process 500 may include determining, based on the comparison, a second bucket of the group of buckets having a lower counter value. In embodiments, the second bucket of the group of buckets may correspond to bucket "4" 218d of FIG. 2, having a counter value of "1" 224d. In embodiments, the computational complexity of the actions of blocks 508 and/or 510 may involve O(1) additional time to scan through groups of buckets to select a second bucket. At block 512, the process 500 may include moving one or more entries associated with the virtual bucket from the first bucket to the second bucket. In embodiments, the result of this move may be seen in the three entries associated with virtual bucket "2" 415b moved from bucket "1" 418b to bucket "4" 418d. In embodiments, the entries associated with other virtual buckets that are associated with bucket "1" 418b are not moved. In embodiments, all of the entries associated with the virtual bucket "2" 415b may be moved to the second bucket "4" 418d. In embodiments, the counts associated with each bucket may be updated accordingly. In embodiments, the computational complexity of the actions of this block may involve O(1) additional time for moving entries from the first to the second bucket. At block 514, the process 500 may include inserting the entry into the second bucket. In embodiments, the result of this insertion may be seen by incrementing the entry count of the counter 424d associated with bucket "4" 418d, as well as incrementing the element count of virtual bucket "2" 417b from 3 to 4, as shown in FIG. 4. In embodiments, the computational complexity of the actions of this block may involve O(1) additional time for updating counters. It should be understood that the actions described in reference to process 500 may not necessarily occur in the described sequence. In addition, some actions may be added or omitted.
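Taken together, blocks 502-514 amount to the following minimal sketch, assuming a toy in-memory table. The class name, the formula used to derive each virtual bucket's group, and the use of Python sets for storage are our own illustrative choices, not details from the disclosure.

```python
import hashlib

class VirtualBucketTable:
    """Toy illustration of blocks 502-514 of process 500."""

    def __init__(self, num_buckets=64, num_vbuckets=256, choices=4):
        self.buckets = [set() for _ in range(num_buckets)]   # entries per bucket
        self.counts = [0] * num_buckets                      # per-bucket counters
        # Mapping array: each virtual bucket's group of candidate buckets.
        # This derivation formula is invented for illustration only.
        self.group = [[(v * 7 + i * 13) % num_buckets for i in range(choices)]
                      for v in range(num_vbuckets)]
        self.choice = [0] * num_vbuckets                     # current bucket choice
        self.vbucket_keys = [set() for _ in range(num_vbuckets)]
        self.num_vbuckets = num_vbuckets

    def _vbucket(self, key: bytes) -> int:
        # Block 504: hash the packet/key to a virtual bucket.
        return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % self.num_vbuckets

    def insert(self, key: bytes) -> None:
        v = self._vbucket(key)
        if key in self.vbucket_keys[v]:                      # ignore duplicates in this sketch
            return
        group = self.group[v]
        first = group[self.choice[v]]                        # block 506: current bucket
        second = min(group, key=lambda b: self.counts[b])    # blocks 508-510: least loaded
        if second != first:                                  # block 512: move entries
            moved = self.vbucket_keys[v]
            self.buckets[first] -= moved
            self.buckets[second] |= moved
            self.counts[first] -= len(moved)
            self.counts[second] += len(moved)
            self.choice[v] = group.index(second)
        self.buckets[second].add(key)                        # block 514: insert entry
        self.counts[second] += 1
        self.vbucket_keys[v].add(key)
```

For example, `VirtualBucketTable().insert(b"packet-1")` hashes the key to a virtual bucket, picks the least-loaded bucket in its group, moves that virtual bucket's existing keys if a better bucket exists, and records the new entry. The counter bookkeeping matches claim 3: the first bucket's counter is decremented by the number of entries moved, and the second bucket's counter ends up incremented by that number plus one. Because each group has a fixed number of candidate buckets, the extra scanning and bookkeeping add only O(1) work per insertion.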
FIG. 6 illustrates a block diagram of an example architecture of a computing/networking device suitable for use to practice the present disclosure, in accordance with various embodiments. As shown, computing device 600 may include one or more processors 602, each having one or more processor cores, and system memory 604. The processor 602 may include any type of unicore or multi-core processors. Each processor core may include a central processing unit (CPU) and one or more levels of cache. The processor 602 may be implemented as an integrated circuit. The computing device 600 may include mass storage devices 606 (such as diskette, hard drive, volatile memory (e.g., dynamic random access memory (DRAM)), compact disc read only memory (CD-ROM), digital versatile disk (DVD), and so forth). In general, system memory 604 and/or mass storage devices 606 may be temporal and/or persistent storage of any type, including, but not limited to, volatile and non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth. Volatile memory may include, but not be limited to, static and/or dynamic random access memory. Non-volatile memory may include, but not be limited to, electrically erasable programmable read only memory, phase change memory, resistive memory, and so forth. The computing device 600 may further include input/output (I/O) devices 608, such as a display, keyboard, cursor control, remote control, gaming controller, image capture device, one or more three-dimensional cameras used to capture images, and so forth, and communication interfaces 610 (such as network interface cards, modems, infrared receivers, radio receivers (e.g., Bluetooth), and so forth). I/O devices 608 may be suitable for communicative connections with user devices or other system devices. In some embodiments, I/O devices 608, when used as user or system devices, may include a device necessary for implementing the functionality of receiving a data packet for insertion as an entry, as described in reference to FIG. 5. The communication interfaces 610 may include communication chips (not shown) that may be configured to operate the device 600 in accordance with a Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or Long Term Evolution (LTE) network. The communication chips may also be configured to operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chips may be configured to operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The communication interfaces 610 may operate in accordance with other wireless protocols in other embodiments. The above-described computing device 600 elements may be coupled to each other via system bus 612, which may represent one or more buses. In the case of multiple buses, they may be bridged by one or more bus bridges (not shown). Each of these elements may perform its conventional functions known in the art. In particular, system memory 604 and mass storage devices 606 may be employed to store a working copy and a permanent copy of the programming instructions implementing the operations and functionalities associated with FIG. 1 and/or FIG. 5, generally shown as computational logic 622. Computational logic 622 may be implemented by assembler instructions supported by processor(s) 602 or high-level languages that may be compiled into such instructions. In embodiments, the computational logic 622 may contain an insertion module 650, which may perform one or more of the functions associated with FIGS. 1-5. Computational logic 622 may also contain a hash function module 652, which may perform one or more of the hash functions associated with FIGS. 1-5. The permanent copy of the programming instructions may be placed into mass storage devices 606 in the factory, or in the field, through, for example, a distribution medium (not shown), such as a compact disc (CD), or through communication interfaces 610 (from a distribution server (not shown)). In embodiments, computing device 600 may be a wearable device, a smartphone, a computing tablet, a laptop computer, a desktop computer, a server, a set-top box, a game console, a router, a switch, a gateway, or other networking equipment.
FIG. 7 illustrates an example computer-readable storage medium with instructions configured to enable a computing/networking device to practice aspects of the present disclosure, in accordance with various embodiments. As illustrated, non-transitory computer-readable storage medium 702 may include a number of programming instructions 704 (e.g., including insertion module 650 and hashing function module 652). Programming instructions 704 may be configured to enable a device, e.g., computing device 600, in response to execution of the programming instructions, to perform one or more operations of the processes described in reference to FIGS. 1-5. In alternate embodiments, programming instructions 704 may be disposed on multiple non-transitory computer-readable storage media 702 instead. In still other embodiments, programming instructions 704 may be encoded in transitory computer-readable signals. Referring also to FIG. 6, for some embodiments, processor 602 may be packaged together with a computer-readable storage medium having programming instructions 704 configured to practice all or selected aspects of the hash table insertion and rebalancing related operations earlier described. For one embodiment, processor 602 may be packaged together with a computer-readable storage medium having programming instructions 704 to form a System in Package (SiP). For one embodiment, processor 602 may be integrated on the same die with a computer-readable storage medium having programming instructions 704. For one embodiment, processor 602 may be packaged together with a computer-readable storage medium having programming instructions 704 to form a System on Chip (SoC). The corresponding structures, materials, acts, and equivalents of all means or steps plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements, as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure.
The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for embodiments with various modifications as are suited to the particular use contemplated. EXAMPLES Examples, according to various embodiments, may include the following. Example 1 may be an apparatus for computing, comprising: one or more computer processors; a storage device coupled to the one or more computer processors; an insertion module communicatively coupled to the one or more processors, to manage insertion of entries into a hash table residing on the storage device, wherein the hash table has a plurality of buckets divided into groups and the groups of buckets are correspondingly associated with a plurality of virtual buckets; and wherein the insertion module is to: receive a data packet for insertion as an entry into the hash table; apply a hash function to the data packet to determine a virtual bucket associated with the entry; select a first bucket of the group of buckets associated with the virtual bucket; compare respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets; determine, based on the comparison, a second bucket of the group of buckets having a lower counter value; move one or more entries associated with the virtual bucket from the first bucket to the second bucket; and insert the entry into the second bucket. Example 2 may include the apparatus of example 1, wherein an entry includes a key, a value, and a virtual bucket identifier. Example 3 may include the apparatus of example 1, wherein move one or more entries associated with the virtual bucket from the first bucket to the second bucket further includes: determine a number of entries to be moved; decrement a counter of the first bucket by the number of entries to be moved; increment a counter of the second bucket by the number of entries to be moved plus 1. Example 4 may include the apparatus of any one of examples 1-3, wherein to select the first bucket further includes to select the first bucket based upon entries in a mapping array. Example 5 may include the apparatus of any one of examples 1-3, wherein the groups of buckets correspondingly associated with the plurality of virtual buckets are identified in a mapping array. Example 6 may include the apparatus of example 5, wherein the mapping array is on the storage device. Example 7 may include the apparatus of any one of examples 1-3, wherein a data packet includes a network packet address. Example 8 may include the apparatus of any one of examples 1-3, wherein to insert the entry comprises to manage a standard deviation of a plurality of counters of a respective plurality of buckets. Example 9 may include the apparatus of any one of examples 1-3, wherein the plurality of buckets is a fixed number. Example 10 may include the apparatus of any one of examples 1-3, wherein the number of entries in a bucket is limited to a fixed number. Example 11 may include the apparatus of any one of examples 1-3, wherein to insert the entry is to be complete in O(1) operations. Example 12 may include the apparatus of any one of examples 1-3, wherein to insert an entry comprises to manage the number of entries respectively in the plurality of buckets. Example 13 may be a method for computing, comprising: receiving, by a computing system, a data packet for insertion as an entry into a hash table having
a plurality of buckets divided into groups and the groups of buckets correspondingly associated with a plurality of virtual buckets; applying, by the computing system, a hash function to the data packet to determine a virtual bucket associated with the entry; selecting, by the computing system, a first bucket of a group of buckets associated with the virtual bucket; comparing, by the computing system, respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets; determining, by the computing system, based on the comparison, a second bucket of the group of buckets having a lower counter value; moving, by the computing system, one or more entries associated with the virtual bucket from the first bucket to the second bucket; and inserting, by the computing system, the entry into the second bucket. Example 14 may include the method of example 13, wherein moving one or more entries associated with the virtual bucket from the first bucket to the second bucket further includes: determining, by the computing system, a number of entries to be moved; decrementing, by the computing system, a counter of the first bucket by the number of entries to be moved; incrementing, by the computing system, a counter of the second bucket by the number of entries to be moved plus 1. Example 15 may include the method of any one of examples 13-14, wherein selecting the first bucket further includes selecting the first bucket based upon entries in a mapping array. Example 16 may include the method of any one of examples 13-14, wherein the groups of buckets correspondingly associated with the plurality of virtual buckets are identified in a mapping array. Example 17 may include the method of example 16, wherein the mapping array is on a storage device. Example 18 may include the method of any one of examples 13-14, wherein a data packet includes a network packet address. Example 19 may include the method of any one of examples 13-14, wherein inserting the entry comprises managing, by the computing system, a standard deviation of a plurality of counters of a respective plurality of buckets. Example 20 may include the method of any one of examples 13-14, wherein the plurality of buckets is a fixed number. Example 21 may include the method of any one of examples 13-14, wherein the number of entries in a bucket is limited to a fixed number. Example 22 may include the method of any one of examples 13-14, wherein inserting the entry is to be complete in O(1) operations. Example 23 may include the method of any one of examples 13-14, wherein inserting the entry comprises managing, by the computing system, the number of entries respectively in the plurality of buckets.
Example 24 may be one or more computer-readable media comprising instructions that cause a computing device, in response to execution of the instructions by the computing device, to: receive, by the computing device, a data packet for insertion as an entry into a hash table having a plurality of buckets divided into groups and the groups of buckets correspondingly associated with a plurality of virtual buckets; apply, by the computing device, a hash function to the data packet to determine a virtual bucket associated with the entry; select, by the computing device, a first bucket of a group of buckets associated with the virtual bucket; compare, by the computing device, respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets; determine, by the computing device, based on the comparison, a second bucket of the group of buckets having a lower counter value; move, by the computing device, one or more entries associated with the virtual bucket from the first bucket to the second bucket; and insert, by the computing device, the entry into the second bucket. Example 25 may include the one or more computer-readable media of example 24, wherein move one or more entries associated with the virtual bucket from the first bucket to the second bucket includes: determine, by the computing device, a number of entries to be moved; decrement, by the computing device, a counter of the first bucket by the number of entries to be moved; increment, by the computing device, a counter of the second bucket by the number of entries to be moved plus 1. Example 26 may include the one or more computer-readable media of any one of examples 24-25, wherein select the first bucket further includes select the first bucket based upon entries in a mapping array. Example 27 may include the one or more computer-readable media of any one of examples 24-25, wherein the groups of buckets correspondingly associated with the plurality of virtual buckets are identified in a mapping array. Example 28 may include the one or more computer-readable media of example 27, wherein the mapping array is on a storage device. Example 29 may include the one or more computer-readable media of any one of examples 24-25, wherein a data packet includes a network packet address. Example 30 may include the one or more computer-readable media of any one of examples 24-25, wherein insert the entry comprises manage a standard deviation of a plurality of counters of a respective plurality of buckets.
Example 31 may include the one or more computer-readable media of any one of examples 24-25, wherein the plurality of buckets is a fixed number. Example 32 may include the one or more computer-readable media of any one of examples 24-25, wherein the number of entries in a bucket is limited to a fixed number. Example 33 may include the one or more computer-readable media of any one of examples 24-25, wherein insert the entry is to be complete in O(1) operations. Example 34 may include the one or more computer-readable media of any one of examples 24-25, wherein insert the entry comprises manage the number of entries respectively in the plurality of buckets. Example 35 may be an apparatus for computing, comprising: means for receiving a data packet for insertion as an entry into a hash table having a plurality of buckets divided into groups and the groups of buckets correspondingly associated with a plurality of virtual buckets; means for applying a hash function to the data packet to determine a virtual bucket associated with the entry; means for selecting a first bucket of a group of buckets associated with the virtual bucket; means for comparing respective counters of the group of buckets, wherein the respective counters indicate a number of entries associated with each bucket of the group of buckets; means for determining, based on the comparison, a second bucket of the group of buckets having a lower counter value; means for moving one or more entries associated with the virtual bucket from the first bucket to the second bucket; and means for inserting the entry into the second bucket. Example 36 may include the apparatus of example 35, wherein moving one or more entries associated with the virtual bucket from the first bucket to the second bucket further includes: means for determining a number of entries to be moved; means for decrementing a counter of the first bucket by the number of entries to be moved; means for incrementing a counter of the second bucket by the number of entries to be moved plus 1. Example 37 may include the apparatus of any one of examples 35-36, wherein selecting the first bucket further includes means for selecting the first bucket based upon entries in a mapping array. Example 38 may include the apparatus of any one of examples 35-36, wherein the groups of buckets correspondingly associated with the plurality of virtual buckets are identified in a mapping array. Example 39 may include the apparatus of example 38, wherein the mapping array is on a storage device. Example 40 may include the apparatus of any one of examples 35-36, wherein a data packet includes a network packet address. Example 41 may include the apparatus of any one of examples 35-36, wherein inserting the entry comprises means for managing a standard deviation of a plurality of counters of a respective plurality of buckets. Example 42 may include the apparatus of any one of examples 35-36, wherein the plurality of buckets is a fixed number. Example 43 may include the apparatus of any one of examples 35-36, wherein the number of entries in a bucket is limited to a fixed number. Example 44 may include the apparatus of any one of examples 35-36, wherein inserting the entry is to be complete in O(1) operations. Example 45 may include the apparatus of any one of examples 35-36, wherein inserting the entry comprises means for managing the number of entries respectively in the plurality of buckets. |
Methods, apparatus, and machine-readable media are described for terminating a memory bus line. In some embodiments, the memory bus line is terminated with one or more transistors of an output buffer that are used to drive the memory bus line during a memory write. |
What is claimed is: [c1] 1. A method comprising driving a memory bus line through either a first impedance device or a second impedance device in response to a memory write, and terminating the memory bus line with the first impedance device and the second impedance device after driving the memory bus line. [c2] 2. The method of claim 1 wherein terminating occurs during a memory read. [c3] 3. The method of claim 1 wherein terminating occurs during an idle state of the memory bus line. [c4] 4. The method of claim 1 wherein driving comprises selectively turning on one or more switching devices of either the first impedance device or the second impedance device to provide the memory bus line with a first impedance. [c5] 5. The method of claim 4 wherein terminating comprises selectively turning on one or more switching devices of the first impedance device and one or more switching devices of the second impedance device to provide the memory bus line with a second impedance. [c6] 6. A method comprising turning on either one or more pull-up transistors or one or more pull-down transistors to drive a memory bus line in response to a memory write, and turning on one or more of the pull-up transistors and one or more of the pull-down transistors to terminate the memory bus line in response to a memory read. [c7] 7. The method of claim 6 further comprising determining which pull-up transistors and pull-down transistors to turn on in response to the memory write to provide the memory bus line with a first impedance that is adjusted for environmental variations. [c8] 8. The method of claim 7 further comprising determining which pull-up transistors and pull-down transistors to turn on in response to the memory read to provide the memory bus line with a second impedance that is adjusted for environmental variations. [c9] 9. A memory controller comprising a memory line terminal to couple to a memory bus line, an output buffer coupled to the memory bus line terminal to drive the memory bus line in response to first control signals and to terminate the memory bus line in response to second control signals, and circuitry to provide the output buffer with the first control signals in response to a memory write and to provide the output buffer with the second control signals in response to a memory read. [c10] 10. The memory controller of claim 9 further comprising a receiver coupled to the memory bus line terminal to receive data during the memory read. [c11] 11. The memory controller of claim 10 further comprising a write latch coupled to the output buffer to provide the output buffer with data to drive on the memory bus line in response to the first control signals, and a read latch coupled to the receiver to latch data received by the receiver during the memory read. [c12] 12. The memory controller of claim 9 wherein the circuitry is to further program the output buffer with a first impedance during the memory write and is to program the output buffer with a second impedance during the memory read such that the first impedance and second impedance are adjusted for process variations. [c13] 13.
The memory controller of claim 9 wherein the output buffer comprises a plurality of first transistors coupled between a first voltage source and the memory bus line terminal, and a plurality of second transistors coupled between a second voltage source and the memory bus line terminal, and the circuitry is to generate the second control signals to selectively turn on one or more of the first transistors and one or more of the second transistors during the memory read. [c14] 14. The memory controller of claim 13 wherein the circuitry is to generate the first control signals to selectively turn on either one or more of the first transistors or one or more of the second transistors during the memory write. [c15] 15. The memory controller of claim 13 wherein the circuitry further comprises a table to provide a first indication as to which transistors of the first transistors and the second transistors to turn on during the memory write, and to provide a second indication as to which transistors of the first transistors and the second transistors to turn on during the memory read. [c16] 16. The memory controller of claim 15 wherein the table is to select the first control value and the second control value from a plurality of control values based upon an index value derived from one or more environmental parameters. [c17] 17. A computing device, comprising a processor to generate read requests and write requests, a volatile memory to store data, and a memory controller coupled to the processor via a processor bus and coupled to the volatile memory via a memory bus, the memory controller comprising an output buffer to write data to the volatile memory via the memory bus, a receiver to receive data from the volatile memory via the memory bus, and circuitry to cause the output buffer to write data to the volatile memory in response to a write request of the processor bus and to cause the output buffer to provide the memory bus with a termination impedance in response to a read request of the processor bus. [c18] 18. The computing device of claim 17, wherein the output buffer comprises a first impedance device coupled between a memory bus line of the memory bus and a first voltage source, and a second impedance device coupled between the memory bus line and a second voltage source, and the circuitry is to cause the first impedance device to pull the memory bus line toward the first voltage source to write first data, to cause the second impedance device to pull the memory bus line toward the second voltage source to write second data, and to cause both the first impedance device and the second impedance device to respectively pull the memory bus line toward the first voltage source and the second voltage source in response to the read request. [c19] 19.
The computing device of claim 17, wherein the output buffer comprises a first programmable impedance device having a first impedance magnitude that is controlled by a first control signal, the first programmable impedance device to pull the memory bus line toward a first voltage source, and a second programmable impedance device having a second impedance magnitude that is controlled by a second control signal, the second programmable impedance device to pull the memory bus line toward a second voltage source, and the circuitry is to generate the first control signal to drive a first data signal on the memory bus, is to generate the second control signal to drive a second data signal on the memory bus, and is to generate the first control signal and the second control signal to terminate the memory bus during a memory read. [c20] 20. The computing device of claim 19, wherein the memory comprises double data rate memory. [c21] 21. A method comprising driving a memory bus line with an output buffer during a memory write, and terminating the memory bus line with the output buffer during a memory read. [c22] 22. The method of claim 21 wherein terminating comprises programming a pull-up impedance device and a pull-down impedance device to provide a termination impedance for the memory bus line during the memory read. [c23] 23. The method of claim 22 wherein programming comprises turning on one or more transistors of the pull-up impedance device to establish a pull-up impedance, and turning on one or more transistors of the pull-down impedance device to establish a pull-down impedance, the pull-up impedance and pull-down impedance providing the termination impedance for the memory bus line. [c24] 24. A machine-readable medium comprising a plurality of instructions that, in response to being executed, result in a system driving a memory bus line with an output buffer during a memory write, and terminating the memory bus line with the output buffer during a memory read. [c25] 25. The machine-readable medium of claim 24 wherein terminating comprises programming a pull-up impedance device and a pull-down impedance device to provide a termination impedance for the memory bus line during the memory read. [c26] 26. The machine-readable medium of claim 25 wherein programming comprises turning on one or more transistors of the pull-up impedance device to establish a pull-up impedance, and turning on one or more transistors of the pull-down impedance device to establish a pull-down impedance, the pull-up impedance and pull-down impedance providing the termination impedance for the memory bus line. |
MEMORY BUS TERMINATION. BACKGROUND [0001] Data transfer rates between system memory and memory controllers are ever increasing. To improve signal integrity at these higher transfer rates, memory devices and memory controllers include terminating resistors that match the impedance of the memory bus lines in order to reduce signal reflections on the memory bus lines. Traditional memory controllers include separate terminating resistors that are coupled to the memory bus lines during read and/or idle states of the memory bus line. Further, these memory controllers include additional logic to maintain a constant resistance over process, voltage, and temperature. These memory controllers further include circuitry to disconnect the terminating resistors from the memory bus lines during memory writes. The additional terminating resistors, logic, and circuitry associated with terminating memory bus lines consume additional die area. BRIEF DESCRIPTION OF THE DRAWINGS [0002] The invention described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. [0003] FIG. 1 illustrates an embodiment of a computing device. [0004] FIG. 2 illustrates an embodiment of a memory controller of the computing device of FIG. 1. [0005] FIG. 3 illustrates an embodiment of a memory input/output buffer of the memory controller of FIG. 2. [0006] FIG. 4 illustrates operation of an embodiment of the memory controller depicted in FIGS. 2 and 3. DETAILED DESCRIPTION [0007] The following description describes techniques for terminating memory bus lines. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits, and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. [0008] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
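For context on the Background's impedance-matching point: the fraction of an incident signal reflected at a termination is given by the standard transmission-line relation Γ = (Z_term − Z_0)/(Z_term + Z_0), so a termination matched to the line impedance suppresses reflections entirely. A small sketch (this formula is textbook transmission-line theory, not something taken from the disclosure):

```python
def reflection_coefficient(z_term: float, z_line: float) -> float:
    """Fraction of an incident wave reflected at a termination impedance."""
    return (z_term - z_line) / (z_term + z_line)

# A 50-ohm line terminated with 50 ohms reflects nothing; leaving it
# effectively unterminated (very high impedance) reflects almost everything.
assert reflection_coefficient(50.0, 50.0) == 0.0
assert reflection_coefficient(1e9, 50.0) > 0.999
```

This is why the programmable devices described below aim to present a matched impedance at the terminal during reads and idle states.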
[0009] Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and others.

[0010] An example embodiment of a computing device 100 is shown in FIG. 1. The computing device 100 may comprise one or more processors 102 coupled to a chipset 104 via a processor bus 106. The chipset 104 may comprise one or more integrated circuit packages or chips to couple the processors 102 to system memory 108 and other devices 110 (e.g., a mouse, keyboard, video controller, hard disk, floppy disk, firmware, etc.). The chipset 104 may comprise a processor bus interface 112 to access the processor bus 106, a memory controller 114 to access the system memory 108, and one or more device interfaces 116 to access devices 110. In other embodiments, the processors 102 may comprise all or a portion of the memory controller 114. The processor bus interface 112 may decode processor bus transactions issued by the processors 102 and may generate processor bus transactions on behalf of the memory controller 114 and/or the device interfaces 116. The device interfaces 116 provide interfaces to communicate with the devices 110 that are coupled to the chipset 104 via device buses 118 such as peripheral component interconnect (PCI) buses, accelerated graphics port (AGP) buses, universal serial bus (USB) buses, low pin count (LPC) buses, and/or other I/O buses.

[0011] The memory controller 114 may comprise one or more memory input/output (I/O) buffers 120 to send and receive data to and from the system memory 108 via memory bus lines 122 of a memory bus 124. The system memory 108 may be implemented with various volatile and non-volatile memory technologies such as, for example, flash memory, static memory (SRAM), dynamic memory (DRAM), double data rate memory (DDR), and RAMBUS memory. The memory controller 114 may further comprise write latches 126 to store data to be transferred to system memory 108 via the memory I/O buffers 120 and read latches 128 to store data received from the system memory 108 via the memory I/O buffers 120. The memory controller 114 may further comprise control logic 130 to control data transfers between the latches 126, 128 and the processor bus interface 112. The control logic 130 may further calibrate the memory I/O buffers 120 and may control transfers between the latches 126, 128 and the system memory 108 via the memory I/O buffers 120.

[0012] Referring now to FIG. 2, an embodiment of the memory controller 114 is shown. As depicted, the memory I/O buffer 120 of the memory controller 114 comprises an input buffer 200 that comprises a receiver 202 and an output buffer 204. The output buffer 204 and the receiver 202 are coupled to a memory bus line terminal 206 such as, for example, a memory bus line pad, contact, or pin to transfer data to and from system memory 108.
The input buffer 200 in one embodiment uses the output buffer 204 to terminate the terminal 206 during a memory read and/or an idle state so that the receiver 202 may accurately receive a data signal from the terminal 206 and provide the read latch 128 with the received data.

[0013] In one embodiment, the output buffer 204 comprises a programmable pull-up impedance device 208 that is coupled between a high voltage source VHIGH (e.g., 1.5 volts) and the terminal 206. The output buffer 204 further comprises a programmable pull-down impedance device 210 that is coupled between the terminal 206 and a low voltage source (e.g., ground). The pull-up device 208 comprises an impedance control input PUIMP to receive a pull-up control signal, and the pull-down device 210 comprises an impedance control input PDIMP to receive a pull-down control signal. In one embodiment, the impedance control inputs PUIMP, PDIMP each comprise multiple input lines to receive multi-bit control signals. In another embodiment, the impedance control inputs PUIMP, PDIMP each comprise a single input line to receive control signals having only two states. In yet another embodiment, the impedance control inputs PUIMP, PDIMP each comprise a single input line to receive encoded or serially transmitted control signals.

[0014] The pull-up device 208 is to disconnect the high voltage source VHIGH from the terminal 206 in response to being deactivated by the pull-up control signal. In one embodiment, the pull-up device 208 disconnects the high voltage source VHIGH from the terminal 206 by establishing a very high impedance between the high voltage source VHIGH and the terminal 206. Further, the pull-up device 208 is to pull the terminal 206 toward the high voltage source VHIGH in response to being activated by the pull-up control signal. In one embodiment, the pull-up device 208 pulls the terminal toward the high voltage source VHIGH by establishing a pull-up impedance between the high voltage source VHIGH and the terminal 206 that has a magnitude controlled by the pull-up control signal.

[0015] Similarly, the pull-down device 210 is to disconnect the low voltage source VLOW from the terminal 206 in response to being deactivated by the pull-down control signal. In one embodiment, the pull-down device 210 disconnects the low voltage source VLOW from the terminal 206 by establishing a very high impedance between the voltage source VLOW and the terminal 206. Further, the pull-down device 210 is to pull the terminal 206 toward the low voltage source VLOW in response to being activated by the pull-down control signal. In one embodiment, the pull-down device 210 pulls the terminal toward the low voltage source VLOW by establishing a pull-down impedance between the low voltage source VLOW and the terminal 206 that has a magnitude controlled by the pull-down control signal.

[0016] The memory controller 114 further comprises an impedance control 212 to control the impedance of the pull-up and pull-down devices 208, 210. In one embodiment, the impedance control 212 comprises a data input D to receive a data signal that is indicative of data to be written to system memory 108 and a write input W/RI to receive a write signal or a read signal that indicates whether to configure the memory I/O buffer 120 for a memory write or a memory read.
The impedance control 212 may further comprise a write impedance input WIMP to receive a write control signal that indicates the programmable impedance of the pull-up and pull-down devices 208, 210 during a memory write. The impedance control 212 may also comprise a read impedance input RIMP to receive a read control signal that indicates the programmable impedance of the pull-up and pull-down devices 208, 210 during a memory read or idle state.

[0017] The impedance control 212 may further comprise a pull-up control output PUCTL coupled to the impedance control input PUIMP of the pull-up device 208. In one embodiment, the impedance control 212 generates on the pull-up control output PUCTL a pull-up control signal that is dependent upon data signals, write signals, write control signals, and read control signals received by its data input D, write input W/RI, write impedance input WIMP, and read impedance input RIMP. The impedance control 212 may also comprise a pull-down control output PDCTL coupled to the impedance control input PDIMP of the pull-down device 210. In one embodiment, the impedance control 212 generates on the pull-down control output PDCTL a pull-down control signal that is dependent upon data signals, write signals, write control signals, and read control signals received by its data input D, write input W/RI, write impedance input WIMP, and read impedance input RIMP.

[0018] The control logic 130 of the memory controller 114 may comprise an impedance calibration unit 214 to provide read control signals and write control signals to the impedance control 212 via its read control output RCTL and its write control output WCTL. The impedance calibration unit 214 may comprise one or more environment inputs EIN to receive one or more environmental parameters from which the impedance calibration unit 214 may adjust the read control signals and the write control signals. The impedance calibration unit 214 may utilize various techniques to adjust the read control signals and write control signals based upon environmental signals of the environmental inputs EIN. For example, in one embodiment, the impedance calibration unit 214 may receive temperature signals, voltage signals, and/or silicon process signals from sensors, configuration registers, or other devices and may adjust the read and write control signals based upon the received signals.

[0019] In another embodiment, the impedance calibration unit 214 may receive signals as a result of a calibration resistor RCOMP and a reference voltage VREF being coupled to the environmental inputs EIN. The impedance calibration unit 214 may obtain a pull-up calibration value and a pull-down calibration value by selectively switching on transistors of the impedance calibration unit 214 until a predetermined relationship to the calibration resistor RCOMP and the reference voltage VREF is obtained. See U.S. Patent 6,347,850, "Programmable Buffer Circuit", filed 23 December 1999, for an implementation of an impedance calibration unit 214 that obtains a pull-up calibration value and a pull-down calibration value based upon the effective resistance of a calibration resistor RCOMP and a reference voltage VSWING. However, it should be appreciated that other known calibration techniques may be used to compensate for process, voltage, and/or temperature variations.
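By way of illustration only, the calibration search of paragraph [0019] can be sketched behaviorally. The sketch below is a minimal model under assumed values: the leg resistances, the RCOMP value, and the comparator reference are hypothetical, and the linear code sweep stands in for whatever search the hardware actually performs.

```python
# Behavioral sketch of paragraph [0019]: switch on replica legs until the
# divider formed against RCOMP reaches the reference voltage. All component
# values below are illustrative assumptions, not values from the text.

LEG_OHMS = [400.0, 200.0, 100.0, 50.0]  # binary-progression legs (assumed)
RCOMP = 100.0                            # external calibration resistor (assumed)
VHIGH = 1.5                              # high voltage source from the text
VREF = VHIGH / 2                         # assumed comparator reference

def leg_resistance(code: int) -> float:
    """Parallel resistance of the legs enabled by the bits of `code`."""
    conductance = sum(1.0 / r for i, r in enumerate(LEG_OHMS) if code & (1 << i))
    return float("inf") if conductance == 0 else 1.0 / conductance

def calibrate() -> int:
    """Turn on legs until the divider against RCOMP reaches VREF."""
    for code in range(1, 1 << len(LEG_OHMS)):
        r = leg_resistance(code)
        v_pad = VHIGH * RCOMP / (RCOMP + r)  # pull-up legs vs. RCOMP divider
        if v_pad >= VREF:                    # predetermined relationship met
            return code
    return (1 << len(LEG_OHMS)) - 1

if __name__ == "__main__":
    code = calibrate()
    print(f"pull-up calibration value: {code:04b}, "
          f"leg resistance ~{leg_resistance(code):.0f} ohms")
```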
[0020] The impedance calibration unit 214 may further comprise a calibration table 216 of control values from which the impedance calibration unit 214 may generate the write control signals and the read control signals. The impedance calibration unit 214 may index the calibration table 216 with index values derived from the parameter signals of the environment inputs EIN to receive control values that account for process, voltage, and/or temperature variations. In one embodiment, the calibration table 216 contains write pull-up values and read pull-up values that are indexed to pull-up calibration values derived from the calibration resistor RCOMP and the voltage reference VREF. Further, the calibration table 216 contains write pull-down values and read pull-down values that are indexed to pull-down calibration values derived from the calibration resistor RCOMP and the voltage reference VREF. It should be appreciated that the control values may be indexed to other values that account for process, voltage, and/or temperature variations.

[0021] As depicted, the memory controller 114 comprises a single memory I/O buffer 120. However, in other embodiments, the memory controller 114 may comprise a separate memory I/O buffer 120 for each memory bus line 122 or group of memory bus lines 122. Further, the memory controller 114 may comprise a separate impedance control 212 and/or a separate impedance calibration unit 214 for each memory I/O buffer 120. Such embodiments enable separately programming the impedances of the memory I/O buffers 120.

[0022] In FIG. 3, an embodiment of the impedance control 212 and the output buffer 204 is shown. As illustrated, the output buffer 204 may comprise a set of p-channel MOSFETs 300 arranged in parallel between the high voltage source VHIGH and the terminal 206 and a set of n-channel MOSFETs 302 arranged in parallel between the low voltage source VLOW and the terminal 206. The number and values of the p-channel MOSFETs 300 that are turned on when the pull-up device 208 is activated determine the impedance established between the high voltage source VHIGH and the terminal 206. Similarly, the number and values of the MOSFETs that are turned on when the pull-down device 210 is activated determine the impedance established between the low voltage source VLOW and the terminal 206. In one embodiment, the MOSFETs 300, 302 are sized in a binary progression to allow a wide range of impedance programming (e.g., between 25 and 500 ohms) and with a sufficient number to obtain a sufficiently small granularity (e.g., about 1.5 ohms). As depicted, the pull-up device 208 of the output buffer 204 comprises four p-channel MOSFETs 300 and the pull-down device 210 comprises four n-channel MOSFETs 302. However, in other embodiments, the pull-up device 208 and the pull-down device 210 may comprise other numbers of switching devices (e.g., MOSFETs, JFETs, etc.). Further, in other embodiments, the pull-up device 208 may comprise fewer or more switching devices than the pull-down device 210.
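To make the binary-progression sizing of paragraph [0022] concrete, the sketch below maps a 4-bit enable code to a programmed impedance. The base leg resistance is an assumption chosen to land near the 25-500 ohm range mentioned in the text; it is not a value from the specification, and with only the four legs depicted the achievable steps are coarse — the text notes that more legs yield finer granularity.

```python
# Sketch of how binary-progression device sizing maps an enable code to a
# programmed impedance. Leg i is modeled as a resistance of R_UNIT / 2**i,
# so enabled legs add conductance in binary weights. R_UNIT is assumed.

R_UNIT = 500.0  # resistance of the smallest (weakest) leg, assumed
N_LEGS = 4      # four legs, as depicted for the pull-up device 208

def programmed_impedance(code: int) -> float:
    """Impedance of the legs enabled by `code`."""
    g = sum((2 ** i) / R_UNIT for i in range(N_LEGS) if code & (1 << i))
    return float("inf") if g == 0 else 1.0 / g

# Every impedance reachable with a 4-bit code, from strongest to weakest.
table = {code: programmed_impedance(code) for code in range(1, 1 << N_LEGS)}
for code, ohms in sorted(table.items(), key=lambda kv: kv[1]):
    print(f"code {code:04b} -> {ohms:6.1f} ohms")
```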
[0023] As illustrated, the impedance control 212 comprises a pull-up multiplexer 304 and a pull-down multiplexer 306. The pull-up multiplexer 304 comprises AND gates 308, 310 and NOR gates 312, and the pull-down multiplexer 306 comprises AND gates 314, 316 and OR gates 318. However, it should be appreciated that other embodiments may implement the impedance control 212 differently. The pull-up multiplexer 304 generates a pull-up control signal that selectively turns on zero or more of the p-channel MOSFETs 300, and the pull-down multiplexer 306 generates a pull-down control signal that selectively turns on zero or more of the n-channel MOSFETs 302. In one embodiment, the pull-up multiplexer 304 generates the pull-up control signal based upon a data signal of the data input D, a write signal of the write input W/RI, a pull-up portion WPU[0:3] of the write control signal received on write impedance inputs WIMP, and a pull-up portion RPU[0:3] of the read control signal received on read impedance inputs RIMP. Similarly, the pull-down multiplexer 306 generates the pull-down control signal based upon the data signal of the data input D, the write signal of the write input W/RI, a pull-down portion WPD[0:3] of the write control signal received on write impedance input WIMP, and a pull-down portion RPD[0:3] of the read control signal received on read impedance inputs RIMP.

[0024] In one embodiment, the impedance control 212 and the memory I/O buffer 120 operate in a write mode in response to the value of the write input W/RI being HIGH. As can be seen, when the value of the data input D is LOW and the value of the write input W/RI is HIGH, the output of each AND gate 308, 310 is LOW, thus causing the output of each NOR gate 312 to be HIGH. As a result of the output of the NOR gate 312 being HIGH, each p-channel MOSFET 300 is turned off and the pull-up device 208 is deactivated. Further, when the value of the data input D is LOW and the value of the write input is HIGH, the output of each AND gate 314 and therefore the output of each OR gate 318 is dependent upon the state of a corresponding bit of the write pull-down portion WPD[0:3]. In particular, if a bit of the write pull-down portion WPD[0:3] is HIGH, the corresponding output of the OR gate 318 is HIGH, thus activating the pull-down device 210 by turning on the corresponding n-channel MOSFET 302. Conversely, if a bit of the write pull-down portion WPD[0:3] is LOW, the corresponding output of the OR gate 318 is LOW, thus turning off the corresponding n-channel MOSFET 302.

[0025] Similarly, when the value of the data input D is HIGH and the value of the write input W/RI is HIGH, the output of each AND gate 314, 316 is LOW, thus causing the output of each OR gate 318 to be LOW. As a result of the output of the OR gate 318 being LOW, each n-channel MOSFET 302 is turned off and the pull-down device 210 is deactivated. Further, when the value of the data input D is HIGH and the value of the write input is HIGH, the output of each AND gate 308 and therefore the output of each NOR gate 312 is dependent upon the state of a corresponding bit of the write pull-up portion WPU[0:3]. In particular, if a bit of the write pull-up portion WPU[0:3] is HIGH, the corresponding output of the NOR gate 312 is LOW, thus activating the pull-up device 208 by turning on the corresponding p-channel MOSFET 300. Conversely, if a bit of the write pull-up portion WPU[0:3] is LOW, the corresponding output of the NOR gate 312 is HIGH, thus turning off the corresponding p-channel MOSFET 300.
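The selection performed by the multiplexers in paragraphs [0024] and [0025], together with the read/idle behavior described next in paragraphs [0026] and [0027], reduces to a small behavioral model. The sketch below abstracts the AND/NOR/OR network into plain conditionals (active-high for simplicity) and is illustrative only, not the gate-level circuit of FIG. 3.

```python
# Behavioral sketch of the pull-up multiplexer 304 and pull-down
# multiplexer 306: which device legs are enabled for a given data input D,
# write/read input W/RI, and the four control-signal portions.

def impedance_control(d: bool, write: bool,
                      wpu: int, wpd: int, rpu: int, rpd: int):
    """Return (pull-up enable bits, pull-down enable bits).

    write=True  -> drive the bus: one device active, per WPU/WPD.
    write=False -> read/idle: both devices active, per RPU/RPD,
                   irrespective of the data input D.
    """
    if write:
        return (wpu, 0) if d else (0, wpd)
    return (rpu, rpd)

# Writing a HIGH bit: pull-up legs on, pull-down off.
assert impedance_control(True, True, 0b0101, 0b0011, 0, 0) == (0b0101, 0)
# Writing a LOW bit: pull-down legs on, pull-up off.
assert impedance_control(False, True, 0b0101, 0b0011, 0, 0) == (0, 0b0011)
# Read/idle: both devices terminate the line with the read codes.
assert impedance_control(True, False, 0, 0, 0b1000, 0b1000) == (0b1000, 0b1000)
```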
[0026] In one embodiment, the impedance control 212 and the memory I/O buffer 120 operate in a read mode and/or idle mode in response to the value of the write input W/RI being LOW. As can be seen, when the value of the write input W/RI is LOW, irrespective of the value of the data input D, the output of each AND gate 310 and therefore the output of each NOR gate 312 is dependent upon the state of a corresponding bit of the read pull-up portion RPU[0:3]. In particular, if a bit of the read pull-up portion RPU[0:3] is HIGH, the corresponding output of the NOR gate 312 is LOW, thus activating the pull-up device 208 by turning on the corresponding p-channel MOSFET 300. Conversely, if a bit of the read pull-up portion RPU[0:3] is LOW, the corresponding output of the NOR gate 312 is HIGH, thus turning off the corresponding p-channel MOSFET 300.

[0027] Similarly, when the value of the write input W/RI is LOW, irrespective of the value of the data input D, the output of each AND gate 316 and therefore the output of each OR gate 318 is dependent upon the state of a corresponding bit of the read pull-down portion RPD[0:3]. In particular, if a bit of the read pull-down portion RPD[0:3] is HIGH, the corresponding output of the OR gate 318 is HIGH, thus activating the pull-down device 210 by turning on the corresponding n-channel MOSFET 302. Conversely, if a bit of the read pull-down portion RPD[0:3] is LOW, the corresponding output of the OR gate 318 is LOW, thus turning off the corresponding n-channel MOSFET 302.

[0028] Referring now to FIG. 4, operation of an embodiment of the memory controller 114 is depicted. In block 400, the impedance calibration unit 214 adjusts a write control signal and a read control signal to account for process, voltage, and/or temperature variations. In one embodiment, the write control signal comprises a write pull-up portion WPU[0:3] and a write pull-down portion WPD[0:3] to respectively control the impedance of the pull-up device 208 and the pull-down device 210 during a memory write. Similarly, in one embodiment, the read control signal comprises a read pull-up portion RPU[0:3] and a read pull-down portion RPD[0:3] to respectively control the impedance of the pull-up device 208 and the pull-down device 210 during a memory read and/or an idle state.

[0029] In block 402, the control logic 130 provides the write impedance input WIMP of the impedance control 212 with the write control signal comprising the write pull-up portion WPU[0:3] and the write pull-down portion WPD[0:3]. Similarly, the control logic 130 in block 404 provides the read impedance input RIMP of the impedance control 212 with the read control signal comprising the read pull-up portion RPU[0:3] and the read pull-down portion RPD[0:3].

[0030] In block 406, the control logic 130 determines whether to perform a memory write based upon signals received from the processor bus interface 112 and the state of the memory bus 124. In response to determining to perform a memory write, the control logic 130 in block 408 provides the write input W/RI of the impedance control 212 with a HIGH write signal to indicate a memory write. Conversely, the control logic 130 in block 410 provides the write input W/RI of the impedance control 212 with a LOW write signal to indicate a memory read and/or an idle state in response to determining not to perform a memory write.

[0031] The impedance control 212 in block 412 activates either the pull-up device 208 or the pull-down device 210 to drive a data signal on the memory bus line 122.
In one embodiment, the impedance control 212, in response to its data input D being HIGH, provides the impedance control input PUIMP of the output buffer 204 with a pull-up control signal that activates the pull-up device 208 with an impedance specified by the write pull-up portion WPU[0:3] of its write impedance input WIMP and provides the impedance control input PDIMP of the output buffer 204 with a pull-down control signal that deactivates the pull-down device 210. Similarly, in one embodiment, the impedance control 212, in response to its data input D being LOW, provides the impedance control input PDIMP of the output buffer 204 with a pull-down control signal that activates the pull-down device 210 with an impedance specified by the write pull-down portion WPD[0:3] of its write impedance input WIMP and provides the impedance control input PUIMP of the output buffer 204 with a pull-up control signal that deactivates the pull-up device 208.

[0032] The output buffer 204 in block 414 drives a data signal upon the memory bus line 122 via the terminal 206. In one embodiment, the output buffer 204 pulls the memory bus line 122 toward the high voltage source VHIGH via the programmed impedance of the pull-up device 208 to drive a HIGH data signal on the memory bus line 122 in response to the pull-up device 208 being activated and the pull-down device 210 being deactivated. Similarly, the output buffer 204 pulls the memory bus line 122 toward the low voltage source VLOW via the programmed impedance of the pull-down device 210 to drive a LOW data signal on the memory bus line 122 in response to the pull-down device 210 being activated and the pull-up device 208 being deactivated.

[0033] In response to determining not to perform a memory write, the impedance control 212 in block 416 activates and controls the impedance of both the pull-up device 208 and the pull-down device 210 to terminate the memory bus line 122 during a memory read and/or idle state. In one embodiment, the impedance control 212, in response to its write input W/RI being LOW, provides the impedance control input PUIMP of the output buffer 204 with a pull-up control signal that activates the pull-up device 208 with an impedance specified by the read pull-up portion RPU[0:3] of its read impedance input RIMP. Further, the impedance control 212, in response to its write input W/RI being LOW, provides the impedance control input PDIMP of the output buffer 204 with a pull-down control signal that activates the pull-down device 210 with an impedance specified by the read pull-down portion RPD[0:3] of its read impedance input RIMP.

[0034] The output buffer 204 in block 418 terminates the memory bus line 122 based upon the received pull-up and pull-down control signals. In one embodiment, the output buffer 204 pulls the memory bus line 122 toward the high voltage source VHIGH via the programmed impedance of the pull-up device 208 and pulls the memory bus line 122 toward the low voltage source VLOW via the programmed impedance of the pull-down device 210. Accordingly, the programmed impedances of the pull-up and pull-down devices 208, 210 combine to terminate the memory bus line 122. For example, the pull-up device 208 may establish a 400 ohm impedance between the high voltage source VHIGH and the terminal 206 and the pull-down device 210 may establish a 400 ohm impedance between the low voltage source VLOW and the terminal 206, thereby establishing a 200 ohm read termination impedance between the terminal 206 and the voltage sources VHIGH, VLOW.
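The 400/400 to 200 ohm example above is simply the parallel combination of the two programmed impedances (a Thevenin-style termination); a one-function sketch makes the arithmetic explicit.

```python
# The termination arithmetic from the example in paragraph [0034]: a
# pull-up impedance to VHIGH in parallel with a pull-down impedance to
# VLOW presents their parallel combination to the memory bus line.

def termination(r_pull_up: float, r_pull_down: float) -> float:
    """Effective termination impedance seen at the terminal."""
    return (r_pull_up * r_pull_down) / (r_pull_up + r_pull_down)

assert termination(400.0, 400.0) == 200.0  # the 400/400 -> 200 ohm example
```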
[0035] While certain features of the invention have been described with reference to example embodiments, the description is not intended to be construed in a limiting sense. Various modifications of the example embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains, are deemed to lie within the spirit and scope of the invention. |
Embodiments of the invention provide dielectric films and low-k dielectric films and methods for making dielectric and low-k dielectric films. Dielectric films are made from carbosilane-containing precursors. In embodiments of the invention, dielectric film precursors comprise attached porogen molecules. In further embodiments, dielectric films have nanometer-dimensioned pores. |
CLAIMS

We claim:

1. A device comprising, a substrate, a dielectric film disposed on the substrate, wherein the dielectric film is comprised of crosslinked cyclic carbosilanes wherein a cyclic carbosilane has a ring structure comprising carbon and silicon, and wherein the dielectric film is hydrophobic.

2. The device of claim 1 wherein the dielectric film has a k value of from 1.6 to 2.5.

3. The device of claim 1 wherein the dielectric film comprises between 45 and 60 atomic percent C, between 25 and 35 atomic percent Si, and between 10 and 20 atomic percent O.

4. The device of claim 1 wherein the substrate additionally comprises the components of an integrated circuit device and the dielectric film is between at least two components of the integrated circuit device.

5. The device of claim 1 wherein the dielectric film additionally comprises a reacted photo acid generator, a reacted photo base generator, a reacted thermally-activated acid generator, or a reacted thermally-activated base generator.

6. The device of claim 1 wherein the dielectric film is porous and the porosity of the film is in the range of 5 % to 60 %.

7. The device of claim 1 wherein the dielectric film is porous and the porosity of the film is in the range of 35 % to 50 %.

8. The device of claims 6 or 7 wherein the pores of the dielectric film have dimensions that are between 0.25 nm and 2 nm.

9. The device of claims 6 or 7 wherein the film is chemically stable.

10. A device comprising, a substrate, a dielectric film disposed on the substrate, wherein the dielectric film is comprised of crosslinked cyclic carbosilanes wherein a cyclic carbosilane has a ring structure comprising carbon and silicon, and wherein the dielectric film is porous and the porosity is in the range of 25 % to 60 %.

11. The device of claim 10 wherein the pores of the dielectric film have dimensions that are between 0.25 nm and 2 nm.

12. The device of claim 10 wherein the dielectric film has a k value of from 1.6 to 2.5.

13. The device of claim 10 wherein the dielectric film is porous and the porosity of the film is in the range of 35 % to 50 %.

14. The device of claim 10 wherein the dielectric film comprises between 45 and 60 atomic percent C, between 25 and 35 atomic percent Si, and between 10 and 20 atomic percent O.

15. The device of claim 10 wherein the substrate additionally comprises the components of an integrated circuit device and the dielectric film is between at least two components of the integrated circuit device.

16. The device of claim 10 wherein the film is chemically stable.

17. A method for making a dielectric film comprising, providing a substrate having a surface, depositing a mixture comprising oligomers of cyclic carbosilanes, wherein a cyclic carbosilane has a ring structure comprising carbon and silicon, and a polymerization initiator, wherein the polymerization initiator is selected from the group consisting of photo acid generators, photo base generators, thermally-activated acid generators, and thermally-activated base generators, onto the substrate surface, and exposing the substrate to light or heat, causing the photo acid generator, photo base generator, thermally-activated acid generator, or thermally-activated base generator to be activated, producing an acid or a base and causing the mixture to solidify.

18. The method of claim 17 wherein the oligomers comprise between 3 and 10 cyclic carbosilane units.

19. The method of claim 17 wherein the oligomers of cyclic carbosilanes are linear oligomers.
20. The method of claim 17 wherein the oligomers of cyclic carbosilanes are branched oligomers.

21. The method of claim 17 wherein the mixture also comprises a porogen molecule that has attached carbosilanes.

22. The method of claim 17 wherein the method also includes spinning the substrate to distribute the mixture across the substrate surface before exposing the substrate to light or heat. |
CYCLIC CARBOSILANE DIELECTRIC FILMS

BACKGROUND OF THE INVENTION

FIELD OF THE INVENTION

The embodiments of the invention relate generally to semiconductor processing and manufacture, integrated circuits, dielectric materials, interlayer dielectric materials, spin-on dielectric materials, and materials comprising cyclic carbosilanes.

BACKGROUND INFORMATION

The desire for ever-smaller integrated circuit (IC) devices places enormous performance demands on the techniques and materials used to construct IC devices. In general, an integrated circuit chip is also known as a microchip, a silicon chip, or a chip. IC chips are found in a variety of common devices, such as the microprocessors in computers, cars, televisions, CD players, and cellular phones. A plurality of IC chips are typically built on a silicon wafer (a thin silicon disk, having a diameter, for example, of 300 mm), and after processing the wafer is diced apart to create individual chips. A 1 cm2 IC chip having feature sizes of about 90 nm can comprise hundreds of millions of components. Current technologies are pushing feature sizes even smaller than 45 nm.

BRIEF DESCRIPTION OF THE FIGURES

FIGURE 1 shows cyclic carbosilane precursors useful for making dielectric films and low-k dielectric films.

FIGURE 2 illustrates a method for the synthesis of cyclic carbosilane precursors useful for making dielectric films and low-k dielectric films.

FIGURE 3 shows an additional cyclic carbosilane precursor useful for making dielectric films and low-k dielectric films.

FIGURE 4 illustrates a method for the synthesis of additional cyclic carbosilane precursors useful for making dielectric films and low-k dielectric films.

FIGURES 5A-C illustrate cyclic carbosilane precursor molecules useful for making dielectric films and low-k dielectric films.

FIGURES 6A-C show additional cyclic carbosilane precursor molecules useful for making dielectric films and low-k dielectric films.

FIGURE 7 provides a cyclic carbosilane-attached porogen molecule.

FIGURE 8 shows the acid or base catalyzed polymerization of cyclic carbosilane molecules.

FIGURE 9 illustrates a synthesis scheme for making dielectric films and low-k dielectric films.

FIGURE 10 describes a method for making dielectric films and low-k dielectric films.

DETAILED DESCRIPTION OF THE INVENTION

Embodiments of the invention provide dielectric films for integrated circuits. Cyclic carbosilane precursors are capable of providing films with small dielectric constants, and the cyclic carbosilane precursors are useful in semiconductor processing applications. Dielectric films according to embodiments of the invention are useful in a variety of applications for integrated circuit devices. For example, the films described herein are useful as dielectric films, low-k dielectric films, spin-on dielectric films, interlayer dielectric films (ILDs; intermetal dielectric films, or IMDs), and etch-selective layers.

Figure 1 illustrates linear oligomers of cyclic carbosilane molecules that are useful as precursors for making dielectric films and low-k dielectric films. In Figure 1, R is a functional group, such as, for example, an alkyl group comprising hydrogen atoms and from 1 to 10 carbon atoms or from 1 to a large number of carbon atoms. In addition, R also optionally comprises oxygen atoms, nitrogen atoms, sulfur atoms, chlorine atoms, and/or fluorine atoms. The functional group R is a group such as, for example, -CH3, -CH2CH3, -CH2CH2CH3, -CH2CH2CH2CH3, -CH2CH2CH2CH2CH3, -CH2CH(CH3)2, -CH2CH2CH(CH3)2, -CH2CH2CH(CH2CH3)2, -CH2OCH3, -CH2CH2OCH3, and others. In embodiments of the invention, the R group is less than 50 % larger than the size of the porogen molecule chosen. In Figure 1, m is a number from 1 to 10. In embodiments of the invention, m is a number from 3 to 10. Other values for m are also possible, such as larger numbers. Further, one or two of the carbon atoms (i.e., -CH2- groups) in the cyclic carbosilane molecules is optionally replaced with an oxygen atom. The carbosilane oligomer composition that is used to create a dielectric film is typically a mixture of different oligomers having different lengths (different numbers of cyclic carbosilane units), so that m represents an average oligomer length for the molecules present in the mixture.

Figure 2 provides a synthesis scheme for oligomers of cyclic carbosilane molecules that are useful as precursors for making dielectric films and low-k dielectric films. The cyclic carbosilane monomer is functionalized with crosslinking groups and then crosslinked with carbosilane monomers. Although, in Figure 2, ethyl (-Et) functional groups are shown, other alkyl groups are also possible, such as, for example, an alkyl group comprising hydrogen atoms and from 1 to 10 carbon atoms or from 1 to a large number of carbon atoms. In addition, R also optionally comprises oxygen atoms, nitrogen atoms, sulfur atoms, chlorine atoms, and/or fluorine atoms. The functional group R is a group such as, for example, -CH3, -CH2CH3, -CH2CH2CH3, -CH2CH2CH2CH3, -CH2CH2CH2CH2CH3, -CH2CH(CH3)2, -CH2CH2CH(CH3)2, -CH2CH2CH(CH2CH3)2, -CH2OCH3, -CH2CH2OCH3, and others. In embodiments of the invention, the R group is less than 50 % larger than the size of the porogen molecule chosen. Further, one or two of the carbon atoms of the cyclic carbosilanes is optionally replaced with an oxygen atom. In Figure 2, scheme (1), molecule I (in this case, 1,3,5-triethoxy-1,3,5-trimethyl-1,3,5-trisilacyclohexane) is reacted with t-butyl lithium and then subsequently Me2SiHCl to form molecule II, in which one of the cyclic carbosilane ring carbons has been silanated. Molecule II is then reacted with molecule I in the presence of B(C6F5)3 to yield a mixture of oligomers in which m is a function of the number of equivalents of molecule II used, such that m = n - 1. The cyclic carbosilane oligomer composition produced by the method of Figure 2 is often a mixture of different oligomers having different lengths, so that m represents an average oligomer length for the molecules present in the mixture.
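Because the composition is a mixture, the statement that m represents an average oligomer length can be read as a number-average over the oligomer population. The short sketch below illustrates that computation; the example length distribution is hypothetical, not data from the specification.

```python
# Number-average oligomer length for a mixture of cyclic carbosilane
# oligomers, where `counts` maps oligomer length m to molecule count.

def average_length(counts: dict[int, int]) -> float:
    """Number-average m for {oligomer length m: molecule count}."""
    total = sum(counts.values())
    return sum(m * n for m, n in counts.items()) / total

mixture = {3: 20, 4: 50, 5: 25, 6: 5}  # assumed length distribution
print(f"average m ~ {average_length(mixture):.2f}")  # ~4.15
```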
Figure 3 provides an additional oligomeric cyclic carbosilane precursor useful for making dielectric and low-k dielectric films. The molecule of Figure 3 is a branched oligomer. In Figure 3, R is a functional group, such as, for example, an alkyl group comprising hydrogen atoms and from 1 to 10 carbon atoms or from 1 to a large number of carbon atoms. In addition, R also optionally comprises oxygen atoms, nitrogen atoms, sulfur atoms, chlorine atoms, and fluorine atoms. The functional group R is a group such as, for example, -CH3, -CH2CH3, -CH2CH2CH3, -CH2CH2CH2CH3, -CH2CH2CH2CH2CH3, -CH2CH(CH3)2, -CH2CH2CH(CH3)2, -CH2CH2CH(CH2CH3)2, -CH2OCH3, -CH2CH2OCH3, and others. In embodiments of the invention, the R group is less than 50 % larger than the size of the porogen molecule chosen.
Further, one or two of the carbon atoms of the cyclic carbosilanes is optionally replaced with an oxygen atom. In additional embodiments, there are, for example, 1, 2, or 3 modified cyclic carbosilane groups around the central cyclic carbosilane group. Different oligomers comprising different numbers of cyclic carbosilane groups are possible.

Figure 4 illustrates methods for synthesizing a branched oligomeric cyclic carbosilane precursor. Although, in Figure 4, ethyl (-Et) functional groups are shown, other alkyl groups are also possible, such as, for example, an alkyl group comprising hydrogen atoms and from 1 to 10 carbon atoms or from 1 to a large number of carbon atoms. In addition, R also optionally comprises oxygen atoms, nitrogen atoms, sulfur atoms, chlorine atoms, and/or fluorine atoms. The functional group R is a group such as, for example, -CH3, -CH2CH3, -CH2CH2CH3, -CH2CH2CH2CH3, -CH2CH2CH2CH2CH3, -CH2CH(CH3)2, -CH2CH2CH(CH3)2, -CH2CH2CH(CH2CH3)2, -CH2OCH3, -CH2CH2OCH3, and others. In embodiments of the invention, the R group is less than 50 % larger than the size of the porogen molecule chosen. Further, one or two of the carbon atoms of the cyclic carbosilanes is optionally replaced with an oxygen atom. In Figure 4, two different methods of dendrimeric cyclic carbosilane precursor synthesis are shown. In Figure 4, molecule Ia is reacted with t-butyl lithium to form molecule IIa. Molecule Ib is reacted with three equivalents of IIa to condense the molecules into molecule III. Alternately, in Figure 4, molecule Ib is reacted with three equivalents of molecule IIb in the presence of SiMe2HCl and B(C6F5)3 in toluene to make molecule III.

Figures 5A-C provide additional useful dielectric film precursor molecules that have attached porogens (pore-creating functional groups). In Figure 5A, a cyclic carbosilane ring comprises a porogen functional group, X, linked to a silicon of the carbosilane ring through a linker group, L. In Figure 5B, the cyclic carbosilane ring comprises two porogen functional groups, X, linked to silicon atoms of the carbosilane ring through a linker group, L. In Figure 5C, the cyclic carbosilane ring comprises three porogen functional groups, X, linked to silicon atoms of the carbosilane ring through a linker group, L. In alternate embodiments, one or two of the carbon atoms (i.e., -CH2- groups) of the cyclic carbosilane ring is replaced with an oxygen atom.

In an embodiment of the invention, porogen functional groups have dimensions (widths, lengths, and heights or radii) that are from 0.25 nm to 2 nm. In alternate embodiments, the porogen functional groups have dimensions that are from 0.25 nm to 0.5 nm or from 0.5 nm to 5 nm. Pore sizes in the resulting films have dimensions (widths, lengths, and heights or radii, depending on the shape of the pore) that are from 0.25 nm to 2 nm (or from 0.25 nm to 0.5 nm or from 0.5 nm to 5 nm), depending on the porogen group chosen. Further, porogen groups decompose (upon heating, UV curing, or electron beam curing, for example) with approximately 100 % volatile yield (approximately indicating 80 % ± 20 %). Porogen functional groups are, for example, cyclodextrins, polyethylene oxides, polystyrenes, polyacrylates, or poly-alpha-methylstyrenes. Linker groups are carbon-containing groups containing hydrogen and carbon atoms. Linker groups also optionally contain oxygen atoms. Linkers include groups such as, for example, -CH2-, -OCH2-, -CH2O-, -CH2CH2-, -CH2OCH2-, -CH2(CH3)CH2-.
The functional group labeled R in Figures 5A-C is an alkyl group comprising hydrogen atoms and from 1 to 10 carbon atoms or from 1 to a large number of carbon atoms. In addition, R also optionally comprises oxygen atoms, nitrogen atoms, sulfur atoms, chlorine atoms, and/or fluorine atoms. The functional group R is a group such as, for example, -CH3, -CH2CH3, -CH2CH2CH3, -CH2CH2CH2CH3, -CH2CH2CH2CH2CH3, -CH2CH(CH3)2, -CH2CH2CH(CH3)2, -CH2CH2CH(CH2CH3)2, -CH2OCH3, -CH2CH2OCH3, and others. In embodiments of the invention, the R group is less than 50 % larger than the size of the porogen molecule chosen.

Figures 6A-C provide further additional useful dielectric film precursor molecules that have attached porogens (pore-creating functional groups). In Figure 6A, a cyclic carbosilane ring comprises a porogen functional group, X, linked to a carbon of the carbosilane ring through a linker group, L. In alternate embodiments of the invention, one or two of the carbon atoms (-CH2- groups) of the cyclic carbosilane ring is replaced with an oxygen atom. In Figure 6B, the cyclic carbosilane ring comprises two porogen functional groups, X, linked to carbon atoms of the carbosilane ring through a linker group, L. In alternate embodiments of the invention, one of the carbon atoms (-CH2- groups) of the cyclic carbosilane ring is replaced with an oxygen atom. In Figure 6C, the cyclic carbosilane ring comprises three porogen functional groups, X, linked to carbon atoms of the carbosilane ring through a linker group, L.

In an embodiment of the invention, porogen functional groups have dimensions (widths, lengths, and heights or radii) that are from 0.25 nm to 2 nm. In alternate embodiments, the porogen functional groups have dimensions that are from 0.25 nm to 0.5 nm or from 0.5 nm to 5 nm. Pore sizes in the resulting films have dimensions (widths, lengths, and heights or radii, depending on the shape of the pore) that are from 0.25 nm to 2 nm (or from 0.25 nm to 0.5 nm or from 0.5 nm to 5 nm), depending on the porogen group chosen. Further, porogen groups decompose (upon heating, UV curing, or electron beam curing, for example) with approximately 100 % volatile yield (approximately indicating 80 % ± 20 %). Porogen functional groups are, for example, cyclodextrins, polyethylene oxides, polystyrenes, polyacrylates, or poly-alpha-methylstyrenes. Linker groups are carbon-containing groups containing hydrogen and carbon atoms. Linker groups also optionally contain oxygen atoms. Linkers include groups such as, for example, -CH2-, -OCH2-, -CH2O-, -CH2CH2-, -CH2OCH2-, -CH2(CH3)CH2-. The functional group labeled R in Figures 6A-C is an alkyl group comprising hydrogen atoms and from 1 to 10 carbon atoms or from 1 to a large number of carbon atoms. In addition, R also optionally comprises oxygen atoms, nitrogen atoms, sulfur atoms, chlorine atoms, and/or fluorine atoms. The functional group R is a group such as, for example, -CH3, -CH2CH3, -CH2CH2CH3, -CH2CH2CH2CH3, -CH2CH2CH2CH2CH3, -CH2CH(CH3)2, -CH2CH2CH(CH3)2, -CH2CH2CH(CH2CH3)2, -CH2OCH3, -CH2CH2OCH3, and others. In embodiments of the invention, the R group is less than 50 % larger than the size of the porogen molecule chosen.

Figure 7 provides a porogen molecule linked to a plurality of carbosilane rings. In Figure 7, the porogen molecule is an alpha cyclodextrin molecule comprising six attached cyclic carbosilanes. Each cyclic carbosilane is attached to one porogen molecule in this embodiment.
The carbosilane-linked porogen molecule of Figure 7 can be made, for example, by reacting the cyclodextrin with about 6 equivalents of molecule II of Figure 2 in the presence of B(C6F5)3 in toluene.

Figure 8 illustrates generally the acid or base catalyzed crosslinking of an exemplary cyclic carbosilane molecule. Through acid or base catalyzed reactions similar to the one illustrated in Figure 8, the liquid-phase carbosilane film precursors of Figures 1-7 become solidified films. In Figure 8, R is an alkyl functional group such as one described with respect to any one of Figures 1-6.

Figure 9 shows the formation of a dielectric film on a substrate. In Figure 9, a cyclic carbosilane precursor that is an oligomer of cyclic carbosilane units (molecule III in Figure 9) is mixed with a photo acid generator (PAG), a photo base generator (PBG), a thermally-activated acid generator (TAG), or a thermally-activated base generator (TBG), spun onto a substrate surface, such as a semiconductor wafer surface, and exposed to either heat or light to activate the acid- or base-producing compound. A photo acid generator or photo base generator is exposed to light to produce an acid or a base (respectively), and a thermally-activated acid or a thermally-activated base is exposed to heat to produce an acid or a base (respectively). Once the acid or base species is produced, crosslinking of the carbosilane precursors occurs and the film solidifies. In this manner a dielectric and/or a low-k dielectric film is produced. Cyclic carbosilane precursors according to Figures 1, 3, 5A-C, 6A-C, and 7 and mixtures thereof are useful for forming dielectric films in the method described in Figure 9. Mixtures of porogen-comprising precursors and non-porogen-containing precursors are used to generate films having desired porosities.

In general, exemplary photo acid generators are diaryliodonium and triarylsulfonium salts possessing weakly coordinating counter anions such as trifluoromethanesulfonate, nonafluorobutanesulfonate, hexafluorophosphate, tetrafluoroborate, para-toluenesulfonate, and others. Examples of neutral photoacid generators include those in the arylsulfonate family, such as phenyltrifluoromethanesulfonate, and those in the N-sulfonated amine and imide family, such as N-trifluoromethanesulfonatomaleimide. Other classes of compounds common in the photolithographic and photopolymerization fields are also useful in embodiments of the invention. Examples of photobase generators include amines protected with photodecomposable nitrobenzylcarbamate or other carbamate groups. Other classes of compounds common in the photolithographic and photopolymerization fields and used as PAGs and PBGs are also useful in embodiments of the invention.

Through the introduction of less stable substituents, the above-described photoacid and photobase generators can be tuned to also behave as thermal acid and thermal base generators, respectively. For example, sulfonium salts possessing two aryl substituents and one alkyl substituent can behave as thermal acid generators. Additionally, due to the thermal instability of carbamate towards the release of CO2, common photobase generators can also serve as thermal base generators in films. Typical temperatures for carbamate-containing TBGs are between 200 and 400 °C. Other photo- and thermally-activated acid and base generators are also possible.

Figure 10 describes a method for the formation of a spin-on-dielectric film.
In Figure 10, a mixture of a polymerization initiator and oligomerized carbosilane precursors is deposited onto a substrate surface. In an alternate embodiment, the mixture of the polymerization initiator and the oligomerized carbosilane precursors additionally comprises porogen-linked cyclic carbosilanes. In further embodiments, the mixture comprises a polymerization initiator and porogen-linked cyclic carbosilanes. In embodiments of the invention, oligomers comprise between 3 and 10 cyclic carbosilane units. The polymerization initiator is a photo acid generator, a photo base generator, a thermally-activated acid, or a thermally-activated base. The substrate is spun, distributing the film precursor mixture across the substrate surface. The polymerization initiator is then activated by exposing the substrate surface to light for photo-activated initiators or by heating the substrate surface for heat-activated initiators. Polymerization of the cyclic carbosilanes creates a solidified film.

Depending on the composition of the film precursors used, the resulting film has a porosity that is between 5 % and 60 %. In additional embodiments, the resulting film has a porosity that is between 25 % and 60 %, between 35 % and 50 %, or between 35 % and 45 %. In general, porosity is a measure of the space taken up by empty space (pores) in the material, and is described as a fraction of the volume of the empty space over the total volume of the material. The pores in the resulting films have dimensions that are from 0.25 nm to 2 nm. In alternate embodiments, the pores have dimensions that are from 0.25 nm to 0.5 nm or from 0.5 nm to 5 nm.

Additionally, the resulting films are hydrophobic. As used herein, hydrophobic means that the films do not absorb or adsorb significant amounts of water from the atmosphere. In embodiments of the invention, less than 5 % water uptake (as a volume of water taken up by the film to total volume of the film) is observed for the hydrophobic carbosilane films as measured by ellipsometric porosimetry in a saturated H2O atmosphere at room temperature (20 to 23.5 °C). In additional embodiments, less than 3 % water uptake or less than 1 % water uptake is observed for the hydrophobic carbosilane films as measured by ellipsometric porosimetry.

The dielectric constant (k) values for the carbosilane films range from 1.6 to 3.5. In additional embodiments, the dielectric constant (k) values for the carbosilane films are from 1.6 to 3.0, or from 1.6 to 2.5. Dielectric constant values are measured using a CV dot technique in which the film is deposited on a highly doped Si substrate and metallic dots are deposited on top of the film. The dielectric constant across the film is then measured. Additionally, films according to embodiments of the invention have percent compositions in the range of 45-60 % C, 25-35 % Si, and 10-20 % O (atomic percent).

Films according to embodiments of the invention are chemically stable. In general, chemical stability means that the film is significantly resistant to chemical degradation. For example, chemically stable films according to embodiments of the invention are resistant to degradation when a sample of the film is placed in a solution of 0.5 % HF (at 23 °C), 1.0 % KOH (at 50 °C), 15 % TMAH (tetramethylammonium hydroxide) (at 60 °C), or 30 % H2O2 (at 50 °C) for 10 minutes. Resistant to degradation means that 10 nm or less of film loss and 5 % or less change in refractive index is observed.
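The quantitative definitions above (porosity as a volume fraction, hydrophobicity as less than 5 % water uptake, and chemical stability as at most 10 nm of film loss and at most a 5 % refractive-index change) can be captured directly as predicates. The sketch below encodes only those stated thresholds; the sample inputs are hypothetical.

```python
# Predicates encoding the stated film criteria; thresholds are from the
# text above, example values are illustrative only.

def porosity(pore_volume: float, total_volume: float) -> float:
    """Porosity as the fraction of empty-space volume over total volume."""
    return pore_volume / total_volume

def is_hydrophobic(water_uptake_fraction: float) -> bool:
    """Hydrophobic: less than 5 % water uptake by volume."""
    return water_uptake_fraction < 0.05

def is_chemically_stable(film_loss_nm: float, ri_change_fraction: float) -> bool:
    """Resistant to degradation: <= 10 nm film loss and <= 5 % RI change."""
    return film_loss_nm <= 10.0 and ri_change_fraction <= 0.05

print(porosity(0.42, 1.0))             # a film in the 35-50 % porosity range
print(is_hydrophobic(0.01))            # 1 % uptake -> hydrophobic
print(is_chemically_stable(4.0, 0.02)) # within both degradation limits
```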
In general, a porogen molecule or functional group is a molecule or functional group that is present in the precursor film and is capable of creating pores in the final film. Typically the porogen molecule is removed from the final film through heating, although other methods are possible. Other methods for porogen removal include, for example, UV curing or electron beam curing. After removal, the space occupied by the porogen molecule becomes a pore.

The substrate on which the devices that make up the IC chip are built, and on which the dielectric films are used, is, for example, a silicon wafer or a silicon-on-insulator substrate. Silicon wafers are substrates that are typically used in the semiconductor processing industry, although embodiments of the invention are not dependent on the type of substrate used. The substrate could also be comprised of, for example, germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, gallium antimonide, and/or other Group III-V materials, either alone or in combination with silicon or silicon dioxide or other insulating materials. IC devices that make up the chip are built on the substrate surface. Devices are optionally distributed across the substrate surface and/or stacked on top of each other.

In general, a spin-on-dielectric film (SOD) is a dielectric film created by spinning a solution to distribute it across a surface and then solidifying the solution on the surface. A liquid form of the film is placed in the center of the substrate (such as a wafer). The substrate is spun, causing the liquid film material to distribute across the wafer surface. The thickness of the resulting film depends in part on the viscosity of the liquid film. Excess liquid film material is spun off the substrate.

In general, a low-k dielectric material is a dielectric material that has a lower dielectric constant than silicon dioxide (SiO2). Silicon dioxide has a dielectric constant of 3.9. The use of low-k dielectric materials in integrated circuit devices has enabled continued device size reduction. Although a variety of materials have lower dielectric constants than SiO2, not all materials are suitable for integration into integrated circuits and integrated circuit manufacturing processes.

An inter-layer dielectric (ILD) or inter-metal dielectric (IMD) film is the insulating material used between metal conductors and devices (such as transistors) in integrated circuit devices.

Persons skilled in the relevant art appreciate that modifications and variations are possible throughout the disclosure, as are combinations and substitutions for various components shown and described. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, but does not necessarily denote that it is present in every embodiment. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. Various additional layers and/or structures may be included, and/or described features may be omitted, in other embodiments. |
Devices and methods for modifying content rendered on the display of a computing device as a function of eye focus area include receiving sensor data from one or more eye tracking sensors, determining an eye focus area on the display screen as a function of the sensor data, and adjusting one or more visual characteristics of the rendered content as a function of the eye focus area. Perceived quality of the rendered content may be improved by improving the visual characteristics of the content displayed within the eye focus area. Rendering efficiency may be improved by degrading the visual characteristics of the content displayed outside of the eye focus area. Adjustable visual characteristics include the level of detail used to render the content, the color saturation or brightness of the content, and rendering effects such as anti-aliasing, shading, anisotropic filtering, focusing, blurring, lighting, and/or shadowing. |
CLAIMS:

1. A computing device to modify rendered content on a display of the computing device as a function of eye focus area, the computing device comprising: a display having a display screen on which content can be displayed; an eye tracking sensor to generate sensor data indicative of the position of an eye of a user of the computing device; an eye tracking module to receive the sensor data from the eye tracking sensor and determine an eye focus area on the display screen as a function of the sensor data; and a rendering module to adjust a visual characteristic of the rendered content on the display as a function of the eye focus area.

2. The computing device of claim 1, wherein the eye tracking module further comprises a change filter to filter the sensor data to remove saccades from fixations.

3. The computing device of claim 1, wherein the eye tracking module is further to update a heat map with the sensor data and reference the heat map to determine the eye focus area.

4. The computing device of claim 1, wherein to adjust the visual characteristic of the rendered content comprises to improve the visual characteristic of the rendered content within the eye focus area.

5. The computing device of claim 1, wherein to adjust the visual characteristic of the rendered content comprises to degrade the visual characteristic of the rendered content located outside of the eye focus area.

6. The computing device of claim 1, wherein to adjust the visual characteristic of the rendered content comprises to improve the visual characteristic of the rendered content at an area on the display screen of the display outside of the eye focus area.

7. The computing device of claim 1, wherein to adjust the visual characteristic of the rendered content comprises to degrade the visual characteristic of the rendered content on the display screen of the display except for an area on the display screen outside of the eye focus area.

8. The computing device of claim 4, 5, 6, or 7, wherein to adjust the visual characteristic comprises to adjust a level of detail of the rendered content.

9. The computing device of claim 8, wherein to adjust the level of detail comprises to adjust a count of polygons used to render the rendered content.

10. The computing device of claim 8, wherein to adjust the level of detail comprises to adjust a set of textures used to render the rendered content.

11. The computing device of claim 8, wherein to adjust the level of detail comprises to adjust a number of rays traced to render the rendered content.

12. The computing device of claim 8, wherein to adjust the level of detail comprises to adjust a number of display elements used to render the rendered content.

13. The computing device of claim 4, 5, 6, or 7, wherein to adjust the visual characteristic comprises to adjust at least one rendering effect selected from the group consisting of: anti-aliasing, shading, anisotropic filtering, lighting, shadowing, focusing, or blurring.

14. The computing device of claim 4, 5, 6, or 7, wherein to adjust the visual characteristic comprises to adjust color saturation.

15. The computing device of claim 4, 5, 6, or 7, wherein to adjust the visual characteristic comprises to adjust brightness of the display screen.

16. The computing device of claim 15, wherein to adjust brightness of the display screen comprises to adjust brightness of an area of the display screen less than the entire display screen.
The computing device of claim 4, 5, 6, or 7, wherein to adjust the visual characteristic comprises to adjust rendering priority, wherein the rendered content comprises a plurality of parts that are rendered at different times. 18. The computing device of claim 17, wherein the plurality of parts that are rendered at different times comprises a plurality of hypertext elements represented in a hypertext markup language. 19. A method for modifying rendered content on a display of a computing device as a function of eye focus area, the method comprising: receiving, on the computing device, sensor data indicative of the position of an eye of a user of the computing device from an eye tracking sensor of the computing device; determining, on the computing device, an eye focus area on a display screen of the display as a function of the sensor data; and adjusting, on the computing device, a visual characteristic of the rendered content on the display as a function of the eye focus area. 20. The method of claim 19, wherein determining the eye focus area further comprises filtering, on the computing device, the sensor data to remove saccades from fixations. 21. The method of claim 19, wherein determining the eye focus area further comprises updating, on the computing device, a heat map with the sensor data and referencing, on the computing device, the heat map to determine the eye focus area. 22. The method of claim 19, wherein adjusting the visual characteristic of the rendered content comprises improving the visual characteristic of the rendered content within the eye focus area. 23. The method of claim 19, wherein adjusting the visual characteristic of the rendered content comprises degrading the visual characteristic of the rendered content located outside of the eye focus area. 24. The method of claim 19, wherein adjusting the visual characteristic of the rendered content comprises improving the visual characteristic of the rendered content at an area on the display screen of the display outside of the eye focus area. 25. The method of claim 19, wherein adjusting the visual characteristic of the rendered content comprises degrading the visual characteristic of the rendered content on the display screen of the display except for an area on the display screen outside of the eye focus area. 26. The method of claim 22, 23, 24, or 25, wherein adjusting the visual characteristic comprises adjusting a level of detail of the rendered content. 27. The method of claim 26, wherein adjusting the level of detail comprises adjusting a count of polygons used to render the rendered content. 28. The method of claim 26, wherein adjusting the level of detail comprises adjusting a set of textures used to render the rendered content. 29. The method of claim 26, wherein adjusting the level of detail comprises adjusting a number of rays traced to render the rendered content. 30. The method of claim 26, wherein adjusting the level of detail comprises adjusting a number of display elements used to render the rendered content. 31. The method of claim 22, 23, 24, or 25, wherein adjusting the visual characteristic comprises adjusting at least one rendering effect selected from the group consisting of: antialiasing, shading, anisotropic filtering, lighting, shadowing, focusing, or blurring. 32. The method of claim 22, 23, 24, or 25, wherein adjusting the visual characteristic comprises adjusting color saturation. 33. 
The method of claim 22, 23, 24, or 25, wherein adjusting the visual characteristic comprises adjusting brightness of the display screen. 34. The method of claim 33, wherein adjusting brightness of the display screen comprises adjusting brightness of an area of the display screen less than the entire display screen. 35. The method of claim 22, 23, 24, or 25, wherein adjusting the visual characteristic comprises adjusting rendering priority, wherein the rendered content comprises a plurality of parts that are rendered at different times. 36. The method of claim 35, wherein the adjusting rendering priority comprises adjusting rendering priority of a plurality of hypertext elements represented in a hypertext markup language. 37. A computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of claims 19-36. 38. One or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of claims 19-36. |
DEVICE AND METHOD FOR MODIFYING RENDERING BASED ON VIEWER FOCUS AREA FROM EYE TRACKING BACKGROUND Users and developers generally demand ongoing increases in the quality of content rendered on computing devices. For example, video gaming tends to demand increased realism and quality in rendered content to create an immersive, compelling gaming experience. Traditional computing devices render content with the expectation that the user may focus his or her gaze on any part of the display screen of the computing device at any particular time. To realize improvements in rendering quality, traditional computing devices generally rely on increasing the amount of hardware resources available for rendering (e.g., by increasing the number of silicon logic gates, the clock frequency, available bus bandwidth, or the like). Eye-tracking sensors track the movement of a user's eyes and thereby calculate the direction of the user's gaze while using the computing device. Eye-tracking sensors allow the computing device to determine on what part or parts of the display screen the user is focusing his or her gaze. Already common in research settings, eye-tracking technology will likely become less expensive and more widely adopted in the future. BRIEF DESCRIPTION OF THE DRAWINGS The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. FIG. 1 is a simplified block diagram of at least one embodiment of a computing device to modify rendered content on a display based on a viewer focus area; FIG. 2 is a simplified block diagram of at least one embodiment of an environment of the computing device of FIG. 1; FIG. 3 is a simplified flow diagram of at least one embodiment of a method for modifying rendered content on the display based on the viewer focus area, which may be executed by the computing device of FIGS. 1 and 2; and FIG. 4 is a schematic diagram representing a viewer focusing on an area on the display of the computing device of FIGS. 1 and 2. DETAILED DESCRIPTION OF THE DRAWINGS While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific exemplary embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present disclosure. It will be appreciated, however, by one skilled in the art that embodiments of the disclosure may be practiced without such specific details.
In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation. References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Embodiments of the invention may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the invention implemented in a computer system may include one or more bus-based interconnects between components and/or one or more point-to-point interconnects between components. Embodiments of the invention may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) medium, which may be read and executed by one or more processors. A machine-readable medium may be embodied as any device, mechanism, or physical structure for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may be embodied as read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; mini- or micro-SD cards, memory sticks, electrical signals, and others. In the drawings, specific arrangements or orderings of schematic elements, such as those representing devices, modules, instruction blocks and data elements, may be shown for ease of description. However, it should be understood by those skilled in the art that the specific ordering or arrangement of the schematic elements in the drawings is not meant to imply that a particular order or sequence of processing, or separation of processes, is required. Further, the inclusion of a schematic element in a drawing is not meant to imply that such element is required in all embodiments or that the features represented by such element may not be included in or combined with other elements in some embodiments. In general, schematic elements used to represent instruction blocks may be implemented using any suitable form of machine-readable instruction, such as software or firmware applications, programs, functions, modules, routines, processes, procedures, plug-ins, applets, widgets, code fragments and/or others, and each such instruction may be implemented using any suitable programming language, library, application programming interface (API), and/or other software development tools. For example, some embodiments may be implemented using Java, C++, and/or other programming languages.
Similarly, schematic elements used to represent data or information may be implemented using any suitable electronic arrangement or structure, such as a register, data store, table, record, array, index, hash, map, tree, list, graph, file (of any file type), folder, directory, database, and/or others. Further, in the drawings, where connecting elements, such as solid or dashed lines or arrows, are used to illustrate a connection, relationship or association between or among two or more other schematic elements, the absence of any such connecting elements is not meant to imply that no connection, relationship or association can exist. In other words, some connections, relationships or associations between elements may not be shown in the drawings so as not to obscure the disclosure. In addition, for ease of illustration, a single connecting element may be used to represent multiple connections, relationships or associations between elements. For example, where a connecting element represents a communication of signals, data or instructions, it should be understood by those skilled in the art that such element may represent one or multiple signal paths (e.g., a bus), as may be needed, to effect the communication. Referring now to FIG. 1, in one embodiment, a computing device 100 is configured to modify content on a display of the computing device 100 as a function of a viewer's eye focus area. To do so, as discussed in more detail below, the computing device 100 is configured to utilize one or more eye tracking sensors to determine the viewer's eye focus area. The computing device 100 responsively, or continually, adjusts one or more visual characteristics of the rendered content within and/or outside of the eye focus area. Modifying the rendered content as a function of the eye focus area may provide cost, bandwidth, and/or power savings over traditional rendering techniques. For example, in some embodiments, by prioritizing rendering within the viewer's eye focus area, the computing device 100 may render content that is perceived by the viewer to be of higher quality than typical rendering, using the same hardware resources (e.g., the same number of silicon logic gates). Alternatively, in other embodiments the computing device 100 may use fewer hardware resources or require less bandwidth to render content perceived by the viewer to be of equivalent quality to typical rendering. It should be appreciated that the reduction of hardware resources may reduce the cost of the computing device 100. Also, reducing hardware resources and using existing hardware resources more efficiently may reduce the power consumption of the computing device 100. In addition to cost and power savings, modifying rendered content as a function of the eye focus area may allow the computing device 100 to provide an improved user experience. In some embodiments, the computing device 100 may prioritize visual characteristics within the viewer's eye focus area, thus providing better quality for areas of user interest. Additionally or alternatively, the computing device 100 may prioritize visual characteristics at an area of the display screen outside of the viewer's eye focus area in order to draw the viewer's attention to a different area of the screen. 
Such improved user experience may be utilized by productivity applications (e.g., prioritizing the portion of a document the viewer is working on, or providing visual cues to direct the user through a task), by entertainment applications (e.g., changing the focus point of a 3-D scene for dramatic effect), and by other applications. The computing device 100 may be embodied as any type of computing device having a display screen and capable of performing the functions described herein. For example, the computing device 100 may be embodied as, without limitation, a computer, a desktop computer, a personal computer (PC), a tablet computer, a laptop computer, a notebook computer, a mobile computing device, a smart phone, a cellular telephone, a handset, a messaging device, a work station, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, a consumer electronic device, a digital television device, a set-top box, and/or any other computing device having a display screen on which content may be displayed. In the illustrative embodiment of FIG. 1, the computing device 100 includes a processor 120, an I/O subsystem 124, a memory 126, a data storage 128, and one or more peripheral devices 130. Of course, the computing device 100 may include other or additional components, such as those commonly found in a computer (e.g., various input/output devices), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 126, or portions thereof, may be incorporated in the processor 120 in some embodiments. The processor 120 may be embodied as any type of processor currently known or developed in the future and capable of performing the functions described herein. For example, the processor may be embodied as a single or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 126 may be embodied as any type of volatile or non-volatile memory or data storage currently known or developed in the future and capable of performing the functions described herein. In operation, the memory 126 may store various data and software used during operation of the computing device 100 such as operating systems, applications, programs, libraries, and drivers. The memory 126 is communicatively coupled to the processor 120 via the I/O subsystem 124, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 120, the memory 126, and other components of the computing device 100. For example, the I/O subsystem 124 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 124 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 120, the memory 126, and other components of the computing device 100, on a single integrated circuit chip. The data storage 128 may be embodied as any type of device or devices configured for the short-term or long-term storage of data.
For example, the data storage 128 may include any one or more memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. In some embodiments, the computing device 100 maintains a heat map 206 (see FIG. 2) stored in the data storage 128. As discussed in more detail below, the heat map 206 stores changes in viewer focus area over time. Of course, the computing device 100 may store, access, and/or maintain other data in the data storage 128 in other embodiments. In some embodiments, the computing device 100 may also include one or more peripheral devices 130. Such peripheral devices 130 may include any number of additional input/output devices, interface devices, and/or other peripheral devices. For example, in some embodiments, the peripheral devices 130 may include a display, touch screen, graphics circuitry, keyboard, mouse, speaker system, and/or other input/output devices, interface devices, and/or peripheral devices. In the illustrative embodiment, the computing device 100 also includes a display 132 and eye tracking sensor(s) 136. The display 132 of the computing device 100 may be embodied as any type of display capable of displaying digital information such as a liquid crystal display (LCD), a light emitting diode (LED), a plasma display, a cathode ray tube (CRT), or other type of display device. Regardless of the particular type of display, the display 132 includes a display screen 134 on which the content is displayed. The eye tracking sensor(s) 136 may be embodied as any one or more sensors capable of determining an area on the display screen 134 of the display 132 on which the viewer's eyes are focused. For example, in some embodiments, the eye tracking sensor(s) 136 may use active infrared emitters and infrared detectors to track the viewer's eye movements over time. The eye tracking sensor(s) may capture the infrared light reflected off of various internal and external features of the viewer's eye and thereby calculate the direction of the viewer's gaze. The eye tracking sensor(s) 136 may provide precise information on the viewer's eye focus area, i.e., x- and y-coordinates on the display screen 134 corresponding to the eye focus area. Referring now to FIG. 2, in one embodiment, the computing device 100 establishes an environment 200 during operation. The illustrative environment 200 includes an eye tracking module 202 and a rendering module 208. Each of the eye tracking module 202 and the rendering module 208 may be embodied as hardware, firmware, software, or a combination thereof. The eye tracking module 202 is configured to determine an area on the display screen 134 of the display 132 on which the viewer's eyes are focused, using sensor data received from the eye tracking sensor(s) 136. In some embodiments, the eye tracking module 202 may include a change filter 204. Human eye movement is characterized by short pauses, called fixations, linked by rapid movements, called saccades. Therefore, unfiltered eye tracking sensor data may generate rapid and inconsistent changes in eye focus area. Accordingly, the change filter 204 may filter the eye tracking sensor data to remove saccades from fixations. For example, in some embodiments, the change filter 204 may be a "low-pass" filter; that is, the change filter 204 may reject changes in the viewer's focus area having a focus frequency greater than a threshold focus frequency. As a corollary, the change filter 204 may reject focus area changes having a focus duration less than a threshold focus duration.
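Purely as an illustrative sketch (the disclosure describes the change filter 204 functionally but provides no source code), a duration-based implementation of such a filter might group gaze samples into candidate fixations and discard short-lived groups as saccades. The (x, y, timestamp in milliseconds) sample format, the dispersion limit, the helper names, and the default threshold (anticipating the 200 ms example given below) are assumptions introduced here for illustration:

    FOCUS_DURATION_THRESHOLD_MS = 200  # assumed default; see the 200 ms example below

    def filter_fixations(samples, dispersion_px=50):
        # samples: iterable of (x, y, timestamp_ms) gaze points (assumed format).
        # Consecutive samples within the assumed dispersion limit are grouped
        # into a candidate fixation; groups shorter than the duration threshold
        # are treated as saccades and rejected, as the change filter 204 does.
        fixations, group = [], []
        for x, y, t in samples:
            if group:
                gx, gy, _ = group[-1]
                if ((x - gx) ** 2 + (y - gy) ** 2) ** 0.5 > dispersion_px:
                    _keep_if_fixation(group, fixations)
                    group = []
            group.append((x, y, t))
        _keep_if_fixation(group, fixations)
        return fixations

    def _keep_if_fixation(group, fixations):
        # Keep only groups lasting at least the threshold duration, reporting
        # each kept group's mean gaze position as a focus point.
        if group and group[-1][2] - group[0][2] >= FOCUS_DURATION_THRESHOLD_MS:
            xs = [p[0] for p in group]
            ys = [p[1] for p in group]
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))

Rejecting groups shorter than 200 ms corresponds to rejecting focus changes occurring more often than about five times per second, consistent with the low-pass behavior described above.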
In some embodiments, the eye tracking module 202 includes a heat map 206, which records viewer focus areas over time, allowing the eye tracking module 202 to determine areas on the display screen 134 that are often focused on by the viewer. The heat map 206 may be embodied as a two-dimensional representation of the display screen 134. Each element of the heat map 206 may record the number of times the viewer has fixated on a corresponding area of the display screen 134. In other embodiments, each element of the heat map 206 may record the total cumulative time the viewer has fixated on the corresponding area of the display screen 134. Thus, the heat map 206 may provide feedback on multiple areas on the display screen 134 of interest to the viewer. The heat map 206 may record data for a limited period of time, for example, the most recent fixed period of time, or during operation of a particular application. Data in the heat map 206 may be visualized as a color-coded two-dimensional representation overlaying the content rendered on the display screen 134. Such visualization appears similar to a false-color infrared image, lending the name "heat map." The rendering module 208 is configured to adjust one or more visual characteristics of rendered content as a function of the viewer's eye focus area. In some embodiments, the rendering module 208 may prioritize visual characteristics within the eye focus area. That is, the visual characteristics may be adjusted to improve visual characteristics within the eye focus area or to degrade visual characteristics outside of the eye focus area. In alternative embodiments, the rendering module 208 may prioritize visual characteristics outside of the eye focus area, for example to encourage the viewer to change the viewer's focus area. To accomplish such prioritization, the visual characteristics at the eye focus area may be degraded or the visual characteristics at a location away from the eye focus area may be improved. Some embodiments may prioritize visual characteristics both within and outside of the eye focus area, depending on the particular context. As discussed in more detail below, the visual characteristics may be embodied as any type of visual characteristic of the content that may be adjusted. Referring now to FIG. 3, in use, the computing device 100 may execute a method 300 for modifying rendered output on a display of a computing device based on a viewer's eye focus area. The method 300 begins with block 302, in which the eye tracking module 202 determines the eye focus area. For example, referring to FIG. 4, a schematic diagram 400 illustrates a viewer 402 focused on an eye focus area 404 on the display screen 134 of the display 132 of the computing device 100. The eye focus area 404 is illustrated as circular but could be any shape enclosing an area on the display screen 134. The eye focus area may be embodied as a group of pixels or other display elements on the display screen 134, or may be embodied as a single pixel or display element on the display screen 134. Referring back to FIG. 3, in block 304, the eye tracking module 202 receives eye tracking sensor data from the eye tracking sensor(s) 136. The eye focus area may be determined directly as a function of the eye tracking sensor data.
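Where the heat map 206 contributes to this determination, the following minimal sketch illustrates the bookkeeping it might perform; the grid cell size, the NumPy dependency, and the class and method names are assumptions introduced for illustration and are not taken from the disclosure:

    import numpy as np

    class HeatMap:
        # Illustrative sketch of a heat map such as heat map 206: a 2-D grid
        # of per-cell fixation counts covering the display screen 134.
        def __init__(self, screen_width, screen_height, cell_size=32):
            self.cell_size = cell_size
            rows = screen_height // cell_size + 1
            cols = screen_width // cell_size + 1
            self.counts = np.zeros((rows, cols), dtype=np.int64)

        def record_fixation(self, x, y):
            # Increment the fixation count of the cell containing (x, y); an
            # alternative embodiment could accumulate fixation time instead.
            self.counts[int(y) // self.cell_size, int(x) // self.cell_size] += 1

        def hottest_cell(self):
            # Return the screen-space rectangle of the most-fixated cell,
            # which may inform the determination of the eye focus area.
            row, col = np.unravel_index(np.argmax(self.counts), self.counts.shape)
            c = self.cell_size
            return (col * c, row * c, c, c)

Limiting the recorded data to a recent time window or to a particular application, as described above, could be layered onto this structure by periodically decaying or resetting the counts.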
Alternatively to such direct determination, the eye focus area may be determined using one or both of the change filter 204 and the heat map 206, as discussed below. In block 306, the eye tracking module 202 may filter the eye tracking sensor data using the change filter 204. As discussed above, the change filter 204 is embodied as a low-pass filter, which rejects rapid and inconsistent changes in the eye focus area. For example, in some embodiments, the change filter 204 may filter out eye focus area changes with focus duration lasting less than 200 milliseconds (200 ms). Such a period corresponds to rejecting eye movement changes with focus frequency greater than 5 times per second (5 Hz). Of course, change filters having other filter properties may be used in other embodiments. In block 308, the eye tracking module 202 may update the heat map 206 with the eye tracking sensor data. As discussed above, the heat map 206 records eye focus area changes over time. Areas representing higher "density" in the heat map 206 correspond to areas of the display screen 134 on which the viewer has focused more often, which in turn may correspond to areas on the display screen 134 of higher interest to the viewer. The eye tracking module 202 may refer to the heat map 206 to determine the eye focus area, taking into account frequently-focused areas on the display screen 134 of the display 132. In block 310, the rendering module 208 adjusts visual characteristics of the rendered content as a function of the eye focus area determined in block 302. In some embodiments, adjusted visual characteristics may be embodied as the level of detail of rendered content. The level of detail of rendered content has many potential embodiments. For example, for three-dimensional content, the level of detail may be embodied as the number of polygons and/or the level of detail of various textures used to construct a scene. For other embodiments, the level of detail may be embodied as the number of rays traced to generate an image, as with ray-tracing rendering systems. In other embodiments, the level of detail may be embodied as the number of display elements of the display screen 134 used to render an image. For example, certain high-resolution display technologies may render groups of physical pixels (often four physical pixels) together as a single logical pixel, effectively reducing the resolution of the screen. The visual characteristics may also be embodied as visual rendering effects such as anti-aliasing, shaders (e.g., pixel shaders or vertex shaders), anisotropic filtering, lighting, shadowing, focusing, or blurring. Of course, the visual characteristics are not limited to three-dimensional rendered content. For example, the visual characteristics may be embodied as color saturation or display brightness. For certain display technologies, the brightness of individual display elements could be adjusted; that is, the brightness of less than the entire display screen 134 may be adjusted. The visual characteristics may also be embodied as rendering priority. For example, certain visually intensive applications render content in parts (often called "tiles"); that is, large content is split into smaller parts and the parts are rendered separately and often at different times. In some embodiments, adjusting rendering priority would control the order of rendering the various parts making up the content. For example, a graphics editing application could render the part of the image containing the eye focus area first.
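Before continuing the rendering-priority examples, the following minimal sketch illustrates how block 310 might translate the eye focus area into a per-region quality setting, improving the level of detail inside the focus area and progressively degrading it outside; the linear falloff policy and all parameter names are assumptions introduced for illustration, not taken from the disclosure:

    def level_of_detail(region_center, focus_center, focus_radius, max_lod=3):
        # Return an integer level of detail for a rendered region (e.g., a tile):
        # the highest level inside the eye focus area and progressively lower
        # levels farther outside it. The one-step-per-radius linear falloff is
        # an assumed policy, not taken from the disclosure.
        dx = region_center[0] - focus_center[0]
        dy = region_center[1] - focus_center[1]
        distance = (dx * dx + dy * dy) ** 0.5
        if distance <= focus_radius:
            return max_lod
        steps_outside = int((distance - focus_radius) / focus_radius) + 1
        return max(0, max_lod - steps_outside)

A renderer could feed the returned level into the polygon-count, texture, ray-count, or display-element choices enumerated above; prioritizing an area away from the focus area instead would simply invert the comparison.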
As another example of adjusting rendering priority, a graphical browser rendering content described in a markup language (e.g., HTML5) may render text or download images for the elements of the HTML5 document containing the eye focus area first. In some embodiments, the rendering module 208 may adjust the visual characteristics of different areas of the displayed content in different ways. For example, in block 312, the rendering module 208 may improve visual characteristics of the rendered content within the eye focus area. Improving visual characteristics within the eye focus area may improve the image quality perceived by the viewer and may use hardware resources more efficiently than improving visual characteristics of the entire content. Additionally or alternatively, in block 314, the rendering module 208 may degrade visual characteristics of rendered content outside of the eye focus area. Because the visual characteristics within the eye focus area are unchanged, the image quality perceived by the viewer may remain unchanged while rendering efficiency is increased. The precise nature of "improving" or "degrading" a visual characteristic depends on the particular visual characteristic. For example, the polygon count may be improved by increasing the number of polygons and degraded by decreasing the number of polygons. The level of detail of textures may be improved by increasing the size, resolution, or quality of the textures and degraded by decreasing the size, resolution, or quality of the textures. Rendering effects may be improved by adding additional effects or by improving the quality of the effects. For example, shaders may be improved by utilizing additional or more computationally intensive shaders. Rendering effects may be degraded by removing effects or decreasing the quality of the effects. Color saturation or brightness may be improved by increasing the color saturation or brightness and degraded by decreasing the color saturation or brightness. In block 316, the rendering module 208 may, additionally or alternatively, improve visual characteristics of the rendered content at an area on the display screen 134 outside of the eye focus area. For example, referring to FIG. 4, the schematic diagram 400 illustrates the viewer 402 focused on the eye focus area 404 on the display screen 134 of the display 132 of the computing device 100. A hashed area 406 represents an area of the display screen 134 away from the eye focus area 404. By improving the visual characteristics within the area 406, the computing device 100 may encourage the viewer to shift the viewer's focus to the area 406. Referring back to FIG. 3, in block 318, the rendering module 208 may, additionally or alternatively, degrade visual characteristics of the rendered content within the eye focus area. Degrading the visual characteristics in locations on the display screen 134 including the eye focus area may encourage the viewer to shift the viewer's focus to another area of the display with visual characteristics that are not degraded. Particular visual characteristics may be improved or degraded as described above. After the visual characteristics are adjusted, the method 300 loops back to block 302 in which the computing device 100 determines the eye focus area. Thus, the computing device 100 continually monitors the eye focus area and adjusts the visual characteristics appropriately. EXAMPLES Illustrative examples of the devices and methods disclosed herein are provided below.
An embodiment of the devices and methods may include any one or more, and any combination of, the examples described below. Example 1 includes a computing device to modify rendered content on a display of the computing device as a function of eye focus area. The computing device includes a display having a display screen on which content can be displayed; an eye tracking sensor to generate sensor data indicative of the position of an eye of a user of the computing device; an eye tracking module to receive the sensor data from the eye tracking sensor and determine an eye focus area on the display screen as a function of the sensor data; and a rendering module to adjust a visual characteristic of the rendered content on the display as a function of the eye focus area. Example 2 includes the subject matter of Example 1, and wherein the eye tracking module further comprises a change filter to filter the sensor data to remove saccades from fixations. Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the eye tracking module is further to update a heat map with the sensor data and reference the heat map to determine the eye focus area. Example 4 includes the subject matter of any of Examples 1-3, and wherein to adjust the visual characteristic of the rendered content comprises to improve the visual characteristic of the rendered content within the eye focus area. Example 5 includes the subject matter of any of Examples 1-4, and wherein to adjust the visual characteristic of the rendered content comprises to degrade the visual characteristic of the rendered content located outside of the eye focus area. Example 6 includes the subject matter of any of Examples 1-5, and wherein to adjust the visual characteristic of the rendered content comprises to improve the visual characteristic of the rendered content at an area on the display screen of the display outside of the eye focus area. Example 7 includes the subject matter of any of Examples 1-6, and wherein to adjust the visual characteristic of the rendered content comprises to degrade the visual characteristic of the rendered content on the display screen of the display except for an area on the display screen outside of the eye focus area. Example 8 includes the subject matter of any of Examples 1-7, and wherein to adjust the visual characteristic comprises to adjust a level of detail of the rendered content. Example 9 includes the subject matter of any of Examples 1-8, and wherein to adjust the level of detail comprises to adjust a count of polygons used to render the rendered content. Example 10 includes the subject matter of any of Examples 1-9, and wherein to adjust the level of detail comprises to adjust a set of textures used to render the rendered content. Example 11 includes the subject matter of any of Examples 1-10, and wherein to adjust the level of detail comprises to adjust a number of rays traced to render the rendered content. Example 12 includes the subject matter of any of Examples 1-11, and wherein to adjust the level of detail comprises to adjust a number of display elements used to render the rendered content. Example 13 includes the subject matter of any of Examples 1-12, and wherein to adjust the visual characteristic comprises to adjust at least one rendering effect selected from the group consisting of: anti-aliasing, shading, anisotropic filtering, lighting, shadowing, focusing, or blurring.
Example 14 includes the subject matter of any of Examples 1-13, and wherein to adjust the visual characteristic comprises to adjust color saturation. Example 15 includes the subject matter of any of Examples 1-14, and wherein to adjust the visual characteristic comprises to adjust brightness of the display screen. Example 16 includes the subject matter of any of Examples 1-15, and wherein to adjust brightness of the display screen comprises to adjust brightness of an area of the display screen less than the entire display screen. Example 17 includes the subject matter of any of Examples 1-16, and wherein to adjust the visual characteristic comprises to adjust rendering priority, wherein the rendered content comprises a plurality of parts that are rendered at different times. Example 18 includes the subject matter of any of Examples 1-17, and wherein the plurality of parts that are rendered at different times comprises a plurality of hypertext elements represented in a hypertext markup language. Example 19 includes a method for modifying rendered content on a display of a computing device as a function of eye focus area. The method includes receiving, on the computing device, sensor data indicative of the position of an eye of a user of the computing device from an eye tracking sensor of the computing device; determining, on the computing device, an eye focus area on a display screen of the display as a function of the sensor data; and adjusting, on the computing device, a visual characteristic of the rendered content on the display as a function of the eye focus area. Example 20 includes the subject matter of Example 19, and wherein determining the eye focus area further comprises filtering, on the computing device, the sensor data to remove saccades from fixations. Example 21 includes the subject matter of any of Examples 19 and 20, and wherein determining the eye focus area further comprises updating, on the computing device, a heat map with the sensor data and referencing, on the computing device, the heat map to determine the eye focus area. Example 22 includes the subject matter of any of Examples 19-21, and wherein adjusting the visual characteristic of the rendered content comprises improving the visual characteristic of the rendered content within the eye focus area. Example 23 includes the subject matter of any of Examples 19-22, and wherein adjusting the visual characteristic of the rendered content comprises degrading the visual characteristic of the rendered content located outside of the eye focus area. Example 24 includes the subject matter of any of Examples 19-23, and wherein adjusting the visual characteristic of the rendered content comprises improving the visual characteristic of the rendered content at an area on the display screen of the display outside of the eye focus area. Example 25 includes the subject matter of any of Examples 19-24, and wherein adjusting the visual characteristic of the rendered content comprises degrading the visual characteristic of the rendered content on the display screen of the display except for an area on the display screen outside of the eye focus area. Example 26 includes the subject matter of any of Examples 19-25, and wherein adjusting the visual characteristic comprises adjusting a level of detail of the rendered content. Example 27 includes the subject matter of any of Examples 19-26, and wherein adjusting the level of detail comprises adjusting a count of polygons used to render the rendered content.
Example 28 includes the subject matter of any of Examples 19-27, and wherein adjusting the level of detail comprises adjusting a set of textures used to render the rendered content. Example 29 includes the subject matter of any of Examples 19-28, and wherein adjusting the level of detail comprises adjusting a number of rays traced to render the rendered content. Example 30 includes the subject matter of any of Examples 19-29, and wherein adjusting the level of detail comprises adjusting a number of display elements used to render the rendered content. Example 31 includes the subject matter of any of Examples 19-30, and wherein adjusting the visual characteristic comprises adjusting at least one rendering effect selected from the group consisting of: anti-aliasing, shading, anisotropic filtering, lighting, shadowing, focusing, or blurring. Example 32 includes the subject matter of any of Examples 19-31, and wherein adjusting the visual characteristic comprises adjusting color saturation. Example 33 includes the subject matter of any of Examples 19-32, and wherein adjusting the visual characteristic comprises adjusting brightness of the display screen. Example 34 includes the subject matter of any of Examples 19-33, and wherein adjusting brightness of the display screen comprises adjusting brightness of an area of the display screen less than the entire display screen. Example 35 includes the subject matter of any of Examples 19-34, and wherein adjusting the visual characteristic comprises adjusting rendering priority, wherein the rendered content comprises a plurality of parts that are rendered at different times. Example 36 includes the subject matter of any of Examples 19-35, and wherein the adjusting rendering priority comprises adjusting rendering priority of a plurality of hypertext elements represented in a hypertext markup language. Example 37 includes a computing device having a processor and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of claims 19-36. Example 38 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of claims 19-36. |
An interconnect adaptor may be fabricated having a substantially planar surface, to which a microelectronic package may be electrically attached, and a non-planar surface with at least one interconnect extending from the interconnect adaptor planar surface to the interconnect adaptor non-planar surface. The interconnect adaptor non-planar surface may be shaped to substantially conform to a shape of a microelectronic substrate to which it may be attached, which eliminates the need to bend or otherwise adapt the microelectronic package to conform to the microelectronic substrate. |
1. A microelectronic component, comprising: an interconnect adapter having a substantially planar surface and a non-planar surface, wherein at least one conductive interconnect extends from the planar surface to the non-planar surface, wherein the interconnect adapter includes at least one bond pad formed on the non-planar surface of the interconnect adapter and a solder ball formed on each bond pad formed on the non-planar surface of the interconnect adapter; a microelectronic package attached to the planar surface of the interconnect adapter, wherein the microelectronic package includes at least one bond pad contacting the planar surface of the interconnect adapter; and a microelectronic substrate attached to the non-planar surface, wherein at least one solder ball extends between the at least one bond pad of the interconnect adapter and the microelectronic substrate, wherein the interconnect adapter is configured to conform to a shape of the microelectronic substrate, and wherein the non-planar surface comprises one of the following: an arcuate surface that is concave relative to the planar surface, and at least two planar surfaces. 2. The microelectronic component of claim 1, wherein an acute angle is formed between the at least two planar surfaces. 3. The microelectronic component of claim 1, wherein an obtuse angle is formed between the at least two planar surfaces. 4. The microelectronic component of claim 1, wherein the at least two planar surfaces are parallel to each other in a non-planar configuration. 5. The microelectronic component of claim 1, further comprising a microelectronic package attached to the planar surface of the interconnect adapter. 6. A method of fabricating a microelectronic structure, comprising: forming an interconnect adapter body having a substantially planar surface and a non-planar surface; attaching a microelectronic package to the planar surface of the interconnect adapter body, wherein the microelectronic package includes at least one bond pad contacting the planar surface of the interconnect adapter body, and wherein the interconnect adapter includes at least one bond pad formed on the non-planar surface of the interconnect adapter and a solder ball formed on each bond pad formed on the non-planar surface of the interconnect adapter; forming at least one through hole extending from the non-planar surface of the interconnect adapter body to the planar surface of the interconnect adapter body, wherein the at least one through hole exposes at least a portion of at least one microelectronic package bond pad; filling the at least one through hole with a conductive material to form at least one interconnect through the interconnect adapter body; and attaching a microelectronic substrate to the non-planar surface, wherein at least one solder ball extends between the at least one bond pad of the interconnect adapter and the microelectronic substrate, wherein the interconnect adapter is configured to conform to a shape of the microelectronic substrate, and wherein the non-planar surface comprises one of the following: an arcuate surface that is concave relative to the planar surface, and at least two planar surfaces. 7. The method of claim 6, wherein forming the interconnect adapter body comprises molding the interconnect adapter body. 8.
The method of claim 6, wherein forming the at least one through hole comprises laser drilling at least one through hole extending from the non-planar surface of the interconnect adapter body to the planar surface of the interconnect adapter body. 9. The method of claim 6, wherein filling the at least one through hole with a conductive material comprises plating metal in the at least one through hole to form at least one interconnect through the interconnect adapter body. 10. The method of claim 9, wherein plating metal in the at least one through hole comprises plating copper in the at least one through hole. 11. The method of claim 6, wherein an acute angle is formed between the at least two planar surfaces. 12. The method of claim 6, wherein an obtuse angle is formed between the at least two planar surfaces. 13. The method of claim 6, wherein the at least two planar surfaces are parallel to each other in a non-planar configuration. 14. An electronic system, comprising: a board; and a microelectronic component, including: an interconnect adapter having a substantially planar surface and a non-planar surface, wherein at least one conductive interconnect extends from the planar surface to the non-planar surface, wherein the interconnect adapter includes at least one bond pad formed on the non-planar surface of the interconnect adapter, and a solder ball formed on each bond pad formed on the non-planar surface of the interconnect adapter; and a microelectronic package attached to the planar surface of the interconnect adapter, wherein the microelectronic package includes at least one bond pad contacting the planar surface of the interconnect adapter; wherein the board is attached to the non-planar surface by at least one solder ball extending between at least one bond pad of the interconnect adapter and the board, wherein the interconnect adapter is configured to conform to a shape of the board, and wherein the non-planar surface comprises one of the following: an arcuate surface that is concave relative to the planar surface, and at least two planar surfaces. 15. The electronic system of claim 14, wherein the at least two planar surfaces are parallel to each other in a non-planar configuration. |
MICROELECTRONIC INTERCONNECT ADAPTER TECHNICAL FIELD The embodiments of the present description generally relate to the field of microelectronic devices and, more specifically, to microelectronic structures that include microelectronic interconnect adapters that allow the microelectronic structures to be attached to a variety of substrates. BACKGROUND As microelectronic devices become ever smaller, the ability to form microelectronic devices into wearable microelectronic systems is becoming commonplace. Wearable microelectronic systems are expected to become widespread in medical applications and to enable Internet of Things ("IoT") products, that is, products in which small identification devices are attached to numerous objects so that the objects can connect to the Internet and communicate with one another. Packaging such wearable devices will require enhanced integration density, such as system-in-package (SiP), on the one hand, and enhanced size reduction in the length (x), width (y), and height (z) dimensions on the other. Reducing the length and width is important to reduce the surface area required on the printed circuit board or module to which the microelectronic package is mounted. Reducing the height is important not only for size reduction but also for the bending flexibility needed to assemble the package on a flexible printed circuit board or a slightly curved printed circuit board. This bending flexibility can be achieved by using a thin microelectronic package with very thin microelectronic dice inside. These thin microelectronic packages can be mounted on a flexible printed circuit board, which can then be bent slightly to fit into a module or chassis. However, common microelectronic packages, such as fan-out wafer level packages (FO WLP), wafer level chip scale packages (WLCSP), or flip chip (FC) packages, have only very limited bending flexibility. In addition, extreme thinning of the microelectronic package and the microelectronic dice therein reduces their mechanical stability. Further, due to asymmetric mechanical stress on the crystal structure, bending of microelectronic dice (such as silicon-based dice) can have a negative impact on performance. Depending on the bending direction, the performance of the integrated circuitry (e.g., transistors) formed in the microelectronic die can be reduced, for example, by up to about 20%. This can result in significant non-uniformity in the performance of the integrated circuitry within the bent microelectronic die, which can require redesign of the integrated circuitry therein. Moreover, bending microelectronic packages is not suitable for highly curved printed circuit boards, such as tube-shaped surfaces, stepped surfaces, 90° z-direction angles, or bridging applications. Although some of these problems can be addressed by printed electronics technology, such organic-based devices still suffer from poor electrical performance. Therefore, there is a need for a microelectronic package design that does not require bending when used in a wearable microelectronic system. BRIEF DESCRIPTION OF THE DRAWINGS The subject matter of the present disclosure is particularly pointed out and distinctly claimed in the concluding portion of the specification. Taken in conjunction with the drawings, the foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims.
It is understood that the drawings depict only several embodiments according to the present disclosure and therefore are not to be considered limiting of its scope. The present disclosure will be described with additional specificity and detail through the use of the drawings, so that the advantages of the present disclosure can be more readily ascertained, in which: FIG. 1 illustrates a cross-sectional view of a microelectronic package attached to an interconnect adapter having a substantially planar surface and a non-planar surface, wherein an interconnect extends from the planar surface to the non-planar surface, according to an embodiment of the present description; FIGS. 2 to 6 illustrate cross-sectional views of various configurations of an interconnect adapter attached to a microelectronic substrate, according to embodiments of the present description; FIGS. 7 to 16 illustrate cross-sectional views of a process of fabricating the microelectronic component of FIG. 1, according to an embodiment of the present description; FIG. 17 illustrates a flow diagram of a process of fabricating a microelectronic component, according to an embodiment of the present description; and FIG. 18 illustrates a computing device according to one implementation of the present description. DETAILED DESCRIPTION In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the claimed subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the subject matter. It is to be understood that the various embodiments, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein in connection with one embodiment may be implemented within other embodiments without departing from the spirit and scope of the claimed subject matter. Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present description. Thus, the use of the phrase "one embodiment" or "in an embodiment" does not necessarily refer to the same embodiment. Furthermore, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the claimed subject matter. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the subject matter is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the appended claims are entitled. In the drawings, like numerals refer to the same or similar elements or functionality throughout the several views, and the elements depicted therein are not necessarily to scale with one another; rather, individual elements may be enlarged or reduced in order to more easily comprehend the elements in the context of the present description.
A layer "between" the layers can be in direct contact with the layer or can have one or more intervening layers.The embodiments of the present description may include an interconnection adapter having a substantially flat surface and a non-planar surface to which the microelectronic package may be electrically attached, wherein at least one interconnection extends from the interconnection adapter flat surface to the interconnection adapter Non-flat surface. The non-planar surface of the interconnect adapter can be shaped to substantially conform to the shape of the microelectronic substrate to which it can be attached, which eliminates the need for bending or otherwise adapting the microelectronic package to meet the needs of the microelectronic substrate.Fig. 1 illustrates a microelectronic component 100 according to an embodiment of the present invention. As shown in FIG. 1, the microelectronic package 110 may be attached to the interconnect adapter 130. The interconnection adapter 130 may include an interconnection adapter body 132 having a substantially flat surface 134 and a non-planar surface 136. The microelectronic package 110 is electrically attached to the substantially flat surface 134, wherein at least one conductive interconnection 140 is removed from the interconnection adapter body. The flat surface 134 extends to the non-flat surface 136 of the interconnection adapter body.As shown in FIG. 1, the microelectronic package 110 may include, for example, a microprocessor, a chipset, a graphics device, a wireless device, a memory device, an application specific integrated circuit, or the like that are electrically attached to the build-in layer 120 or interposer or The first surface 122 of the microelectronic device 112. The encapsulant material 114 can seal the microelectronic device 112 and abut a portion of the first surface 122 of the interposer/build-up layer. With the corresponding mimic microelectronic package bonding pads 124 formed on the second surface 126 of the interposer/built-in layer 120, the interposer/built-in layer 120 can be electrically attached to the interconnect on the flat surface 134 of the interconnect adapter body 140. The microelectronic package bond pads 124 can be electrically connected to an integrated circuit system (not shown) within the microelectronic device 112 through a conductive path extending through the interposer/built-in layer 120 (as shown by the dashed line 128).Because the microelectronic package 110 is mounted to the interconnect adapter flat surface 134, the microelectronic package 110 will remain flat and there is no need to bend or otherwise deform the microelectronic package 110. Therefore, as those skilled in the art will understand, although the microelectronic package 110 is illustrated in FIG. 1 as a fan-out wafer level package (FO WLP), as will be discussed, any suitable Packaging technology.The interconnection adapter 130 may have at least one bonding pad 138 formed on the non-planar surface 136 of the interconnection adapter body, wherein each bonding pad 138 is in electrical contact with the corresponding interconnection 140. The connectors 142 illustrated as solder balls in FIG. 1 may be formed on each of the interconnection adapter bonding pads 138.The interconnection adapter body 132 may be any suitable, substantially rigid dielectric material. 
The interconnections 140, the interconnection adapter bond pads 138, and the microelectronic package bond pads 124 may be formed of any suitable conductive material, including, but not limited to, metals and metal alloys, such as copper, silver, gold, nickel, and alloys thereof. The encapsulation material 114 may be any suitable sealing material, including, but not limited to, silica-filled epoxies and resins. In one embodiment, the interposer/build-up layer 120 may be formed of multiple layers of dielectric material (not shown), including, but not limited to, silicon dioxide (SiO2), silicon oxynitride (SiOxNy), silicon nitride (Si3N4), silicon carbide (SiC), liquid crystal polymers, epoxy resins, bismaleimide triazine resins, polyimide materials, and the like. The conductive routes 128 may be formed to extend between and through the dielectric material layers (not shown) and may be made of any suitable conductive material, including, but not limited to, copper, silver, gold, nickel, and alloys thereof. The processes used to form the microelectronic package 110 and its components are well known to those skilled in the art and, for purposes of clarity and conciseness, will not be described or illustrated herein.

FIGS. 2 to 6 illustrate various embodiments of the microelectronic component 100 electrically connected to various microelectronic substrates 150 through the connectors 142. The microelectronic substrate 150 may comprise any suitable dielectric material, including, but not limited to, liquid crystal polymers, epoxy resins, bismaleimide triazine resins, FR4, polyimide materials, and the like, and the microelectronic substrate 150 may include conductive routes (not shown) formed therein and/or thereon to form any desired electrical connections between the microelectronic components 100 and/or with additional external components (not shown). The processes used to form the microelectronic substrate 150 are well known to those skilled in the art and, for purposes of clarity and conciseness, will not be described or illustrated herein. As illustrated in FIGS. 2 to 6, the microelectronic substrate 150 may have various shapes, wherein the interconnection adapter 130 is configured to conform to the shape of the microelectronic substrate 150.

As shown in FIG. 2, microelectronic components 1001, 1002 may have interconnection adapters 1301, 1302, respectively, which include arc-shaped or arcuate interconnection adapter body non-planar surfaces 1361, 1362, respectively. In the upper microelectronic component 1001, the interconnection adapter body arcuate surface 1361 may be substantially convex relative to the interconnection adapter body flat surface 1341 so that it can be attached to an inner surface 152 of a tubular or spherical microelectronic substrate 1501. In the lower microelectronic component 1002, the interconnection adapter body arcuate surface 1362 may be substantially concave relative to the interconnection adapter body flat surface 1342 so that it can be attached to an outer surface 154 of the tubular or spherical microelectronic substrate 1501.
As shown in FIG. 3, microelectronic components 1003, 1004 may have interconnection adapters 1303, 1304, respectively, which include interconnection adapter body non-planar surfaces 1363, 1364, respectively, wherein each of the non-planar surfaces 1363, 1364 comprises two converging flat surfaces 136a and 136b. In the upper microelectronic component 1003, the converging flat surfaces 136a and 136b may form an acute angle A1 with one another so that they can be attached to an inner surface 156 of a substantially L-shaped microelectronic substrate 1502. In the lower microelectronic component 1004, the converging flat surfaces 136a and 136b may form an obtuse angle A2 with one another so that they can be attached to an outer surface 158 of the substantially L-shaped microelectronic substrate 1502.

As shown in FIG. 4, a microelectronic component 1005 may have an interconnection adapter 1305 that includes an interconnection adapter body non-planar surface 1365 comprising two flat surfaces 136a and 136b that are parallel to one another but lie in different parallel planes, and a connecting surface 136c extending between the two flat surfaces 136a and 136b. Such a configuration may allow the microelectronic component 1005 to be electrically attached to a microelectronic substrate 1503 having a stepped surface 162 that substantially mirrors the interconnection adapter body non-planar surface 1365. As shown in FIG. 5, a microelectronic substrate 1504 need not have a stepped surface 162 (see FIG. 4) for the non-planar surface 1365 shown in FIG. 4 to be utilized. Rather, an active surface 172 of a sub-microelectronic device 170, whether active or passive, may be electrically attached to a flat microelectronic substrate 1504, wherein one flat surface 136a of the interconnection adapter 1305 may be electrically connected to the microelectronic substrate 1504 and the other flat surface 136b of the interconnection adapter 1305 may be electrically connected to a back surface 174 of the sub-microelectronic device 170, for example, through through-silicon vias (not shown).

As shown in FIG. 6, a microelectronic component 1006 may have an interconnection adapter 1306 that includes an interconnection adapter body non-planar surface 1366 comprising at least one flat surface 1368 and a recess 138 extending into the interconnection adapter 1306, so that the recess 138 can span the microelectronic device 170.

As can be seen in FIGS. 2 to 6, the embodiments of the present description may allow the microelectronic component 100 to be attached to a curved or non-planar substrate 150 without the need to bend, or otherwise apply stress to, the microelectronic package 110 and the microelectronic device 112 therein. The microelectronic component 100 may allow attachment to the microelectronic substrate 150 at locations where placement is not currently possible, which may reduce the overall form factor (e.g., size) of the resulting system or module and may allow attachment within small tubes or wearable objects, such as rings or bracelets. In addition, the described embodiments may not require any thinning of the microelectronic package or microelectronic die, which, as those skilled in the art will understand, can result in performance degradation due to mechanical stress.
Therefore, standard high-performance packaging technologies may be used, such as fan-out wafer-level packaging (FO WLP) with features such as system-in-package (SiP), package-on-package (PoP), three-dimensional stacking, and the like (as shown in FIGS. 1 to 6), as well as wafer-level chip scale packaging (WLCSP), flip chip (FC) packaging, quad flat no-lead (QFN) packaging, dual flat no-lead (DFN) packaging, and the like.

FIGS. 7 to 16 illustrate one embodiment of fabricating the microelectronic component 100 illustrated in FIG. 1. As shown in FIG. 7, a mold chase 210 having at least one recess 212 therein may be formed. As those skilled in the art will understand, the mold chase recess 212 may be a negative mold of the desired shape of the interconnection adapter non-planar surface 136 (see FIG. 1). Those skilled in the art will also understand that the configuration and number of the mold chase recesses 212 may be determined by the packaging technology used, such as panel, wafer, strip, and the like. As further shown in FIG. 7, a liquid dielectric mold compound 220 may be deposited into and fill the mold chase recess 212. Once filled, excess dielectric mold compound 220 may be removed, for example, with a removal tool or squeegee 222 pulled across the mold chase 210 (arrow 224), to form the interconnection adapter flat surface 134, as shown in FIG. 8. As shown in FIG. 9, the dielectric mold compound 220 (see FIG. 8) may be cured or partially cured to form the interconnection adapter body 132 of the interconnection adapter 130, and the microelectronic package 110 may be attached to the interconnection adapter body flat surface 134, which may occur before or after the dielectric mold compound 220 is removed from the mold chase 210 (see FIGS. 7 and 8). Although a molding process is illustrated in FIGS. 7 and 8, it is understood that the interconnection adapter body 132 may instead be formed by mechanically polishing, laser ablating, or otherwise shaping a bulk dielectric material, or may be fabricated with a three-dimensional printing process or the like.

After the interconnection adapter body 132 is formed, as shown in FIG. 10, at least one through hole 226 may be formed to extend from the interconnection adapter body non-planar surface 136 to the interconnection adapter body flat surface 134, wherein a portion of each microelectronic package bond pad 124 may be exposed. In one embodiment, the through holes 226 may be formed by laser drilling. The use of laser drilling may result in no taper or very low taper in the through holes 226 and may allow the through holes 226 to have a diameter of between about 20 μm and 25 μm, regardless of the distance from the interconnection adapter body non-planar surface 136 to the interconnection adapter body flat surface 134. If an electroless plating process is to be used to form the interconnections 140 (see FIG. 1), a seed layer (not shown) may be formed, for example by sputter deposition or electroless plating, on the exposed portion of each microelectronic package bond pad 124, on the sidewalls of the through holes 226, and on the interconnection adapter body non-planar surface 136.

After the through holes 226 (see FIG. 10) are formed, a resist material layer 232 may be formed on the interconnection adapter body non-planar surface 136, either non-conformally, for example by spin coating, as shown in FIG. 11, or
conformally, for example by spray coating, as shown in FIG. 12. In one embodiment, the thickness of the resist may be between about 50 μm and 100 μm. As shown in FIG. 13, openings 234 may be formed through the resist material layer 232 to the through holes 226, and the resist material within the through holes 226 may be removed, for example, by photolithography and/or laser drilling. In one embodiment, a negative tone resist material layer 232 may be used to allow removal of the resist material from the through holes 226.

As shown in FIG. 14, interconnections 140 may be formed in the through holes 226 (see FIG. 13). In one embodiment, the interconnections 140 may be formed by filling the through holes 226 (see FIG. 13) using an electroless plating process on the seed layer (not shown) deposited after the through holes 226 were formed, as discussed with regard to FIG. 10. As shown in FIG. 14, the plating process may result in the formation of interconnection adapter bond pads 138 integral with their respective interconnections 140. In one embodiment, the interconnections 140 may be formed of any suitable metal. In a specific embodiment, the interconnections 140 may be formed of copper. In another embodiment, for through holes 226 having a diameter of between about 20 μm and 25 μm (see FIG. 13), the thickness of the plating material should be between about 15 μm and 20 μm to fill the through holes 226 (see FIG. 13). As further shown in FIG. 14, after the interconnections 140 and the interconnection adapter bond pads 138 are formed, an under-bump metallization structure 144 (such as a nickel barrier layer and a tin/silver wetting layer) may be formed on each interconnection adapter bond pad 138. As further shown in FIG. 14, a solder material 242 may be deposited on each under-bump metallization structure 144. The solder material 242 may be any suitable material, including, but not limited to, lead/tin alloys (such as 63% tin/37% lead solder) and high tin content alloys (such as 90% or more tin, for example tin/bismuth, eutectic tin/silver, ternary tin/silver/copper, eutectic tin/copper, and similar alloys).

As shown in FIG. 15, the resist material layer 232 (see FIG. 14) and any remaining seed layer material (not shown) may be removed. As shown in FIG. 16, the solder material 242 (see FIG. 15) may be reflowed (heated) to form the connectors 142 (e.g., solder balls) and the resulting microelectronic component 100. It is understood that, if a plurality of microelectronic components 100 are formed simultaneously and integrally, they may be singulated, for example by mechanical dicing, after the connectors 142 are formed.

It is understood that similar processes may be used to form interconnection adapter body non-flat surfaces 136 of different shapes, and it is also understood that process adjustments may be necessary.

FIG. 17 is a flowchart of a process 300 of fabricating a flexible microelectronic system according to an embodiment of the present description. As set forth in block 302, an interconnection adapter body having a substantially flat surface and a non-flat surface may be formed. The microelectronic package may be attached to the flat surface of the interconnection adapter body, as set forth in block 304, wherein the microelectronic package includes at least one bond pad that contacts the flat surface of the interconnection adapter body.
As set forth in block 306, at least one through hole may be formed extending from the non-planar surface of the interconnection adapter body to the flat surface of the interconnection adapter body, wherein the at least one through hole exposes at least a portion of at least one microelectronic package bond pad. As set forth in block 308, the at least one through hole may be filled with a conductive material to form at least one interconnection through the interconnection adapter body.

FIG. 18 illustrates a computing device 400 according to one embodiment of the present description. The computing device 400 houses a board 402. The board may include a number of microelectronic components, including, but not limited to, a processor 404, at least one communication chip 406A, 406B, volatile memory 408 (such as DRAM), non-volatile memory 410 (such as ROM), flash memory 412, a graphics processor or CPU 414, a digital signal processor (not shown), a cryptographic processor (not shown), a chipset 416, an antenna, a display (touchscreen display), a touchscreen controller, a battery, an audio codec (not shown), a video codec (not shown), a power amplifier (AMP), a global positioning system (GPS) device, a compass, an accelerometer (not shown), a gyroscope (not shown), a speaker (not shown), a camera, and a mass storage device (not shown) (such as a hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth). Any of the microelectronic components may be physically and electrically attached to the board 402. In some embodiments, at least one of the microelectronic components may be part of the processor 404.

The communication chips enable wireless communications for the transfer of data to and from the computing device. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chips may implement any of a number of wireless standards or protocols, including, but not limited to, Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device may include a plurality of communication chips. For instance, a first communication chip may be dedicated to shorter range wireless communications (such as Wi-Fi and Bluetooth) and a second communication chip may be dedicated to longer range wireless communications (such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE
, Ev-DO, and others).

The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

Any of the microelectronic components within the computing device 400 may include a microelectronic structure having an interconnection adapter, as described above.

In various implementations, the computing device may be a laptop computer, a netbook, a notebook computer, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device may be any other electronic device that processes data.

It is understood that the subject matter of the present description is not necessarily limited to the specific applications illustrated in FIGS. 1-18. As those skilled in the art will understand, the subject matter may be applied to other microelectronic device and assembly applications, as well as any appropriate electronic applications.

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.

In Example 1, a microelectronic component may include an interconnection adapter having a substantially flat surface and a non-flat surface, wherein at least one conductive interconnection extends from the flat surface to the non-flat surface.

In Example 2, the subject matter of Example 1 can optionally include the non-planar surface including an arcuate surface.

In Example 3, the subject matter of Example 2 can optionally include the arcuate non-planar surface being concave relative to the flat surface.

In Example 4, the subject matter of Example 2 can optionally include the arcuate non-planar surface being convex relative to the flat surface.

In Example 5, the subject matter of Example 1 can optionally include the non-flat surface including at least two flat surfaces.

In Example 6, the subject matter of Example 5 can optionally include the at least two flat surfaces forming an acute angle with one another.

In Example 7, the subject matter of Example 5 can optionally include the at least two flat surfaces forming an obtuse angle with one another.

In Example 8, the subject matter of Example 5 can optionally include the at least two flat surfaces being parallel to one another in a non-planar configuration.

In Example 9, the subject matter of Example 1 can optionally include a microelectronic package attached to the flat surface of the interconnection adapter.

In Example 10, a method of fabricating a microelectronic structure may include: forming an interconnection adapter body having a substantially flat surface and a non-planar surface; attaching a microelectronic package to the flat surface of the interconnection adapter body, wherein the microelectronic package includes at least one bond pad contacting the flat surface of the interconnection adapter body; forming at least one through hole extending from the non-planar surface of the interconnection adapter body to the flat surface of the interconnection adapter body, wherein the at least one through hole exposes at least a portion of the at least one microelectronic package bond pad; and filling the at least one through hole with a conductive material to form at least one interconnection through the interconnection adapter body.

In Example 11,
the subject matter of Example 10 can optionally include forming the interconnection adapter body by molding the interconnection adapter body.

In Example 12, the subject matter of Example 10 can optionally include forming the at least one through hole by laser drilling at least one through hole extending from the non-planar surface of the interconnection adapter body to the flat surface of the interconnection adapter body.

In Example 13, the subject matter of Example 10 can optionally include filling the at least one through hole with a conductive material by plating a metal in the at least one through hole to form the at least one interconnection through the interconnection adapter body.

In Example 14, the subject matter of Example 13 can optionally include plating copper in the at least one through hole.

In Example 15, the subject matter of any one of Examples 10 to 14 can optionally include forming the interconnection adapter body having a substantially flat surface and a non-flat surface, wherein the non-flat surface includes an arcuate surface.

In Example 16, the subject matter of Example 15 can optionally include the arcuate non-planar surface being concave relative to the flat surface.

In Example 17, the subject matter of Example 15 can optionally include the arcuate non-planar surface being convex relative to the flat surface.

In Example 18, the subject matter of any one of Examples 10 to 14 can optionally include forming the interconnection adapter body having a substantially flat surface and a non-flat surface, wherein the non-flat surface includes at least two flat surfaces.

In Example 19, the subject matter of Example 18 can optionally include the at least two flat surfaces forming an acute angle with one another.

In Example 20, the subject matter of Example 18 can optionally include the at least two flat surfaces forming an obtuse angle with one another.

In Example 21, the subject matter of Example 18 can optionally include the at least two flat surfaces being parallel to one another in a non-planar configuration.

In Example 22, an electronic system may include a board and a microelectronic component including an interconnection adapter having a substantially flat surface and a non-flat surface, wherein at least one conductive interconnection extends from the flat surface to the non-flat surface, and a microelectronic package attached to the flat surface of the interconnection adapter, wherein the microelectronic component is electrically attached to the board through a connector extending from the non-flat surface of the interconnection adapter.

In Example 23, the subject matter of Example 22 can optionally include the non-planar surface including an arcuate surface.

In Example 24, the subject matter of Example 23 can optionally include the arcuate non-planar surface being concave relative to the flat surface.

In Example 25, the subject matter of Example 23 can optionally include the arcuate non-planar surface being convex relative to the flat surface.

In Example 26, the subject matter of Example 22 can optionally include the non-flat surface including at least two flat surfaces.

In Example 27, the subject matter of Example 26 can optionally include the at least two flat surfaces forming an acute angle with one another.

In Example 28, the subject matter of Example 26 can optionally include the at least two flat surfaces forming an obtuse angle with one another.

In Example 29, the subject matter of
Example 26 can optionally include the at least two flat surfaces being parallel to one another in a non-planar configuration.

Having thus described embodiments of the present description in detail, it is understood that the description defined by the appended claims is not to be limited by the particular details set forth in the above description, as many apparent variations thereof are possible without departing from the spirit or scope thereof. |
The present invention generally relates to a method of forming a graded junction within a semiconductor substrate. A first masking pattern having a first opening characterized by a first lateral dimension is formed over the semiconductor substrate. The semiconductor substrate is doped with a first dopant, using the first masking pattern as a doping mask, thereby forming a first dopant region in the semiconductor substrate underlying the first opening. The first masking pattern is swelled to decrease the first lateral dimension of the first opening to a second lateral dimension. The semiconductor substrate is then doped with a second dopant, using the swelled first masking pattern as a doping mask, thereby forming a second dopant region in the semiconductor substrate, and furthermore defining a graded junction within the semiconductor substrate. |
What is claimed is:

1. A method of forming a graded junction for a device on a semiconductor substrate, the method comprising the steps of: forming a dielectric layer overlying a surface of the semiconductor substrate; forming a first masking pattern overlying the dielectric layer overlying the surface of the semiconductor substrate, the first masking pattern having a first opening associated with a first region of the semiconductor substrate, wherein the first opening has a first lateral dimension; doping the semiconductor substrate with a first dopant of a first conductivity type, wherein the first masking pattern is used as a doping mask to form a first dopant region in the semiconductor substrate; swelling the first masking pattern to decrease the first lateral dimension of the first opening to a second lateral dimension, wherein the second lateral dimension is smaller than the first lateral dimension, and wherein the swelled doping mask overlies one or more portions of the first dopant region; and doping the semiconductor substrate with a second dopant of a second conductivity type, wherein the swelled first masking pattern is used as a doping mask to form a second dopant region in the semiconductor substrate, wherein the second dopant region defines a buried bitline region, the method further comprising the steps of: selectively etching the dielectric layer overlying the first dopant region prior to swelling the first masking pattern; removing the first masking pattern after doping the semiconductor substrate with the second dopant; and oxidizing the semiconductor substrate to form a bitline oxide region overlying the first region.

2. The method of claim 1, wherein the step of forming a dielectric layer comprises forming an oxide-nitride-oxide (ONO) layer.

3. The method of claim 1, wherein the step of forming a dielectric layer comprises forming an oxide-nitride (ON) layer.

4. The method of claim 1, wherein the step of doping the semiconductor substrate to form a first dopant region comprises performing a boron ion implantation.

5. The method of claim 1, wherein the step of doping the semiconductor substrate to form the second dopant region comprises performing an arsenic, phosphorus, or antimony ion implantation.

6. The method of claim 1, further comprising the step of removing the first masking pattern after doping the semiconductor substrate with the first dopant and the second dopant.

7. The method of claim 1, wherein the step of swelling the first masking layer comprises performing a thermal flow process.

8. The method of claim 1, wherein the one or more portions of the first dopant region masked by the swelled doping mask generally maintain the first conductivity type, thereby defining pocket regions in the semiconductor substrate adjacent to the second dopant region.

9. The method of claim 1, wherein the first and second conductivity types are different.

10. The method of claim 1, wherein the step of forming a first masking pattern comprises forming a photoresist pattern.

11. The method of claim 10, wherein the step of swelling the first masking pattern to decrease the first lateral dimension of the first opening to a second lateral dimension comprises wetting the photoresist with a solvent.

12. The method of claim 10, wherein the step of swelling the first masking pattern to decrease the first lateral dimension of the first opening to a second lateral dimension comprises treating the photoresist with an organic material.

13.
The method of claim 10, wherein the step of swelling the first masking pattern to decrease the first lateral dimension of the first opening to a second lateral dimension comprises using a hydrophilic resin with swelling properties.

14. A method of forming a graded junction for a device on a semiconductor substrate, the method comprising the steps of: forming a first masking pattern overlying a surface of the semiconductor substrate, the first masking pattern having a first opening associated with a first region of the semiconductor substrate, wherein the first opening has a first lateral dimension; doping the semiconductor substrate with a first dopant of a first conductivity type, wherein the first masking pattern is used as a doping mask to form a first dopant region in the semiconductor substrate; swelling the first masking pattern to decrease the first lateral dimension of the first opening to a second lateral dimension, wherein the second lateral dimension is smaller than the first lateral dimension, and wherein the swelled doping mask overlies one or more portions of the first dopant region; doping the semiconductor substrate with a second dopant of a second conductivity type, wherein the swelled first masking pattern is used as a doping mask to form a second dopant region in the semiconductor substrate; and removing the first masking pattern after doping the semiconductor substrate with the first dopant and the second dopant; forming a dielectric layer overlying the surface of the semiconductor substrate; forming a polysilicon layer overlying the dielectric layer; forming a second masking pattern overlying the first region of the semiconductor substrate; etching the polysilicon layer and the dielectric layer using the second masking pattern and the semiconductor substrate as an etch stop, thereby forming a gate of a lightly-doped-drain (LDD) transistor; and removing the second masking pattern.

15. The method of claim 14, wherein the step of forming a dielectric layer comprises forming an oxide-nitride-oxide (ONO) layer.

16. The method of claim 14, wherein the step of forming a dielectric layer comprises forming a field oxide layer. |
FIELD OF THE INVENTION

The present invention relates generally to the fabrication of a semiconductor device and more particularly to a method of forming a graded junction comprising multiple doped regions in the semiconductor device.

BACKGROUND OF THE INVENTION

During semiconductor fabrication, numerous doped regions are formed in a semiconductor substrate. These doped regions perform various functions, such as source and drain regions for metal-oxide-semiconductor (MOS) transistors, buried electrical signal lines, substrate resistors and the like. Often, it is necessary to form doped regions having varying junction depths in order to meet different electrical resistance requirements and current handling requirements of a semiconductor device. Because of the electrical field created by a buried junction, the geometric profile of the junction can be important where electric components having extremely small feature sizes are being fabricated. For example, a lightly-doped-drain (LDD) structure in a channel region of an MOS transistor is necessary to insure proper functioning of a sub-micron transistor. Additionally, in advanced electrically-erasable-programmable-read-only-memory (EEPROM) devices, pocket regions are fabricated in a semiconductor substrate having a precise junction profile within the substrate.

Product development efforts in EEPROM device technology have focused on increasing programming speed, lowering programming and reading voltages, increasing data retention time, reducing cell erasure times and reducing cell dimensions. EEPROM device designers have taken advantage of the ability of silicon nitride to store charge in localized regions and have designed memory circuits that utilize two regions of stored charge within an oxide-nitride-oxide (ONO) layer. This type of non-volatile memory device is known as a two-bit EEPROM. The two-bit EEPROM is capable of storing twice as much information as a conventional EEPROM in a memory array of approximately equal size. A left and right bit is stored in physically different areas of the silicon nitride layer, near left and right regions of each memory cell. Programming methods are then used that enable two bits to be programmed and read concurrently. The two bits of the memory cell can be erased individually by applying suitable erase voltages to the gate and to either the source or drain regions. The two-bit memory cell utilizes pocket regions adjacent to a buried bit-line region. Electrons are sourced from the pocket regions and injected into the silicon nitride layer.
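The two-bit behavior described above can be summarized, purely for illustration, as a small state model: each cell holds a left bit and a right bit that are programmed and erased independently but read from the same cell. The following Python sketch is not part of the patent; the class and method names are hypothetical.

```python
# Hypothetical model of a two-bit EEPROM cell as described above: charge is
# stored in two localized regions of the nitride layer, and each bit can be
# programmed and erased individually while both can be read from one cell.

class TwoBitCell:
    def __init__(self) -> None:
        # False = erased (no stored charge), True = programmed
        self.charge = {"left": False, "right": False}

    def program(self, side: str) -> None:
        """Inject charge into one localized region of the nitride layer."""
        self.charge[side] = True

    def erase(self, side: str) -> None:
        """Erase one bit individually (in the device, by applying suitable
        erase voltages to the gate and to the source or drain region)."""
        self.charge[side] = False

    def read(self):
        """Both bits are readable from the same memory cell."""
        return (self.charge["left"], self.charge["right"])


cell = TwoBitCell()
cell.program("left")
cell.program("right")
cell.erase("right")  # erasing one bit leaves the other intact
assert cell.read() == (True, False)
```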
As advanced MOS and EEPROM devices are scaled to smaller dimensions, it becomes more difficult to form the doped regions at precise locations in the substrate. In particular, the pocket regions of EEPROM arrays using two-bit data storage and the LDD regions of MOS transistors must be carefully fabricated to avoid excessive overlap with the source and drain regions. Accordingly, as device dimensions are scaled to smaller values, advances in fabrication technology are necessary to insure proper functioning devices. Memory devices for non-volatile storage of information are in widespread use today, being used in a myriad of applications. A few examples of non-volatile semiconductor memory include read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM) and flash EEPROM.

SUMMARY OF THE INVENTION

The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is intended to neither identify key or critical elements of the invention nor delineate the scope of the invention. Its primary purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.

The present invention relates generally to a method of forming a graded junction in a semiconductor substrate. In particular, the method can be utilized in a formation of pocket regions in an EEPROM device by performing a step of doping the semiconductor substrate at an angle of incidence substantially normal to a surface of the semiconductor substrate.

According to one aspect of the present invention, a method of forming a graded junction is disclosed, wherein a first masking pattern is formed over a surface of a semiconductor substrate, wherein the first masking pattern has a first opening associated therewith. The first opening is associated with a first region of the semiconductor substrate, and is characterized by a first lateral dimension. The semiconductor substrate is doped with a first dopant, wherein the first masking pattern is generally used as a doping mask, thereby doping the first region of the semiconductor substrate with a first conductivity associated with the first dopant.

According to another aspect of the present invention, the first masking pattern is swelled to decrease the first lateral dimension of the first opening. The swelled first masking pattern therefore defines a second lateral dimension of the first opening, wherein the second lateral dimension is smaller than the first lateral dimension. Furthermore, the swelled first masking pattern generally overlies one or more portions of the first dopant region. According to one exemplary aspect of the invention, a RELACS process is performed to swell the first masking pattern.

In accordance with yet another aspect of the present invention, the semiconductor substrate is doped with a second dopant, wherein the swelled first masking pattern is used as a doping mask. Doping the semiconductor substrate with the second dopant defines a second dopant region. Portions of the first dopant region, however, generally retain the characteristics and conductivity of the first dopant, thereby defining pocket regions in the semiconductor substrate adjacent to the second dopant region, and furthermore defining a graded junction within the semiconductor substrate.

To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative embodiments of the invention. These embodiments are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG.
1 illustrates a perspective view of a portion of a conventional semiconductor device comprising a graded junction.

FIG. 2 illustrates a flow chart diagram representation of a conventional methodology of forming a graded junction.

FIGS. 3A-3C illustrate side cross-sectional views of conventional processing steps for forming a graded junction.

FIG. 4 illustrates a flow chart diagram representation of a method in accordance with one aspect of the present invention.

FIGS. 5-10 illustrate side cross-sectional views of processing steps to form a graded junction in accordance with one aspect of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. It should be understood that the descriptions of these aspects are merely illustrative and that they should not be taken in a limiting sense. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident to one skilled in the art, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate description of the present invention. It is also to be understood that like reference numerals throughout the description imply similar functionality; however, the use of like reference numerals does not necessarily imply the same device.

FIG. 1 illustrates a perspective view of an exemplary conventional semiconductor device 10 comprising a graded junction 12 in a semiconductor substrate 14. Graded junctions typically comprise doped regions having varying junction depths in order to meet different electrical resistance requirements and current handling requirements of the semiconductor device 10 (e.g., a buried bitline). The graded junction 12 comprises a first doped region 16 and a second doped region 18, wherein each of the first doped region and the second doped region has a specified conductivity to meet the requirements of the semiconductor device 10. For example, such a graded junction can be utilized in a buried bitline or a lightly-doped-drain (LDD) transistor. Further processes are also typically performed in the manufacture of the semiconductor device 10, such as in the case of a buried bitline, wherein a dielectric 20 is formed over the semiconductor substrate 14, and a bitline oxide 22 is formed over the first doped region 16 and second doped region 18 of the substrate.

A conventional method for forming a graded junction is illustrated in FIG. 2 and FIGS. 3A-3C, respectively. Method 30 begins at step 32, wherein a masking pattern is formed over a silicon substrate. Referring now to FIG. 3A, an exemplary portion 40 of a silicon substrate 42 is illustrated, wherein a dielectric layer 44 and a resist 46 have been formed over the substrate. An opening 48 in the resist 46 generally exposes a predetermined portion of the dielectric layer 44 overlying the substrate 42. As illustrated in FIG. 2, step 34 comprises doping the substrate 42 with a first dopant, thereby forming a buried bitline region 52, as illustrated in FIG. 3A. The method 30 of FIG. 2 continues with etching the resist at step 36, thereby increasing a lateral dimension of the opening 48, as illustrated in FIG. 3B. After etching the resist 46, a second dopant is utilized at step 38 of FIG.
2 to dope the substrate 42 with a second dopant, wherein pocket regions 54 are formed, thereby defining a graded junction 56.

Such a method 30 has several difficulties. For example, altering the resist opening (step 36) in a controlled, reliable manner is difficult, and may result in non-uniformities. In addition, a minimum geometry associated with at least one region 52 of the graded junction 56 is dictated by the capability of the lithography system which exposes the substrate 42. Therefore, one heretofore has not been capable of making implanted regions 52 using substantially normal incidence of exposure, wherein the geometry of the region 52 was smaller than the capability of the lithography system.

FIG. 4 illustrates an exemplary method 100 of forming a graded junction according to the present invention. While exemplary methods are illustrated and described herein as a series of acts or events, it will be appreciated that the present invention is not limited by the illustrated ordering of such acts or events, as some steps may occur in different orders and/or concurrently with other steps apart from that shown and described herein, in accordance with the invention. In addition, not all illustrated steps may be required to implement a methodology in accordance with the present invention. Moreover, it will be appreciated that the methods may be implemented in association with the apparatus and systems illustrated and described herein as well as in association with other systems not illustrated.

The method 100 begins with forming a first masking pattern over a semiconductor substrate at 110, wherein the first masking pattern comprises a first opening. An exemplary result of performing act 110 is illustrated in FIG. 5. FIG. 5 illustrates, in cross-section, a portion 200 of an exemplary semiconductor substrate 205 having a surface 210. A first masking pattern 215 is formed over the surface 210 of the substrate 205 at 110, wherein the first masking pattern has a first opening 220 associated therewith. As will be understood by one of ordinary skill in the art, the first masking pattern 215 is formed, for example, by depositing a layer of photoresist over the surface 210 of the substrate 205. The photoresist is then exposed to a predetermined wavelength of radiation through a masking reticle (not shown), and developed in a conventional developing solution to form the first opening 220. The first opening 220 is further characterized by a first lateral dimension D1, wherein the first lateral dimension is measured between a first sidewall 230 and a second sidewall 235 of the first opening.

Referring again to FIG. 4, a first doping process is performed on the semiconductor substrate 205 at 120, wherein the first masking pattern 215 of FIG. 5 is used as a doping mask. The first doping process of act 120 is carried out to form, as illustrated in FIG. 5, a first dopant region 225 in the semiconductor substrate 205 associated with the first opening 220. The first dopant region 225 generally has a first junction profile 240 that is substantially continuous with the first sidewall 230 and second sidewall 235, respectively. The first dopant region 225 is furthermore characterized by the first lateral dimension D1, as well as by a first junction depth X1.

In accordance with one aspect of the present invention, the first dopant region 225 is formed by ion implantation of a first conductivity type.
Alternatively, other exemplary doping processes can be carried out to form the first dopant region 225, such as molecular beam ion implantation or plasma induced ion deposition. According to one exemplary aspect of the invention, a p-type dopant, such as boron, is utilized in doping the semiconductor substrate 205 to form the first dopant region 225.

After forming the first dopant region 225, further processing is carried out at 130 of FIG. 4 to swell the first masking pattern 215. For example, a RELACS process is performed, wherein the first masking pattern 215 swells by a predetermined amount. According to one exemplary aspect of the present invention, the first masking pattern 215 comprises a photoresist, and the photoresist is swelled by wetting the photoresist with a solvent, or by treating the resist with an organic chemical, such as a saturated hydrocarbon (e.g., an aliphatic or aromatic hydrocarbon). Alternatively, the first masking pattern 215 comprises a hydrophilic resin with swelling properties. Those skilled in the art will appreciate that various methods (e.g., a thermal flow process) for swelling the first masking pattern 215 exist, depending upon the particular material composition of the first masking pattern, and all such swelling methods are contemplated as falling within the scope of the present invention.

FIG. 6 illustrates the result of act 130, wherein a swelled portion 245 of the first masking pattern 215 decreases the magnitude of the first lateral dimension D1 of the first opening 220 to a second lateral dimension D2. Accordingly, after swelling the first masking pattern 215, the swelled portion 245 overlies one or more portions 260 of the first dopant region 225. According to another aspect of the present invention, substantially vertical edge surfaces 250 and 255 are formed by swelling the first sidewall 230 and second sidewall 235, respectively, of the first masking pattern 215 at act 130. During the swelling process of the present invention, all surfaces of the first masking pattern 215 are furthermore swelled at approximately the same rate; however, a selective swelling of one or more sidewalls is contemplated as falling within the scope of the present invention.

Referring again to FIG. 4, a second doping process is performed on the semiconductor substrate 205 at 140, wherein the first masking pattern 215 is again used as a doping mask after the swelling thereof. The second doping process is carried out to form, as illustrated in FIG. 7, a second dopant region 265 associated with the first, reduced opening 220. The second dopant region 265 generally has a second junction profile 275 that is substantially continuous with the respective edge surfaces 250 and 255 of the swelled portion 245 of the first masking pattern 215. The second dopant region 265 is furthermore characterized by the second lateral dimension D2, as well as by a second junction depth X2. In accordance with one exemplary aspect of the invention, the second junction depth X2 of the second dopant region 265 is greater than the first junction depth X1 of the first dopant region 225.

In accordance with another aspect of the present invention, the second dopant region 265 is formed by ion implantation of a second conductivity type. Alternatively, other exemplary doping processes can be carried out to form the second dopant region 265, such as molecular beam ion implantation or plasma induced ion deposition. According to one exemplary aspect of the invention, an n-type dopant, such as arsenic, phosphorus, or antimony, is utilized in doping the semiconductor substrate 205 to form the second dopant region 265.

After the second doping process is performed, the one or more portions 260 of the first dopant region 225 generally maintain the first dopant conductivity, thereby defining pocket regions 270 within the semiconductor substrate 205.
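Purely as an illustration of this geometry (not a step of the claimed method), the lateral extent of the pocket regions 270 follows from D1 and D2: if the swelled portion 245 encroaches equally from both sidewalls, each pocket region spans (D1 - D2)/2. The Python sketch below uses hypothetical dimensions.

```python
# Illustrative sketch, not from the patent: lateral layout of the graded
# junction of FIG. 7, assuming the resist swells uniformly from both
# sidewalls so each pocket region is (D1 - D2) / 2 wide.

def pocket_width(d1_nm: float, d2_nm: float) -> float:
    """Lateral width of each pocket region 270 under uniform swelling."""
    if not d2_nm < d1_nm:
        raise ValueError("swelling must reduce the opening: D2 < D1")
    return (d1_nm - d2_nm) / 2.0


def region_at(x_nm: float, d1_nm: float, d2_nm: float) -> str:
    """Classify a lateral position x, measured from the center of the
    original opening, after both doping processes."""
    if abs(x_nm) <= d2_nm / 2.0:
        return "second dopant region 265 (depth X2)"
    if abs(x_nm) <= d1_nm / 2.0:
        return "pocket region 270 (retains first dopant, depth X1)"
    return "masked substrate"


# Hypothetical example: a 200 nm opening swelled to 140 nm leaves two
# 30 nm wide pocket regions flanking the second dopant region.
assert pocket_width(200.0, 140.0) == 30.0
assert region_at(80.0, 200.0, 140.0).startswith("pocket region")
```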
One particular advantage of the present invention includes the ability to define the pocket regions 270 by implanting ions at a normal angle of incidence with respect to the surface 210 of the semiconductor substrate 205. By carrying out the ion implantation step at a normal angle of incidence, the first junction profile 240 of FIG. 5 can be precisely formed in the semiconductor substrate 205 relative to the second junction profile 275 of the second dopant region 265, as illustrated in FIG. 7. Additionally, the junction depth X1 of the pocket regions 270 can be precisely controlled. Those skilled in the art will recognize that a particular advantage exists in the present invention as compared to the large angles of incidence used by prior art methods for the formation of pocket implant regions.

Those skilled in the art will also recognize that the first junction profile 240 of FIG. 5, in combination with the second junction profile 275 of FIG. 7, can be characterized as a graded junction within the semiconductor substrate 205. It is also apparent from the foregoing description that additional processing of the first masking pattern 215 can be carried out to further decrease the lateral dimension of the first opening 220, followed by the formation of additional doped regions within the semiconductor substrate 205. Depending upon the particular junction depth of the additional doped regions, various graded junction profiles can be formed by the method of the present invention. In addition, the swelling process and subsequent doping actions can be repeated multiple times to form various other types of junctions. Furthermore, the first doped region 225 and the second doped region 265 may have differing conductivity types, as in the case of an exemplary buried bitline, or the first doped region and the second doped region may have substantially the same conductivity type with varying dopant concentrations, as in the case of an exemplary LDD transistor. Accordingly, all such variations in graded junction profiles fall within the scope of the present invention. Also, it should be noted that in FIGS. 5-7, no oxide is illustrated overlying the substrate and underlying the patterned resist; however, it should be understood that such an oxide or ONO layer may exist thereat, and the implantation discussed herein can be performed through such a layer; such implementations are contemplated as falling within the scope of the present invention.

According to yet another exemplary aspect of the present invention, as illustrated in FIG. 8, a dielectric layer 280 is formed over the surface 210 of the semiconductor substrate 205. One exemplary benefit of forming the dielectric layer 280 is to generally provide resistance to thermal oxidation of the semiconductor substrate 205. The dielectric layer 280, for example, comprises a composite dielectric layer, such as silicon dioxide and silicon nitride. The dielectric layer can furthermore comprise an oxide-nitride (ON) layer, or an oxide-nitride-oxide (ONO) layer.
The dielectric layer 280 is formed, for example, prior to forming the first masking pattern 215.

In accordance with still another aspect of the present invention, one or more portions 285 of the dielectric layer 280 which overlie the second dopant region 265 are removed, thereby exposing the semiconductor substrate 205, as illustrated in FIG. 9. For example, the dielectric layer 280 is anisotropically etched using the first masking pattern 215 as an etching mask. The anisotropic etching process selectively removes the material of the dielectric layer 280 while not substantially etching the surface 210 of the semiconductor substrate 205. Those skilled in the art will recognize that, depending upon the particular material forming the dielectric layer 280, various etching methods can be used to anisotropically etch the dielectric layer. For example, where the dielectric layer 280 comprises a layer of silicon nitride, fluorine-based etching chemistry can be used in a reactive-ion-etching (RIE) apparatus. Accordingly, where the dielectric layer 280 is a composite material, such as ONO, sequential silicon oxide and silicon nitride etching processes can be used to anisotropically etch the dielectric layer. In accordance with an alternative aspect of the present invention, the dielectric layer 280 overlying the second dopant region 265 is not removed, and the implantation occurs through the dielectric layer 280. In such a case, the processing discussed below in conjunction with FIG. 10 is not performed.

Referring now to FIG. 10, another exemplary aspect of the invention illustrates removing the first masking pattern 215 after etching the dielectric layer 280, and performing an oxidation process to form a bit-line oxide region 290. The bit-line oxide region 290 generally overlies the second dopant region 265 (e.g., a buried bitline region). The bit-line oxide region 290 is formed, for example, by thermally oxidizing the semiconductor substrate 205 using the dielectric layer 280 as an oxidation mask. Because the dielectric layer 280 is generally resistant to thermal oxidation, the portions of the surface 210 of the semiconductor substrate 205 underlying the dielectric layer 280 are not oxidized.

According to another aspect of the present invention, further processing steps can be carried out, including the formation of a control gate, electrical contacts, and other components, as will be understood by one of ordinary skill in the art, to form a complete EEPROM memory cell. Those skilled in the art will appreciate that various structures can be formed by the method of the present invention. For example, LDD regions can be formed in a MOS transistor following substantially the same procedures described above. Additionally, other substrate structures, such as buried resistors and the like, can also be formed by the method of the present invention.

The example of FIGS. 5-10 illustrates employing the method 100 of FIG. 4 to form a graded junction in a buried bit line structure, wherein regions 260 form pocket type implants, and region 265 comprises the buried bit line. The method 100 of FIG. 4 may also be employed to form LDD type graded junctions, wherein a region 260 may comprise a lightly doped source/drain region, and region 265 comprises a source/drain region.
In such an example, multiple resist mask openings could be formed if a symmetric LDD device were desired, and successive doping of the same dopant conductivity type (but different doses) would be employed in conjunction with the resist swelling to form graded LDD regions in the substrate. Subsequently, the resist mask could be removed, and gate oxide and polysilicon layers could be formed or deposited and patterned to form an LDD transistor over the graded LDD regions in the substrate.

Although the invention has been shown and described with respect to certain aspects, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In particular regard to the various functions performed by the above described components (systems, devices, assemblies, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., one that is functionally equivalent), even though not structurally equivalent to the disclosed structure that performs the function in the herein illustrated exemplary aspects of the invention. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several aspects, such feature may be combined with one or more other features of the other aspects as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising." |
A mobile communication device is disclosed comprising a touch sensitive display, and a processor to display, in a first window of the touch sensitive display, first information to a user, display, in a second, different window of the touch sensitive display, enlarged, i.e. zoomed, information based on a determined first position of input on the touch sensitive display, the displayed enlarged information corresponding to a first portion of the displayed first information displayed in the first window and including at least one selectable icon, the second window, which acts as a magnification tool, is displayed simultaneously with the first window, and wherein the second window overlays or covers a portion of the first information, and detect, via the second window, selection of second information, from the displayed enlarged information, based on a determined second position of input within the second window. A corresponding method is also disclosed. |
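The interaction the abstract describes implies a coordinate mapping: a touch inside the magnifying second window must be translated back to a position in the first window's content so that the corresponding selectable icon can be identified. The following Python sketch is purely illustrative and is not taken from the patent; the window geometry, zoom factor, and all names are assumptions.

```python
# Hedged sketch of the hit-testing such a magnifier window implies: the
# second window shows a zoomed portion of the first window centered on the
# first touch position; a second touch inside it is mapped back to content
# coordinates and tested against icon bounds. All values are hypothetical.

from dataclasses import dataclass


@dataclass
class ZoomWindow:
    center_x: float   # first input position, in first-window content coords
    center_y: float
    width: float      # on-screen size of the second window
    height: float
    zoom: float       # magnification factor, e.g., 2.0

    def to_content(self, touch_x: float, touch_y: float):
        """Map a touch at (touch_x, touch_y), relative to the second
        window's top-left corner, back to content coordinates."""
        return (
            self.center_x + (touch_x - self.width / 2.0) / self.zoom,
            self.center_y + (touch_y - self.height / 2.0) / self.zoom,
        )


def hit_test(icons, x, y):
    """Return the name of the icon whose (left, top, right, bottom)
    bounds contain the point (x, y), or None."""
    for name, (left, top, right, bottom) in icons.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None


# A first touch at content point (100, 200) opens a 160x160 window at 2x
# zoom; a second touch 20 px right of the window's center maps to a point
# 10 content px right of (100, 200), selecting the icon located there.
window = ZoomWindow(center_x=100, center_y=200, width=160, height=160, zoom=2.0)
icons = {"phone_book": (105.0, 195.0, 125.0, 215.0)}
assert hit_test(icons, *window.to_content(100, 80)) == "phone_book"
```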
1. A mobile communication device, comprising:
a touch sensitive display; and
a processor to:
display, in a first window of the touch sensitive display, first information to a user,
display, in a second, different window of the touch sensitive display, enlarged information based on a determined first position of input on the touch sensitive display, the displayed enlarged information corresponding to a first portion of the displayed first information displayed in the first window and including at least one selectable icon, the second window being displayed simultaneously with the first window, and wherein the second window overlays or covers a portion of the first information, and
detect, via the second window, selection of second information, from the displayed enlarged information, based on a determined second position of input within the second window.
2. The mobile communication device of claim 1, wherein the displayed first information includes a plurality of selectable information.
3. The mobile communication device of claim 1, wherein the at least one selectable icon further comprises a plurality of selectable icons.
4. The mobile communication device of claim 1, wherein the processor is further configured to display, in the first window, third information different from the at least one selectable icon, based on the selection of second information via the second window.
5. The mobile communication device of claim 1, wherein the processor is further configured to display, in a third window, third information different from the at least one selectable icon, based on the selection of second information via the second window.
6. The mobile communication device of claim 1, wherein the displayed enlarged information includes information relating to at least one of an incoming call, an outgoing call, a game, a phone book, a text message, a current date, a current time, and a volume setting of the mobile communication device.
7. The mobile communication device of claim 1, wherein the processor is further configured to:
remove the second window, responsive to detecting the selection of the second information; and
simulate the selection of the second information in the first window.
8. The mobile communication device of claim 1, wherein the displayed enlarged information further includes non-selectable information.
9. The mobile communication device of claim 1, wherein the processor is further configured to display, in the first window, third selectable information different from the first selectable information, based on the selection of the second selectable information.
10. The mobile communication device of claim 9, wherein:
the at least one selectable icon represents a computer application capable of executing on the mobile communication device;
selection of the at least one selectable icon causes the computer application to be executed; and
the third selectable information includes information corresponding to the computer application.
11. The mobile communication device of claim 1, wherein the displayed enlarged information does not include a keyboard interface.
12. A method comprising:
displaying, in a first window of a touch sensitive display, first information to a user;
determining a first position of input on the touch sensitive display;
displaying, in a second, different window of the touch sensitive display, enlarged information based on the determined first position of input on the touch sensitive display, the displayed enlarged information corresponding to a first portion of the displayed first information displayed in the first window and
including at least one selectable icon, the second window being displayed simultaneously with the first window, and wherein the second window overlays or covers a portion of the first information;
determining a second position of input within the second window; and
detecting, via the second window, selection of second information, from the displayed enlarged information, based on the determined second position of input within the second window.
13. The method of claim 12, wherein the at least one selectable icon comprises a plurality of selectable icons.
14. The method of claim 12, further comprising displaying, in the first window, third information different from the at least one selectable icon, based on the selection of second information via the second window.
15. The method of claim 12, further comprising displaying, in a third window, third information different from the at least one selectable icon, based on the selection of second information via the second window.
16. The method of claim 12, wherein the displayed enlarged information includes information relating to at least one of an incoming call, an outgoing call, a game, a phone book, a text message, a current date, a current time, and a volume setting of the mobile communication device.
17. The method of claim 12, further comprising:
removing the second window, responsive to detecting the selection of the second information; and
simulating the selection of the second information in the first window.
18. The method of claim 12, wherein the displayed enlarged information further includes non-selectable information.
19. The method of claim 12, further comprising displaying, in the first window, third selectable information different from the first selectable information, based on the selection of the second selectable information.
20. The method of claim 12, wherein the displayed enlarged information does not include a keyboard interface. |
TECHNICAL FIELD OF THE INVENTION
Implementations described herein relate generally to input devices, and more particularly, to input devices that can be used in handheld devices.
DESCRIPTION OF RELATED ART
Devices, such as mobile communication devices, usually include a display and keys to enter information into the device. Generally, both the display and keys contained in mobile devices are small. The restricted size of the display and keys inhibits the speed at which an operator may interact with the mobile device, as entering information via the keys and/or interacting with the display must be done in a very slow and precise manner.
SUMMARY
According to one aspect, a mobile communication device is provided. The mobile communication device may comprise a touch sensitive display and logic configured to control the touch sensitive display to display information to a user; provide a window of enlarged information via the touch sensitive display based on a determined position of input on the touch sensitive display; and receive a selection via the window of enlarged information based on a determined position of input within the provided window.
Additionally, the displayed information includes an interface screen with a plurality of selections.
Additionally, the enlarged information includes at least one of the plurality of selections.
Additionally, the determined position of input within the provided window is determined by the position of a finger of a user or a stylus on the touch sensitive display.
Additionally, the determined position of input within the provided window is determined by the position where the user lifts a finger or the stylus off the touch sensitive display.
According to another aspect, a method may be provided. The method may comprise displaying a plurality of groups of characters via a touch sensitive display; determining a position of input on the touch sensitive display; displaying an enlarged window of one of the groups of characters based on the determined position of input; and selecting one of the characters from the group of characters within the enlarged window based on at least one of a determined position of input within the enlarged window or a determined position of input outside the enlarged window.
Additionally, each of the groups of characters includes a plurality of letters.
Additionally, the displaying a plurality of groups of characters comprises displaying a "QWERTY" type of keyboard by displaying the plurality of letters in the groups of characters.
Additionally, the displayed enlarged window of one of the groups of characters includes a central letter surrounded by the other letters in the selected group.
Additionally, the selecting one of the characters from the group of characters within the enlarged window based on a determined position of input is determined by determining a position where a user lifted a finger off the surface of the touch sensitive display.
According to another aspect, a method may be provided.
The method may comprise displaying an interface screen via a touch sensitive display; determining a position of input on the touch sensitive display; displaying a cursor on the interface screen based on the determined position of input on the touch sensitive display; and selecting a choice displayed on the interface screen based on a position of the cursor.
Additionally, the displayed cursor on the interface screen is displayed on the touch sensitive display at a position at or offset from the determined position of input.
Additionally, the determined position of input is determined by sensing a position of a user's finger or a stylus on the touch sensitive display.
Additionally, the selected choice displayed on the interface screen based on a position of the cursor is selected when a user lifts a finger off the surface of the touch sensitive display.
Additionally, the offset position of the displayed cursor may be changed based on a user defined preference.
According to yet another aspect, a mobile communication device is provided. The mobile communication device may comprise a plurality of keys; a display; and logic configured to: control the display to display groups of characters, wherein a position of the displayed groups of characters corresponds to physical locations of the plurality of keys; select one of the displayed groups of characters based on a first key input; and select one character from the selected displayed group of characters based on a second key input.
Additionally, the logic may be further configured to control the display to display the selected group of characters in an enlarged manner.
Additionally, a displayed position of enlarged characters within a group corresponds to physical locations of the plurality of keys.
Additionally, the displayed groups of characters form a "QWERTY" type of keyboard.
Additionally, at least some of the displayed groups of characters include nine letters.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate a number of embodiments and, together with the description, explain the embodiments. In the drawings,
Fig. 1 is a diagram of an exemplary implementation of a mobile terminal;
Fig. 2 illustrates an exemplary functional diagram of a mobile terminal;
Fig. 3 illustrates an exemplary functional diagram of the user interface logic of Fig. 2;
Fig. 4 is a flowchart illustrating an exemplary process;
Fig. 5 illustrates an example of the process of Fig. 4;
Figs. 6A-6B illustrate other examples of the process of Fig. 4;
Fig. 7 is a flowchart illustrating another exemplary process;
Fig. 8 illustrates an example of the process of Fig. 7;
Fig. 9 is a flowchart illustrating another exemplary process; and
Figs. 10A-10B illustrate examples of the process of Fig. 9.
DETAILED DESCRIPTION OF THE INVENTION
The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the embodiments.
Implementations of the invention can be used to improve a user interface, such as a display and keypad, on a device (e.g., a communications device). Implementations described herein may change the appearance and/or configuration of the user interface using logic, such as machine-readable instructions executed by a processing device.
In some instances, the changing of the appearance and/or configuration of the user interface may be application controlled. That is, when a particular application is launched or being executed or a function associated with a particular application is being executed, the user interface may change based on the particular application. Implementations of the user interface may receive user inputs via touch, e.g., via a user's finger, via input devices, e.g., a stylus, via speech, and/or via other techniques and/or devices.
Exemplary implementations will be described in the context of a mobile terminal. It should be understood that a mobile terminal is an example of a device that can employ a user interface consistent with the principles of the embodiments and should not be construed as limiting the types or sizes of devices or applications that can employ the user interface described herein. For example, user interfaces described herein may be used on desktop communication devices, household appliances, such as microwave ovens and/or appliance remote controls, automobile radio faceplates, industrial devices, such as testing equipment, etc.
Fig. 1 is a diagram of an exemplary implementation of a mobile terminal consistent with the principles of the embodiments. Mobile terminal 100 (hereinafter terminal 100) may be a mobile communication device. As used herein, a "mobile communication device" and/or "mobile terminal" may include a radiotelephone; a personal communications system (PCS) terminal that may combine a cellular radiotelephone with data processing, a facsimile, and data communications capabilities; a personal digital assistant (PDA) that can include a radiotelephone, pager, Internet/intranet access, web browser, organizer, calendar, and/or global positioning system (GPS) receiver; and a laptop and/or palmtop receiver or other appliance that includes a radiotelephone transceiver.
Terminal 100 may include housing 101, keypad 110 containing keys 112A-L, control keys 120, speaker 130, display 140, and microphones 150 and 150A. Housing 101 may include a structure configured to hold devices and components used in terminal 100. For example, housing 101 may be formed from plastic, metal, or composite and may be configured to support keypad 110, control keys 120, speaker 130, display 140 and microphones 150 and/or 150A.
Keypad 110 may include a plurality of keys 112A-L (collectively keys 112) that may be activated by a user to input information into terminal 100. Implementations of keys 112 may have key information associated therewith, such as numbers, letters, symbols, etc. A user may interact with keys 112 to input key information into terminal 100. For example, a user may operate keys 112 to enter digits, commands, and/or text, into terminal 100.
Control keys 120 may include buttons that permit a user to interact with terminal 100 to cause terminal 100 to perform an action, such as to display a text message via display 140, raise or lower a volume setting for speaker 130, etc.
Speaker 130 may include a device that provides audible information to a user of terminal 100. Speaker 130 may be located in an upper portion of terminal 100 and may function as an ear piece when a user is engaged in a communication session using terminal 100. Speaker 130 may also function as an output device for music and/or audio information associated with games and/or video images played on terminal 100.
Display 140 may include a device that provides visual information to a user.
For example, display 140 may provide information regarding incoming or outgoing calls, text messages, games, phone books, the current date/time, volume settings, etc., to a user of terminal 100. Implementations of display 140 may be implemented as black and white or color displays, such as liquid crystal displays (LCDs). Display 140 may also include devices and/or logic that can be used to display images to a user of terminal 100 and to receive user inputs in association with the displayed images. For example, display 140 may be configured as a touch sensitive device that may display an image of a keyboard. Implementations of display 140 may be configured to receive a user input when the user interacts with the displayed image. For example, the user may provide an input to display 140 directly, such as via the user's finger, or via other devices, such as a stylus. User inputs received via display 140 may be processed by components or devices operating in terminal 100.
Microphones 150 and/or 150A may each include a device that converts speech or other acoustic signals into electrical signals for use by terminal 100. Microphone 150 may be located proximate to a lower side of terminal 100 and may be configured to convert spoken words or phrases into electrical signals for use by terminal 100. Microphone 150A may be located proximate to speaker 130 and may be configured to receive acoustic signals proximate to a user's ear while the user is engaged in a communications session using terminal 100. For example, microphone 150A may be configured to receive background noise as an input signal for performing background noise cancellation using processing logic in terminal 100.
Fig. 2 illustrates an exemplary functional diagram of mobile terminal 100 consistent with the principles of the embodiments. As shown in Fig. 2, terminal 100 may include processing logic 210, storage 220, user interface logic 230, communication interface 240, antenna assembly 250, and power supply 260.
Processing logic 210 may include a processor, microprocessor, an application specific integrated circuit (ASIC), field programmable gate array (FPGA), or the like. Processing logic 210 may include data structures or software programs to control operation of terminal 100 and its components. Implementations of terminal 100 may use an individual processing logic component or multiple processing logic components, such as processing logic components operating in parallel. Storage 220 may include a random access memory (RAM), a read only memory (ROM), a magnetic or optical disk and its corresponding drive, and/or another type of memory to store data and instructions that may be used by processing logic 210.
User interface logic 230 may include mechanisms, such as hardware and/or software, for inputting information to terminal 100 and/or for outputting information from terminal 100. User interface logic 230 may include mechanisms, such as hardware and/or software, used to configure an appearance of display 140 and/or to receive user inputs via display 140 and keypad 110. For example, user interface logic 230 may control display 140 to display a keyboard of characters such as a "QWERTY" type of keyboard, or another type of keyboard. User interface logic 230 may also include hardware or software to accept user inputs to make information available to a user of terminal 100.
For example, a keyboard may be displayed via display 140 and a user may use a finger or stylus to exert pressure on the display 140 indicating selection of a displayed key within the keyboard. Further examples of input and/or output mechanisms associated with user interface logic 230 may include a speaker (e.g., speaker 130) to receive electrical signals and output audio signals, a microphone (e.g., microphone 150 or 150A) to receive audio signals and output electrical signals, buttons (e.g., control keys 120) to permit data and control commands to be input into terminal 100, and/or a display (e.g., display 140) to output visual information.
Communication interface 240 may include, for example, a transmitter that may convert baseband signals from processing logic 210 to radio frequency (RF) signals and/or a receiver that may convert RF signals to baseband signals. Alternatively, communication interface 240 may include a transceiver to perform functions of both a transmitter and a receiver. Communication interface 240 may connect to antenna assembly 250 for transmission and reception of the RF signals. Antenna assembly 250 may include one or more antennas to transmit and receive RF signals over the air. Antenna assembly 250 may receive RF signals from communication interface 240 and transmit them over the air, and receive RF signals over the air and provide them to communication interface 240.
Power supply 260 may include one or more power supplies that provide power to components of terminal 100. For example, power supply 260 may include one or more batteries and/or connections to receive power from other devices, such as an accessory outlet in an automobile, an external battery, or a wall outlet. Power supply 260 may also include metering logic to provide the user and components of terminal 100 with information about battery charge levels, output levels, power faults, etc.
As will be described in detail below, terminal 100, consistent with the principles of the embodiments, may perform certain operations relating to adaptively configuring display 140 in response to user inputs or in response to instructions associated with processing logic 210. Terminal 100 may perform these operations in response to processing logic 210 executing software instructions of a keypad configuration/programming application contained in a computer-readable medium, such as storage 220. A computer-readable medium may be defined as a physical or logical memory device and/or carrier wave.
The software instructions may be read into storage 220 from another computer-readable medium or from another device via communication interface 240. The software instructions contained in storage 220 may cause processing logic 210 to perform processes that will be described later. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes consistent with the principles of the embodiments. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
Fig. 3 illustrates an exemplary functional diagram of the user interface logic 230 of Fig. 2. User interface logic 230 may include control logic 310, display logic 320, position sensing logic 330 and zoom window logic 340.
Control logic 310 may include logic that controls the operation of display logic 320, logic operating with display logic 320, and/or processes involved with display logic 320.
Control logic 310 may be implemented as standalone logic or as part of processing logic 210. Moreover, control logic 310 may be implemented in hardware or software. Control logic 310 may receive inputs via keys 112 and may receive signals from processing logic 210 to provide images to be displayed via display 140 and/or send signals to display logic 320.
Display logic 320 may include logic to present information to a user of terminal 100. Display logic 320 may include processing logic to interpret signals and instructions and a display device (such as display 140) having a display area to provide information to a user of terminal 100. For example, display logic 320 may receive image signals from control logic 310, such as a user interface screen displaying a plurality of choices to be displayed. Display logic 320 may also receive signals from position sensing logic 330 and provide a cursor on display 140 based on the received position signals. Display logic 320 may also determine selections of displayed information by comparing locations of displayed information with input position signals received from position sensing logic 330 relating to a position on display 140 that may be touched by a user.
Implementations of display logic 320 may also include mediums that change properties as light passes through the mediums, or display logic 320 may include mediums that reflect light. For example, one implementation of display logic 320 may include a liquid crystal display (LCD) technology that includes, for example, biphenyl or another stable liquid crystal material. LCD-based implementations of display logic 320 may include thin film transistor (TFT) LCDs that may include a liquid crystal structure placed between two glass plates that can be charged to cause changes in the liquid crystal structure so as to change color characteristics of light passing through the liquid crystal structure. Implementations employing LCD-based technologies may use back lighting or front lighting to enhance the appearance of images produced by display logic 320.
Display logic 320 may also include logic to provide illumination to an upper surface of a display device or a lower surface of a display device. For example, display logic 320 may be used to provide front lighting to an upper surface of a display device (such as display 140) that faces a user. Front lighting may enhance the appearance of a display device by making information on the display device more visible in high ambient lighting environments, such as viewing a display device outdoors. Display logic 320 may also be used to provide backlighting to a lower surface, or rear surface, of a display device, such as a surface of a display device that faces away from a user. Backlighting may be used with LCD-based implementations of a display device to make images brighter and to enhance the contrast of displayed images. Implementations of display logic 320 may employ light emitting diodes (LEDs) or other types of devices to illuminate portions of a display device.
Position sensing logic 330 may include logic that senses the position of an object. For example, position sensing logic 330 may be configured to determine the location on display 140 where a user places his/her finger regardless of how much pressure the user exerts on display 140. In one implementation, position sensing logic 330 may include a transparent or semitransparent film that can be placed over display 140.
The film may be adapted to change an output, such as a voltage or current, as a function of an amount of pressure exerted on the film and/or based on a location where pressure is exerted on the film. For example, assume that a user presses on the film in an upper left hand corner of the film. The film may produce an output that represents the location at which the pressure was detected. Implementations of position sensing logic 330 may use thermal, pressure, vibration, location, etc., sensing techniques to identify and receive inputs. Position sensing logic 330 may also use capacitive, resistive, inductive, optic, etc., based sensing devices to identify the presence of an object and to receive an input via the object. Position sensing logic 330 may send a signal to display logic 320 indicating the determined position of input, for example.
Zoom window logic 340 may include hardware and/or software to provide a window of enlarged information via display 140. For example, zoom window logic 340 may receive a signal from position sensing logic 330 that identifies or determines a place on display 140 where an input may be received. Zoom window logic 340 may also receive signals from display logic 320 related to an image or information currently being displayed via display 140. Zoom window logic 340 may then use the received position and image signals to provide a window for enlarged images that may be located at the position of input on display 140 that may have been touched by a user. For example, if a displayed image on display 140 is a menu of choices, zoom window logic 340 may provide a zoom window that contains an enlarged or magnified view of the choices in the menu.
Fig. 4 is a flowchart of exemplary processing consistent with the principles described herein. Process 400 may begin when information is displayed to a user of terminal 100 (block 410). For example, terminal 100 may be running an application, such as an email or text messaging application, where processing logic 210 and/or user interface logic 230 may generate a user interface screen that includes information and/or menus to be displayed via display 140 in order to allow a user to create and send an email or text message.
While displaying a user interface and/or information, terminal 100 may sense and determine a position of input (block 420). For example, a user may press down with his/her finger on a specific location on the surface of display 140, which may be determined by position sensing logic 330. As described above, position sensing logic 330 may determine the exact position on the surface of display 140 that is being contacted. Position sensing logic 330 may then send a signal to display logic 320 and zoom window logic 340 indicating the determined position of input. Based on the determined input position, zoom window logic 340 may provide a window of enlarged information based on the determined position (block 430). An example of providing a zoom window of enlarged information (block 430) is shown in Fig. 5.
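To make the window-provision step (block 430) concrete, the following Python sketch shows one way logic such as zoom window logic 340 might place a magnified window near a touch point. The window size, magnification factor, vertical offset, and clamping policy are illustrative assumptions, not details taken from the description or figures.

```python
# Hypothetical sketch of zoom-window placement (block 430); sizes, the
# magnification factor, and the clamping policy are illustrative assumptions.

def compute_zoom_window(touch_x, touch_y, screen_w, screen_h,
                        win_w=120, win_h=90, magnification=3.0):
    """Return (source_rect, window_rect) for a zoom window near a touch point.

    source_rect is the region of the original screen to magnify, centered on
    the touch point; window_rect is where the enlarged copy is drawn, offset
    above the finger so it is not occluded, and clamped to the display.
    """
    # Region of the original image that will appear magnified in the window.
    src_w, src_h = win_w / magnification, win_h / magnification
    src_x = min(max(touch_x - src_w / 2, 0), screen_w - src_w)
    src_y = min(max(touch_y - src_h / 2, 0), screen_h - src_h)

    # Draw the window slightly above the touch point so the finger does not
    # cover it, then clamp it so it stays fully on the display.
    win_x = min(max(touch_x - win_w / 2, 0), screen_w - win_w)
    win_y = min(max(touch_y - win_h - 20, 0), screen_h - win_h)

    return (src_x, src_y, src_w, src_h), (win_x, win_y, win_w, win_h)


if __name__ == "__main__":
    src, win = compute_zoom_window(60, 200, 240, 320)
    print("magnify", src, "into", win)
```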
As shown in Fig. 5, for example, a user of terminal 100 may be presented with an interface screen via display 140. When a user touches display 140, the location or position of input, shown as circle 510, is determined by position sensing logic 330. The determined input position signal may then be sent to zoom window logic 340 in order to provide zoom window 520. For example, the information contained in zoom window 520 is enlarged information (e.g., magnified in size by two to three or more times from the originally displayed interface screen) that is in close proximity (e.g., at or slightly offset) to the determined input position determined by position sensing logic 330. A user preference setting may allow zoom window 520 to be displayed to the left of the determined input position for right-handed users and to the right of the determined input position for left-handed users. Specifically, in this example, zoom window 520 contains some text from the originally displayed interface screen and three icons indicating storage areas, where the position of input, shown as circle 510, is directly above (covering) one of the storage area icons. It should be understood that circle 510 is shown for illustrative purposes only and may not be shown on display 140.
Continuing with this example, an input position may continue to be monitored and determined within the zoom window 520. For example, position sensing logic 330 may continue to monitor and determine the (input) position of a user's finger while it moves across the surface of display 140. A user may then move his/her finger within zoom window 520 to directly cover a desired input selection, such as an icon or choice within a displayed menu, for example. Once a user's finger is directly over the input selection, by lifting his/her finger up off the surface of display 140, an input selection may be determined by using the monitored lift off point within the zoom window 520 (block 440). Alternatively, an input selection may be determined when the user presses more firmly on a particular part of zoom window 520 or taps the zoom window.
For example, zoom window logic 340 may use input position signals received from position sensing logic 330 to determine what information (desired input selection) was displayed in the zoom window 520 that directly corresponded to the monitored lift off point within zoom window 520. In this example, if the user lifted his/her finger off the surface of display 140 over the first storage icon (shown as circle 510), this first icon may be received as a desired input selection by terminal 100 (block 440). In another example of receiving a desired input selection, a matching point (selection) on the original screen may be calculated using knowledge of the position and scaling of zoom window 520, without referring to information displayed in zoom window 520 (block 440). Additionally, after receiving the input selection based on the determined position of lift off, terminal 100 may remove the zoom window from display 140 and simulate the input selection on the original interface screen, for example.
In other examples, the input selection may be received (block 440) with a determined input position that is outside the zoom window 520. For example, if zoom window 520 displays three icons and a user moves his/her finger horizontally to the right and beyond zoom window 520, display logic 320 may determine that the rightmost icon is the desired selection (block 440). In another example, if a user moves his/her finger outside the zoom window 520, a new zoom window may be created that contains information (from the original display screen) based on the user's new finger position (block 430). In this example, with a moving, or dragged, zoom window, an input selection may be received when a user taps a desired input selection within the zoom window (block 440).
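The "matching point" calculation mentioned above lends itself to a short sketch: knowing only the zoom window's placement and scaling, a lift-off point inside the window maps back to a unique point on the original interface screen, which can then be hit-tested as if it had been touched directly. The rectangle representation below follows the earlier placement sketch; it is an assumed reading of the description, not the patent's literal implementation.

```python
def window_point_to_screen(lift_x, lift_y, source_rect, window_rect):
    """Map a lift-off point inside the zoom window back to the original
    screen coordinates it magnifies (block 440), using only the window's
    position and scaling, not the window's rendered contents."""
    src_x, src_y, src_w, src_h = source_rect
    win_x, win_y, win_w, win_h = window_rect

    # Normalized position of the lift-off point within the window...
    u = (lift_x - win_x) / win_w
    v = (lift_y - win_y) / win_h

    # ...is the same normalized position within the magnified source region.
    return src_x + u * src_w, src_y + v * src_h


if __name__ == "__main__":
    src = (40.0, 185.0, 40.0, 30.0)   # region being magnified
    win = (0.0, 90.0, 120.0, 90.0)    # where it is drawn, 3x enlarged
    print(window_point_to_screen(60, 135, src, win))  # -> (60.0, 200.0)
```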
In other examples, a stylus or input pen may also be used (in place of or in addition to) a user's finger to select inputs from displayed zoom windows. For example, a user's finger may be used in blocks 420-430 and an input selection may be made by tapping a stylus or pen in a provided zoom window (block 440).
In still further examples, additional icons and/or information may be added to zoom window 520. For example, additional icons may be added around the edges of zoom window 520. Some examples of additional icons may be "page-up" and "page-down" icons. A user may select one of these icons in any manner as described above, for example, touching on, pausing on, or lifting off the icon. In other examples, a "page-up" icon may be selected when the position of a user's finger is determined (by position sensing logic 330) to leave the top of zoom window 520. Similarly, a "page-down" icon may be selected when the position of a user's finger is determined to leave the bottom of zoom window 520.
Another example of process 400 may be described with reference to Figs. 6A-6B. In this example, terminal 100 includes a display 140 that may provide a number of groups of characters 610-01 to 610-11 (collectively referred to as character groups 610). In this example, when displayed in the manner shown, character groups 610 form upper case and lower case "QWERTY" type keyboards (block 410). This displayed user interface that contains character groups 610 may be presented to a user while terminal 100 is running an email or text messaging application, for example. In order to select a character, a user may touch a character group 610 currently displayed via display 140. When the user's finger contacts the surface of display 140, the position of input is determined by position sensing logic 330 (block 420). Position sensing logic 330 may send a signal indicating the input position to zoom window logic 340, and a window containing enlarged information may be provided (block 430). As shown in Fig. 6B, for example, a user may have touched character group 610-09 (as shown in Fig. 6A), and zoom window 620 may be provided by zoom window logic 340 based on the signal from position sensing logic 330 indicating that the position of input corresponded to character group 610-09. In this example, zoom window 620 contains enlarged characters (r, t, y, f, g, h, c, v and b) that are contained in character group 610-09.
After zoom window 620 is provided, a user may move his/her finger over a desired selection displayed within the zoom window 620. For example, position sensing logic 330 may continue to monitor and determine the (input) position of a user's finger while it moves across the surface of display 140. A user may then move his/her finger within zoom window 620 to directly cover a desired input selection, such as one of characters r, t, y, f, g, h, c, v and b, for example. Once a user's finger is directly over the desired character, by lifting his/her finger up off the surface of display 140, an input selection may be determined by using the monitored lift off point within the zoom window 620 (block 440).
For example, zoom window logic 340 may use input position signals received from position sensing logic 330 to determine what character (desired input selection) was displayed in the zoom window 620 that directly corresponded to the monitored lift off point within zoom window 620.
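One plausible realization of this character lookup is a simple grid hit-test: the nine enlarged characters of zoom window 620 occupy a 3x3 grid, so the row and column of the lift-off point select the character. The layout and names in the sketch below are assumptions for illustration, not details from the figures.

```python
# Minimal sketch of the character lookup for a 3x3 zoom window such as
# zoom window 620; the grid layout is an assumption for illustration.

GROUP_610_09 = [["r", "t", "y"],
                ["f", "g", "h"],
                ["c", "v", "b"]]

def character_at_lift_off(lift_x, lift_y, window_rect, grid=GROUP_610_09):
    """Return the enlarged character under the lift-off point, or None if
    the point falls outside the zoom window."""
    win_x, win_y, win_w, win_h = window_rect
    if not (win_x <= lift_x < win_x + win_w and win_y <= lift_y < win_y + win_h):
        return None
    col = int((lift_x - win_x) * 3 / win_w)   # 0..2
    row = int((lift_y - win_y) * 3 / win_h)   # 0..2
    return grid[row][col]

# Lifting off over the middle of the top row selects "t".
assert character_at_lift_off(60, 10, (0, 0, 120, 90)) == "t"
```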
In this example, if the user lifted his/her finger off the surface of display 140 over the "t" character, a "t" may be received as the input selection by terminal 100 (block 440).
In further embodiments, the selection from the zoom window 620 may be determined from a point of lift off which may not be within zoom window 620 (block 440). For example, if zoom window 620 is displayed, a user may select the "t" character by moving his/her finger straight up from the center of zoom window 620 and lift his/her finger off the surface of display 140 at some point directly above the "t" character. In a similar manner, if zoom window 620 is displayed, a user may select the "h" character by moving his/her finger horizontally from the center of zoom window 620 and lift his/her finger off the surface of display 140 at some point to the right of the "h" character. In this manner, zoom window 620 may divide display 140 into angular sections, where each character may be associated with an angular section of display 140. In another example, an input selection may be received when a user's finger leaves the zoom window area. It should be understood that the number of characters or letters shown in character groups 610 is exemplary only. More or fewer characters may be contained and displayed in a character group. In addition, other shapes, such as rectangular or triangular shapes, may be used to segment individual characters in character groups 610.
In another example, zoom windows may not be displayed and a user may select a character within a character group 610 by moving his/her finger in a manner as described above. For example, with the character groups displayed as shown in Fig. 6A, if a user touches character group 610-04 and moves his/her finger horizontally to the left across the surface of display 140, the "A" character may be selected without enacting block 430 (i.e., without providing a zoom window).
In another embodiment, if a user touches a character group on display 140 (blocks 410-420), a zoom window of candidate next words may be provided (block 430). In this example, process 400 may be enacted for word prediction purposes, where the most frequently used words that start with characters within the selected group may be displayed as input selections. In another example, if a user selects character group 610-10, and then selects the character "k," in a manner as described above, another zoom window of frequently used words that begin with "k" may be provided as input selections.
In still further embodiments, after determining an input selection in block 440, process 400 may continue with block 410 if the input selected in block 440 requires or produces further selections of choices. For example, a user may be presented with a number of cascaded interface screens in order to perform an operation, where process 400 may perform processing associated with blocks 410-440 for each of the number of interface screens, and zoom windows may be provided as appropriate. It should be understood that with cascaded interface screens, methods of input selection (block 440) may also include pausing on an input selection and determining an input selection immediately upon detecting the presence of a user's finger on a selection.
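Returning to the angular-section selection described a few paragraphs above: when the lift-off point falls outside zoom window 620, the character can be chosen from the direction of the lift-off point relative to the window's center. The sketch below assumes eight 45-degree sectors around the central character of character group 610-09; the sector layout is an assumed reading of the description, not a detail from the figures.

```python
import math

# Sketch of angular-section selection for lift-off points outside zoom
# window 620; the eight 45-degree sectors are an assumed reading.

# Characters of group 610-09 by direction from the central "g":
# east, northeast, north, northwest, west, southwest, south, southeast.
SECTORS = ["h", "y", "t", "r", "f", "c", "v", "b"]

def character_by_direction(lift_x, lift_y, center_x, center_y):
    """Return the character whose angular section contains the lift-off
    point, relative to the zoom window's center."""
    # Screen y grows downward, so flip the y axis for a conventional angle.
    angle = math.atan2(center_y - lift_y, lift_x - center_x)
    sector = int(round(angle / (math.pi / 4))) % 8  # 0..7, 45-degree bins
    return SECTORS[sector]

# Lifting off straight above the center selects "t"; to the right, "h".
assert character_by_direction(60, 0, 60, 45) == "t"
assert character_by_direction(200, 45, 60, 45) == "h"
```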
An example of multiple iterations of process 400 is the entry of Chinese characters. Using a Wubizixing input method, a user may first select from five root character groups, "left-falling," "right-falling," "horizontal," "vertical" and "hook," where each of these five root character groups may be choices provided on an interface screen (block 410). Once a user touches display 140, selecting information or an icon representing one of the five character groups, a zoom window may be provided that includes further choices to be made (blocks 420-430). For example, if a user selects the "horizontal" character group, a zoom window containing five classifications of horizontal root characters may be displayed. One of the five classifications of horizontal root characters may include a choice of characters that contain one horizontal stroke, for example. If a user selects characters that contain one horizontal stroke (block 440), another user interface screen (or zoom window) may be provided with further information and/or selections to be made, such as an interface window displaying four brush stroke type groups. Continuing with this example, another menu containing further (additional) strokes may be provided based on the previous selections related to the root character selected and one horizontal stroke. In this manner, additional choices (provided via additional interface screens and/or zoom windows) may be provided until a Chinese character may be determined and selected. In this manner, process 400 may be enacted as many times as is necessary (based on the number of selections to be made) in order to allow a user to select a desired character (or other information) from a user interface screen. As described above, the input selection (block 440) may be received using any of the above examples of selection, for example, such as touching or pausing on the input selection, in addition to lifting off the selection.
Fig. 7 is a flowchart of exemplary processing consistent with the principles described herein. Process 700 may begin when information is displayed to a user of terminal 100 (block 710). For example, terminal 100 may be running an application, such as an email or text messaging application, where processing logic 210 and/or user interface logic 230 may generate a user interface screen that includes information and/or menus to be displayed via display 140 in order to allow a user to create and send an email or text message.
While displaying a user interface and/or information, terminal 100 may display and move a cursor using a monitored position of input (block 720). For example, a user may press down with his/her finger on a specific location on the surface of display 140, which may be determined by position sensing logic 330. As described above, position sensing logic 330 may determine the exact position on the surface of display 140 that is being contacted. Position sensing logic 330 may then send a signal to display logic 320 indicating the determined position of input. Based on the determined input position, display logic 320 may display a cursor based on the determined position of input (block 720). An example of displaying a cursor is shown in Fig. 8.
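A minimal sketch of the cursor placement of block 720 follows; the offset vector, the clamping, and the handedness option are illustrative assumptions in the spirit of the user-defined offset preference mentioned below.

```python
# Sketch of offset-cursor placement for process 700 (block 720); the offset
# values and the handedness preference are illustrative assumptions.

def cursor_position(touch_x, touch_y, screen_w, screen_h,
                    right_handed=True, offset=(20, -20)):
    """Place a cursor adjacent to (offset from) the touch point so the
    finger does not occlude it, clamped to the display bounds."""
    dx, dy = offset
    if right_handed:
        dx = -dx  # show the cursor to the left of a right-handed finger
    x = min(max(touch_x + dx, 0), screen_w - 1)
    y = min(max(touch_y + dy, 0), screen_h - 1)
    return x, y

# Touch near the top-right corner of a 240x320 display; cursor stays on screen.
print(cursor_position(230, 10, 240, 320))  # -> (210, 0)
```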
As shown in Fig. 8, for example, a user of terminal 100 may be presented with an interface screen via display 140. When a user touches display 140, the location or position of input, shown as circle 810, is determined by position sensing logic 330. The determined input position signal may then be sent to display logic 320 in order to provide a cursor 820. For example, cursor 820 may be displayed adjacent to (offset from) the input position (810) so that the user may clearly see cursor 820. In this example, a user may be storing a document using an interface screen (similar to Fig. 5) that includes three icons indicating storage areas. It should be understood that circle 810 is shown for illustrative purposes only and may not be shown on display 140.
Continuing with this example, an input position (810) may continue to be monitored and determined. For example, position sensing logic 330 may continue to monitor, determine and follow the (input) position of a user's finger while it moves across the surface of display 140. A user may then move his/her finger such that cursor 820 is directly over a desired input selection, such as an icon or choice within a displayed menu, for example. Once cursor 820 is directly over a desired input selection, a user's finger may be lifted off the surface of display 140 to indicate an input selection (block 730). In this manner, terminal 100 may display a cursor adjacent to a position of input in order to allow a user to select information presented on an interface screen. It should be understood that the offset position of the cursor shown in Fig. 8 is exemplary only and that cursor 820 may be below, left, or right of the position of input (810). In other examples, additional icons and/or information may be provided when a user touches display 140, and these additional icons and/or information may also be selected with cursor 820.
In further examples, process 400 or 700 may be employed for dragging events. For example, a user may use a zoom window or cursor and process 400 or 700 (as previously described) to select a scroll bar on display 140. If a user quickly retouches display 140, this may be received by position sensing logic 330, and then a signal may be sent to display logic 320 to instigate a drag mode. User finger drag events may be received by position sensing logic 330 and mapped by display logic 320 into signals used to drag the scroll bar so as to follow the finger. When the scroll bar is in the desired position (as determined by the user), a user may lift his/her finger off the surface of display 140, where the position of the scroll bar may be received as an input.
Fig. 9 is a flowchart of exemplary processing consistent with the principles described herein. Process 900 may begin when groups of characters are displayed to correspond to keys on terminal 100 (block 910). As shown in Fig. 10A, terminal 100 includes a display 140 that may provide a number of groups of characters 1010-1 to 1010-5 (collectively referred to as character groups 1010). In this example, when displayed in the manner shown, character groups 1010-2 to 1010-5 form a "QWERTY" type keyboard. As described above, this displayed user interface that contains character groups 1010 may be presented to a user while terminal 100 is running an email or text messaging application, for example.
In this exemplary embodiment, a total of nine character groups 1010 may be displayed at any one time, where the displayed locations of each character group 1010 correspond to physical locations of the keys labeled "1" to "9" in keys 112.
In the example shown in Fig. 10A, the displayed location of character group 1010-1 corresponds to the "4" key, the location of character group 1010-2 corresponds to the "6" key, the location of character group 1010-3 corresponds to the "7" key, the location of character group 1010-4 corresponds to the "8" key and the location of character group 1010-5 corresponds to the "9" key. Other numbers of character groups may be displayed in alternative implementations.
In order to select a letter, a user may depress the key associated with the displayed character group 1010 that contains the desired letter. When the user depresses a key, this input may be received as a selection of a displayed character group (block 920). For example, if a user desires to input an "s," the "7" key may be depressed. In response to terminal 100 receiving this input, the selected character group 1010-3 is enlarged (block 930). Continuing with this example, Fig. 10B shows the selected character group (1010-3) displayed as enlarged text within zoom window 1020. In this example, control logic 310 may send a signal indicating that the "7" key has been depressed to zoom window logic 340, and a window containing enlarged letters (associated with the "7" key) may be provided (block 930).
As shown in Fig. 10B, for example, zoom window 1020 contains enlarged characters q, w, e, a, s, d, \, z and x that are contained in character group 1010-3. After a zoom window is provided, a user may depress a key to select a particular letter within zoom window 1020. When a user depresses a key, this may be received as an input selection of a letter within the displayed character group (block 940). In this example, the displayed location of the letters within zoom window 1020 also corresponds to the physical locations of keys 112. For example, the "1" key corresponds with "q," the "2" key corresponds with "w," the "3" key corresponds with "e," the "4" key corresponds with "a," the "5" key corresponds with "s," the "6" key corresponds with "d," the "7" key corresponds with "\," the "8" key corresponds with "z," and the "9" key corresponds with "x." If, for example, a user depresses the "5" key, control logic 310 may determine that the "5" key of keys 112 has been depressed and control display 140 to display an "s."
In other examples, process 900 may be enacted without block 930. For example, a user may depress a first key to select a character group (block 920) and then may depress a second key to select a letter from the selected character group (block 940) without providing a zoom window of the character group 1010 selected in block 920. In further examples, process 900 may continue with block 920 (after block 940) when cascaded character groups may be required.
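The two-keypress selection of process 900 amounts to two table lookups, as the following sketch illustrates; the group contents mirror the Fig. 10A/10B example, while the function name and key-to-index mapping are illustrative assumptions.

```python
# Sketch of the two-keypress letter selection of process 900; group contents
# follow the Fig. 10A/10B example, other details are illustrative assumptions.

CHARACTER_GROUPS = {
    "7": ["q", "w", "e", "a", "s", "d", "\\", "z", "x"],  # group 1010-3
    # ... groups for the other keys "1"-"9" would be defined similarly
}

def select_letter(first_key, second_key, groups=CHARACTER_GROUPS):
    """Return the letter chosen by pressing first_key (selects a character
    group, block 920) and then second_key (selects the letter whose on-screen
    position matches that key's physical position, block 940)."""
    group = groups[first_key]    # e.g. "7" -> character group 1010-3
    index = int(second_key) - 1  # keys "1"-"9" -> positions 0-8
    return group[index]

# Pressing "7" then "5" selects "s", as in the example above.
assert select_letter("7", "5") == "s"
```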
It should be understood that the exemplary embodiments and user interface screens shown and described above are for illustrative purposes and should not be limited to those examples described. Additionally, terminal 100 may control and may automatically reconfigure the appearance of display 140 based on an application being launched by the user of terminal 100, the execution of a function associated with a particular application/device included in terminal 100, or some other application-specific event. For example, if terminal 100 includes a media player and the user begins using the media player, user interface logic 230 may change the appearance of display 140 to provide inputs related to the media player. In another instance, terminal 100 may include a camera function. If the user of terminal 100 presses a shutter button associated with the camera, terminal 100 may change the appearance of display 140 to tailor the display for the camera functionality.
CONCLUSION
Implementations consistent with the principles of the embodiments may facilitate providing a number of user interface systems and methods for user input.
The foregoing description of the preferred embodiments provides illustration and description, but is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the embodiments.
While series of acts have been described with regard to Figs. 4, 7 and 9, the order of the acts may be modified in other implementations consistent with the principles of the embodiments. Further, non-dependent acts may be performed in parallel.
It will be apparent to one of ordinary skill in the art that aspects of the embodiments, as described above, may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement aspects consistent with the principles of the embodiments is not limiting. Thus, the operation and behavior of the aspects were described without reference to the specific software code--it being understood that one of ordinary skill in the art would be able to design software and control hardware to implement the aspects based on the description herein.
Further, certain portions of the invention may be implemented as "logic" that performs one or more functions. This logic may include hardware, such as hardwired logic, an application specific integrated circuit, a field programmable gate array, a processor or a microprocessor, software, or a combination of hardware and software.
It should be emphasized that the term "comprises/comprising" when used in this specification and/or claims is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
No element, act, or instruction used in the present application should be construed as critical or essential to the embodiments unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used. Further, the phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.
Examples:
1. A mobile communication device, comprising:
a touch sensitive display; and
logic configured to:
control the touch sensitive display to display information to a user,
provide a window of enlarged information via the touch sensitive display based on a determined position of input on the touch sensitive display, and
receive a selection via the window of enlarged information based on a determined position of input within the provided window.
2. The mobile communication device of example 1, wherein the displayed information includes a user interface screen with a plurality of selections.
3. The mobile communication device of example 2, wherein the enlarged information includes at least one of the plurality of selections.
4. The mobile communication device of example 1, wherein the determined position of input within the provided window is determined by the position of a finger of a user or a stylus on the touch sensitive display.
5. The mobile communication device of example 4, wherein the determined position of input within the provided window is determined by the position where the user lifts the finger or stylus off the touch sensitive display.
6. A method comprising:
displaying a plurality of groups of characters via a touch sensitive display;
determining a position of input on the touch sensitive display;
displaying an enlarged window of one of the groups of characters based on the determined position of input; and
selecting one of the characters from the group of characters within the enlarged window based on at least one of a determined position of input within the enlarged window or a determined position of input outside the enlarged window.
7. The method of example 6, wherein each of the groups of characters includes a plurality of letters.
8. The method of example 7, wherein the displaying a plurality of groups of characters comprises: displaying a "QWERTY" type of keyboard by displaying the plurality of letters in the groups of characters.
9. The method of example 6, wherein the displayed enlarged window of one of the groups of characters includes a central letter surrounded by other letters in the selected group of characters.
10. The method of example 6, wherein the selecting one of the characters from the group of characters within the enlarged window based on a determined position of input is determined by determining a position where a user lifted a finger off the surface of the touch sensitive display.
11. A method comprising:
displaying an interface screen via a touch sensitive display;
determining a position of input on the touch sensitive display;
displaying a cursor on the interface screen based on the determined position of input on the touch sensitive display; and
selecting a choice displayed on the interface screen based on a position of the cursor.
12. The method of example 11, wherein the displayed cursor on the interface screen is displayed on the touch sensitive display at a position at or offset from the determined position of input.
13. The method of example 12, wherein the determined position of input is determined by sensing a position of a user's finger or a stylus on the touch sensitive display.
14. The method of example 13, wherein the selected choice displayed on the interface screen based on a position of the cursor is selected when a user lifts a finger off the surface of the touch sensitive display.
15. The method of example 12, wherein the offset position of the displayed cursor may be changed based on a user defined preference.
16. A mobile communication device, comprising:
a plurality of keys;
a display; and
logic configured to:
control the display to display groups of characters, wherein a position of the displayed groups of characters corresponds to physical locations of the plurality of keys;
select one of the displayed groups of characters based on a first key input; and
select one character from the selected displayed group of characters based on a second key input.
17. The mobile communication device of example 16, wherein the logic is further configured to:
control the display to display the selected group of characters in an enlarged manner.
18. The mobile communication device of example 17, wherein a displayed position of enlarged characters within a group corresponds to physical locations of the plurality of keys.
19. The mobile communication device of example 16, wherein the displayed groups of characters form a "QWERTY" type of keyboard.
20. The mobile communication device of example 16, wherein at least some of the displayed groups of characters include nine letters. |
A template for use in imprint lithography is disclosed. The template includes at least two ultraviolet transparent materials bonded together by an ultraviolet transparent epoxy. The ultraviolet transparent epoxy is a polymeric, spin-on epoxy or a two-part, amine-cured epoxy having a viscosity at room temperature of from about 35,000 cps to about 45,000 cps. The template has a substantially uniform index of refraction. Additionally, methods of forming and using the templates are disclosed. |
CLAIMS What is claimed is: 1. A template for use in imprint lithography, comprising: a patterned ultraviolet transparent material in contact with an ultraviolet transparent epoxy; and a base ultraviolet transparent material in contact with the ultraviolet transparent epoxy. 2. The template of claim 1, wherein the template comprises a substantially uniform index of refraction. 3. The template of claim 1, wherein a thickness of the base ultraviolet transparent material is greater than a thickness of the patterned ultraviolet transparent material by a magnitude of from about 5 to about 15. 4. The template of claim 1, wherein the patterned ultraviolet transparent material comprises a thickness of from about 250 μm to about 1000 μm. 5. The template of claim 1, wherein the base ultraviolet transparent material comprises a thickness of from about 1250 μm to about 15000 μm. 6. The template of claim 1, wherein the patterned ultraviolet transparent material and the base ultraviolet transparent material are of substantially the same size and shape. 7. The template of claim 1, wherein the ultraviolet transparent epoxy comprises a polymeric, spin-on epoxy. 8. The template of claim 1, wherein the ultraviolet transparent epoxy comprises a two-part, amine-cured epoxy having a viscosity at room temperature of from about 35,000 cps to about 45,000 cps. 9. The template of claim 1, wherein the patterned ultraviolet transparent material and the base ultraviolet transparent material are adhered to one another by the ultraviolet transparent epoxy. 10. The template of claim 1, wherein the patterned ultraviolet transparent material is bonded to a first surface of the ultraviolet transparent epoxy, the base ultraviolet transparent material is bonded to a second surface of the ultraviolet transparent epoxy, and the ultraviolet transparent epoxy, the patterned ultraviolet transparent material, and the base ultraviolet transparent material have substantially similar ultraviolet transparencies. 11. A method of forming a template for use in imprint lithography, comprising: applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprising a pattern therein; placing a base ultraviolet transparent material in contact with the ultraviolet transparent epoxy; and curing the ultraviolet transparent epoxy to bond the ultraviolet transparent material and the base ultraviolet transparent material thereto. 12. The method of claim 11, wherein applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprising a pattern therein comprises forming the pattern in a material selected from the group consisting of quartz, magnesium fluoride, titanium oxide, calcium fluoride, a borosilicate glass, silicon oxide, silicon dioxide, polycarbonate, sapphire, silicon germanium carbon, gallium nitride, silicon germanium, gallium arsenide, gate oxide, and combinations thereof. 13. The method of claim 11, wherein applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprising a pattern therein comprises forming a photoresist material over the ultraviolet transparent material, forming a pattern in the photoresist material, transferring the pattern from the photoresist material to the ultraviolet transparent material, and applying the ultraviolet transparent epoxy to the ultraviolet transparent material. 14.
The method of claim 13, wherein transferring the pattern from the photoresist material to the ultraviolet transparent material comprises anisotropically etching the pattern into the ultraviolet transparent material. 15. The method of claim 13, wherein transferring the pattern from the photoresist material to the ultraviolet transparent material comprises isotropically etching the pattern into the ultraviolet transparent material. 16. The method of claim 11, wherein applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprising a pattern therein comprises forming the pattern having at least one feature dimension of less than about 100 nm in the ultraviolet transparent material. 17. The method of claim 11, wherein applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprising a pattern therein comprises forming the pattern having at least one feature dimension of less than about 45 nm in the ultraviolet transparent material. 18. The method of claim 11, wherein applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprising a pattern therein comprises forming the pattern by electron beam projection, electron beam direct write, ion direct write, photolithography, or maskless lithography. 19. The method of claim 11, wherein placing a base ultraviolet transparent material in contact with the ultraviolet transparent epoxy comprises forming the base ultraviolet transparent material having a thickness greater than the thickness of the ultraviolet transparent material by a magnitude of from about 5 to about 15. 20. The method of claim 11, wherein placing a base ultraviolet transparent material in contact with the ultraviolet transparent epoxy comprises forming the base ultraviolet transparent material from a material selected from the group consisting of quartz, magnesium fluoride, titanium oxide, calcium fluoride, a borosilicate glass, silicon oxide, silicon dioxide, polycarbonate, sapphire, silicon germanium carbon, gallium nitride, silicon germanium, gallium arsenide, gate oxide, and combinations thereof. 21. The method of claim 11, wherein applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprises applying the ultraviolet transparent epoxy having an index of refraction substantially similar to an index of refraction of each of the ultraviolet transparent material and the base ultraviolet transparent material. 22. The method of claim 11, wherein applying an ultraviolet transparent epoxy to an ultraviolet transparent material comprises applying the ultraviolet transparent epoxy at a thickness of between about 5 μm and about 10 μm. 23. A method of imprinting features on a substrate, comprising: contacting a substrate with an imprint template, the imprint template comprising: a patterned ultraviolet transparent material bonded to an ultraviolet transparent epoxy; and a base ultraviolet transparent material bonded to the ultraviolet transparent epoxy; transferring a pattern of the patterned ultraviolet transparent material into a transfer material on the substrate; and transferring the pattern into the substrate underlying the transfer material to form features on the substrate. 24. The method of claim 23, wherein transferring the pattern into a substrate underlying the transfer material to form features on the substrate comprises forming features having a dimension of less than about 45 nm in the substrate. |
TEMPLATES FOR IMPRINT LITHOGRAPHY AND METHODS OF FABRICATING AND USING SUCH TEMPLATES PRIORITY CLAIM This application claims the benefit of the filing date of United States Patent Application Serial Number 12/106,732, filed April 21, 2008, for "TEMPLATES FOR IMPRINT LITHOGRAPHY AND METHODS OF FABRICATING AND USING SUCH TEMPLATES". TECHNICAL FIELD Embodiments of the invention relate to methods of fabricating and using templates for use in imprint lithography and the templates resulting from the same. More specifically, embodiments of the invention relate to templates having at least two ultraviolet ("UV") wavelength radiation transparent materials bonded by a UV transparent epoxy. BACKGROUND In the semiconductor industry, conventional patterning processes include patterning a photoresist layer by lithographic methods, such as photolithography, electron beam, or X-ray lithography, for mask definition. The pattern on the photoresist layer is subsequently transferred into a hard material in contact with the photoresist layer using a dry etch, wet etch, or lift-off technique. Photolithography is limited to forming features of about 90 nm with a 248 nm light, about 45 nm with a 193 nm light, and from about 25 nm to about 30 nm with a 13.7 nm (extreme ultraviolet ("EUV")) light. The limitations on the resolution of conventional photolithography are due to the wavelength of radiation used in the process. In addition, photolithographic equipment becomes increasingly expensive as feature sizes become smaller. In contrast, electron beam lithography is capable of creating smaller features, such as features in the tens of nanometers range. With electron beam lithography, the features are generated at an earlier point in time than with conventional lithography. However, electron beam lithography is expensive and very slow. As feature sizes on semiconductor devices become smaller, imprint lithography has been proposed as a replacement for photolithography. In imprint lithography, a template having a nanoscale pattern is pressed into a film on the semiconductor device. The pattern on the template deforms the film and forms a corresponding or negative image in the film. After removing the template, the pattern in the film is transferred into the semiconductor device. The size of the pattern on the template and of the corresponding features on the semiconductor device are substantially similar. Therefore, unlike photolithographic techniques where a mask or reticle pattern is reduced substantially (for example, 4X) in size when transferred to the surface of a semiconductor device, imprint lithography is considered a "1X" pattern transfer process because it provides no demagnification of the pattern on the template that is transferred to the semiconductor device surface. Templates for use in imprint lithography are known in the art, as described in United States Patent Nos. 6,580,172 to Mancini et al. and 6,517,977 to Resnick et al. To form the high resolution pattern on the template, electron beam mask-making techniques are typically used. However, use of these techniques is undesirable because they are expensive, have low throughput, and are defect ridden. As described in U.S. Published Patent Application 20060286490 to Sandhu et al., a template is typically formed from quartz or other UV transparent material. To provide increased mechanical strength and integrity to the template during the imprinting process, the template is bonded to another UV transparent material using an adhesive composition.
As feature sizes on semiconductor devices approach sub-100 nm, there is a need for a fast, reliable, and cost effective method of making small features. Since imprint lithography is capable of forming small features, it would be desirable to more easily, cheaply, and reproducibly produce templates for use in imprint lithography. BRIEF DESCRIPTION OF DRAWINGS FIG. 1 is a cross-sectional view of a template of the invention; FIG. 2 is an elevational view of a template of the invention; FIG. 3 schematically illustrates an embodiment of fabricating the template of FIG. 1; FIGs. 4 and 5 schematically illustrate an embodiment of fabricating the template of FIG. 1; and FIGs. 6-8 schematically illustrate using the template of the invention in an imprint lithography process to form features on a substrate. BEST MODES FOR CARRYING OUT THE INVENTION A template for use in imprint lithography is disclosed. The template includes a high resolution pattern that may be formed by lithography. The pattern on the template provides topography that is used to imprint a pattern of corresponding features on a substrate. As used herein, the term "substrate" means and includes a semiconductor wafer at an intermediate stage in processing. The substrate has already been exposed to at least one processing act, but has yet to undergo additional processing. As such, the template functions as a mold or form to transfer the pattern to the substrate, forming the features on a surface thereof contacted by the template. As described in more detail below, the template may be transparent to UV wavelength radiation. The features formed on the substrate may have dimensions substantially similar to dimensions of the pattern formed on the template. The features may have a feature size or dimension of less than about 100 nm, such as less than about 45 nm. By using photolithographic techniques to form the pattern, the template may be easily and cheaply fabricated. In addition, new infrastructure and processing equipment may not need to be developed because existing photolithographic infrastructure and processing equipment may be used to fabricate the template. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, and in which is shown, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable a person of ordinary skill in the art to practice the invention. However, other embodiments may be utilized, and changes may be made, without departing from the scope of the invention. The drawings presented herein are not necessarily drawn to scale and are not actual views of a particular template, fabrication process thereof, substrate, or fabrication process thereof, but are merely idealized representations that are employed to describe the embodiments of the invention. Additionally, elements common between drawings may retain the same numerical designation. The following description provides specific details, such as material types and material thicknesses, in order to provide a thorough description of embodiments of the invention. However, a person of ordinary skill in the art would understand that the embodiments of the invention may be practiced without employing these specific details. Indeed, the embodiments of the invention may be practiced in conjunction with conventional semiconductor materials employed in the industry.
In addition, the description provided below does not form a complete process flow for manufacturing a complete electronic device utilizing the template, and the substrates described below do not form a complete electronic device. Only those process acts and substrates necessary to understand the embodiments of the invention are described in detail below. Additional processing acts to form a complete electronic device from the substrate may be performed by conventional techniques, which are not described herein. As shown in FIG. 1, template 2 may include at least two UV wavelength radiation transparent (which may also be termed "UV transparent" for convenience) materials 3, 4 that are joined together by a UV transparent epoxy 5. As used herein, the term "epoxy" means and includes a thermoset resin whose chemical reactivity is due to the presence therein of at least one epoxide group or moiety. While FIG. 1 schematically illustrates the UV transparent materials 3, 4 and the UV transparent epoxy 5 as layers, the materials are not limited thereto and may be formed in other configurations. The UV transparent materials 3, 4 may be of substantially the same size and shape and, in the case of a wafer-shaped template, of substantially the same diameter. Thus, the template 2 may have substantially the same dimensions (diameter, etc.) as a conventional semiconductor wafer (silicon wafer) so that processing equipment currently used in photolithography techniques may be used to fabricate the template 2 and so that the template 2 may be used to imprint a pattern on the entire surface of a semiconductor wafer simultaneously. The dimensions of the template 2 may also enable the template 2 to be utilized in a conventional imprint lithography device without further modifications to the template 2 or to the imprint lithography device. However, if the UV transparent materials 3, 4 of the template 2 have smaller or larger dimensions than a conventional semiconductor wafer, the processing equipment may be modified, as desired, to accommodate the UV transparent materials 3, 4 and the template 2. Template 2 may also be configured for use with bulk semiconductor substrates other than wafers, for example silicon-on-insulator (SOI) substrates as exemplified by silicon-on-sapphire (SOS) substrates and silicon-on-glass (SOG) substrates. Template 2 is, further, not limited to use with semiconductor substrates comprising a silicon layer, but has utility with substrates of any semiconductor material or materials. One of the UV transparent materials may have a pattern 6 formed on a surface thereof and is referred to herein as patterned UV transparent material 4. As described in more detail below, the other UV transparent material may provide mechanical integrity to the patterned UV transparent material 4 and is referred to herein as base UV transparent material 3. The template 2 may be formed from UV transparent materials to enable UV radiation to be transmitted through the template 2 during the imprinting process. Each of the base UV transparent material 3 and patterned UV transparent material 4 may be formed from a material that is substantially transparent to UV wavelength radiation including, but not limited to, quartz, magnesium fluoride, a borosilicate glass, titanium oxide, calcium fluoride, silicon oxide, silicon dioxide, a polycarbonate material, a sapphire material, silicon germanium carbon, gallium nitride, silicon germanium, gallium arsenide, gate oxide, or combinations thereof.
By way of non-limiting example, the borosilicate glass may be a PYREX® material or BOROFLOAT® 33 ("BF33"), which is a quartz material that includes greater than about 8% boric acid and no alkaline earth compounds, has a thermal expansion coefficient of 33 x 10^-7 K^-1, and is available from Schott North America, Inc. (Elmsford, NY). The material used for each of the base UV transparent material 3 and patterned UV transparent material 4 may be the same or different as long as the overall UV transparency of the template 2 is achieved. The relative thicknesses of the base UV transparent material 3 and the patterned UV transparent material 4 may be different, with the base UV transparent material 3 having an increased thickness relative to that of the patterned UV transparent material 4. The base UV transparent material 3 may be thicker than the patterned UV transparent material 4 by a magnitude of from about five to about fifteen. In other words, the base UV transparent material may be about five to about fifteen times thicker than the patterned UV transparent material. The thickness of the patterned UV transparent material 4 may range from about 250 μm to about 1000 μm, while the thickness of the base UV transparent material 3 may range from about 1250 μm to about 15000 μm. Together, the base UV transparent material 3, the patterned UV transparent material 4, and the UV transparent epoxy 5 may form the template 2 having a thickness of from about 1500 μm to about 17000 μm. Since the patterned UV transparent material 4 may not possess sufficient mechanical strength and integrity to be used, by itself, as the imprint template, the patterned UV transparent material 4 may be joined or adhered to the base UV transparent material 3. The base UV transparent material 3 and the patterned UV transparent material 4 may be adhered by the UV transparent epoxy 5, providing additional mechanical integrity and strength to the patterned UV transparent material 4. The UV transparent epoxy 5 may be applied to a surface of at least one of the base UV transparent material 3 and the patterned UV transparent material 4 and cured to join these materials. The UV transparent epoxy 5 may be thermally cured or cured with UV radiation depending on the material selected. The UV transparent epoxy 5, before cure, may be of sufficient flexibility to provide increased physical contact between the UV transparent epoxy 5 and the base UV transparent material 3 and between the UV transparent epoxy 5 and the patterned UV transparent material 4. The UV transparent epoxy 5 may remain flexible after cure, or may become rigid after cure. The desired degree of flexibility of the UV transparent epoxy 5 may be affected by the bonding ability of the patterned UV transparent material 4 with the base UV transparent material 3, specifically the rigidity of the base UV transparent material 3. In addition, the bow and warp of the base UV transparent material 3 and the patterned UV transparent material 4 may affect the degree of rigidity or flexibility needed in the UV transparent epoxy 5. Depending on the material selected, a curing temperature for the UV transparent epoxy 5 may be determined by a person of ordinary skill in the art in accordance with the manufacturer's instructions. By way of non-limiting example, the UV transparent epoxy 5 may be UV cured at a temperature of from about room temperature to about 440° C. In addition, the UV transparent epoxy 5 may have a minimal effect on the UV transparency of the template 2.
In other words, using the UV transparent epoxy 5 in the template 2 may have substantially no effect on the UV transparency of the template 2. As such, the template 2 may exhibit a substantially uniform index of refraction throughout the thickness thereof. Depending on the material used as the UV transparent epoxy 5, the UV transparent epoxy 5 may be UV transparent before and after cure, or may be UV transparent after cure. The UV transparent epoxy 5 may also have a thermal expansion coefficient substantially similar to that of the base UV transparent material 3 and the patterned UV transparent material 4. The UV transparent epoxy 5 may be applied to at least one of the base UV transparent material 3 and the patterned UV transparent material 4 by conventional techniques, such as by spin coating. Depending on the material used for the UV transparent epoxy 5, a suitable manner of application may be selected by a person of ordinary skill in the art. The UV transparent epoxy 5 may at least partially cover the surface of the at least one of the base UV transparent material 3 and the patterned UV transparent material 4. The viscosity and thickness of the UV transparent epoxy 5 may be selected to provide a sufficient degree of bonding between the UV transparent epoxy 5 and the base UV transparent material 3 and between the UV transparent epoxy 5 and the patterned UV transparent material 4. The viscosity of the UV transparent epoxy 5 may be within a range of from about 25,000 cps to about 50,000 cps at room temperature (about 25° C). The thickness at which the UV transparent epoxy 5 is applied may depend on the planarities of the base UV transparent material 3 and the patterned UV transparent material 4. If the base UV transparent material 3 and the patterned UV transparent material 4 are substantially planar, the UV transparent epoxy 5 may be relatively thin, such as from about 2 μm to about 10 μm. However, if the base UV transparent material 3 and the patterned UV transparent material 4 have an increased surface roughness, the UV transparent epoxy 5 may be thicker, such as greater than or equal to about 20 μm. By way of non-limiting example, the UV transparent epoxy 5 may be a polymeric, spin-on epoxy having stability to high temperatures, such as that sold under the WAFERBOND™ HT tradename. WAFERBOND™ HT products, such as WAFERBOND HT-250, are commercially available from Brewer Science, Inc. (Rolla, MO). By way of non-limiting example, the UV transparent epoxy 5 may be a high temperature, humidity resistant epoxy, such as EP30HT, which is commercially available from Master Bond, Inc. (Hackensack, NJ). EP30HT is a two-part, amine-cured epoxy having a viscosity at room temperature of from about 35,000 cps to about 45,000 cps. EP30HT has a service temperature range of from about -60°F to about 400°F (from about -51°C to about 205°C). By way of non-limiting example, the UV transparent epoxy 5 may be EP-400, which is commercially available from Asahi Denka Kogyo K.K. (Tokyo, Japan). In one embodiment, the base UV transparent material 3 is a conventional 0.25-inch (about 6350 μm) thick BF33 quartz wafer, the patterned UV transparent material 4 is a patterned, 500 μm thick quartz wafer, and the UV transparent epoxy 5 is WAFERBOND HT-250. However, other UV transparent materials may also be used. Direct bonding (without using the UV transparent epoxy 5) of the BF33 and the 500 μm thick quartz wafer is not effective because the stiffness of these materials prevents sufficient bonding.
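The dimensional and viscosity windows recited above lend themselves to a simple consistency check. The following Python sketch is illustrative only and is not part of the disclosure; it merely encodes the stated ranges (patterned material about 250 μm to about 1000 μm, base material about 1250 μm to about 15000 μm, a base-to-patterned thickness ratio of about 5 to about 15, an epoxy viscosity of about 25,000 cps to about 50,000 cps, and the planarity-dependent epoxy thickness guidance), and all function and variable names are hypothetical.

```python
# Illustrative consistency check for the template stack-up described above.
# The numeric ranges come from the text; all names are hypothetical.

PATTERNED_UM = (250.0, 1000.0)        # patterned UV transparent material 4, um
BASE_UM = (1250.0, 15000.0)           # base UV transparent material 3, um
RATIO = (5.0, 15.0)                   # base thickness / patterned thickness
VISCOSITY_CPS = (25_000.0, 50_000.0)  # epoxy viscosity window at about 25 C

def in_range(value, bounds):
    low, high = bounds
    return low <= value <= high

def check_template(patterned_um, base_um, epoxy_viscosity_cps, substrates_planar):
    """Return (warnings, suggested epoxy thickness range in um) for a stack-up."""
    warnings = []
    if not in_range(patterned_um, PATTERNED_UM):
        warnings.append("patterned material outside about 250-1000 um")
    if not in_range(base_um, BASE_UM):
        warnings.append("base material outside about 1250-15000 um")
    if not in_range(base_um / patterned_um, RATIO):
        warnings.append("base/patterned ratio outside about 5 to about 15")
    if not in_range(epoxy_viscosity_cps, VISCOSITY_CPS):
        warnings.append("epoxy viscosity outside about 25,000-50,000 cps")
    # Epoxy application thickness guidance from the text: about 2-10 um for
    # substantially planar materials, >= about 20 um for rougher surfaces.
    epoxy_um = (2.0, 10.0) if substrates_planar else (20.0, None)
    return warnings, epoxy_um

# The embodiment described above: a 500 um patterned quartz wafer on an
# approximately 6350 um (0.25-inch) BF33 base; 6350/500 = 12.7 is in range.
warnings, epoxy_um = check_template(500.0, 6350.0, 40_000.0, substrates_planar=True)
print(warnings)  # []
print(epoxy_um)  # (2.0, 10.0)
```

Run against the embodiment just described, the check passes, since the 6350 μm base is about 12.7 times the 500 μm patterned wafer, within the stated ratio of about five to about fifteen.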
Without being bound by any theory, it is believed that the UV transparent epoxy 5 provides an additional degree of contact and flexibility between the base UV transparent material 3 and the patterned UV transparent material 4 for these materials to bond together. Since the template 2 is transparent to UV radiation, an optically opaque material O may be deposited on the template 2 to form alignment marks 12 thereon, as shown in FIG. 2. The optically opaque material O may be chromium or chrome, polysilicon, a metal silicide, such as molybdenum silicide, tungsten silicide, or titanium silicide, or a metal, such as aluminum, tungsten, titanium, titanium nitride, tantalum, or tantalum nitride. The optically opaque material O may be deposited by conventional blanket deposition techniques, such as by coating or sputtering techniques. The optically opaque material O may be deposited on portions of the patterned UV transparent material 4 of the template 2, such as scribe areas or the periphery, where the alignment marks 12 are desired, the rest of the patterned UV transparent material 4 being masked to prevent such deposition. Alternatively, as described below, all of template 2 may be covered by the optically opaque material O. To provide proper alignment of the pattern 6 on the patterned UV transparent material 4, the alignment marks 12 may be formed before forming the pattern 6 in the patterned UV transparent material 4. The alignment marks 12 may also be used to align the template 2 with the substrate, which would typically include substrates on an unsingulated wafer, onto which the features corresponding to pattern 6 are to be formed. The pattern 6 on the patterned UV transparent material 4, which may also be termed an "imprint pattern" for the sake of convenience, may include a topography having a plurality of recesses 8 and protrusions 10 of satisfactory size, configuration, and orientation in one surface of the patterned UV transparent material 4. The recesses 8 and protrusions 10 are ultimately used to produce substantially identical features on substrates fabricated on a wafer or other bulk semiconductor substrate contacted by the template 2. To form the pattern 6 in a UV transparent material and the alignment marks 12 in optically opaque material O, photolithographic techniques may be used. For instance, a photoresist material 14 may be formed on UV transparent material 4', and patterned using a mask (not shown) having opaque and transparent openings in the desired pattern, as shown in FIG. 3. The photoresist material 14 may be formed from a conventional positive or negative photoresist material and may be deposited by conventional techniques, such as by spin coating. The opaque and transparent openings in the mask form a pattern that is complementary to the pattern 6 that is ultimately to be formed in the UV transparent material 4. The mask may be fabricated by conventional techniques and, therefore, is not described in detail herein. The mask may include, for example, a 4X pattern in that the pattern is four times the size of the pattern 6 to be formed in the UV transparent material 4 and four times the size of the features ultimately formed on the substrate. The photoresist material 14 may be exposed and developed, as known in the art, exposing selected portions of the UV transparent material 4' to electromagnetic radiation. Exposure and development of the photoresist material 14 may be performed using conventional exposure equipment and developing solutions.
Developing solutions for the photoresist material 14 may be selected by one of ordinary skill in the art and, therefore, are not discussed in detail herein. In addition to conventional photolithography, electron beam projection, electron beam direct write, ion direct write, or maskless lithography may be used to form the pattern 6 on the UV transparent material 4. The pattern in the photoresist material 14 may then be transferred to the UV transparent material 4' and the alignment marks 12 formed in the optically opaque material O by etching. Two separate, selective etches may also be used, one for optically opaque material O and one for UV transparent material 4'. Depending on the material used, the UV transparent material 4' may be etched isotropically (wet etched) or anisotropically (dry etched). Wet and dry etching solutions for the UV transparent materials described above are known in the art and, therefore, are not discussed in detail herein. By way of non-limiting example, if the UV transparent layer 4' is a quartz wafer, the quartz may be etched using a fluorine-based plasma etch. The fluorine-based plasma may include a fluorine-containing gas, such as CF4, CHF3, C4F8, SF6, or combinations thereof, and an inert gas, such as argon, xenon, or combinations thereof. Alternatively, the pattern 6 may be formed in the UV transparent material 4' as illustrated in FIGs. 4 and 5. A chromium material 16, used as optically opaque material O, may be blanket deposited over the UV transparent material 4' and the photoresist material 14 deposited over the chromium material 16, as shown in FIG. 4. The chromium material 16 may be deposited by conventional techniques and may range in thickness from about 80 nm to about 100 nm. While material 16 is described as being formed from chromium, material 16 may be formed from other metal materials that are opaque to the imaging wavelength and have significant etch selectivity relative to the UV transparent material 4' including, but not limited to, chromium oxide, titanium, titanium nitride, tungsten, or combinations thereof. The photoresist material 14 may be a conventional photoresist material and may be deposited by conventional techniques, such as spin coating. The photoresist material 14 may be patterned as described above, to expose portions of the chromium material 16. As shown in FIG. 5, the pattern in the photoresist material 14 may be transferred to the chromium material 16 and, subsequently, to the UV transparent material 4', by etching. For instance, the exposed portions of the chromium material 16 may be etched, using the photoresist material 14 as a mask. The remaining portions of the chromium material 16 may function as a hard mask for etching the UV transparent material 4' and to provide alignment marks 12. Each of the chromium material 16 and the UV transparent material 4' may be etched using a suitable, conventional wet or dry etch process. The etching solutions may be selected by one of ordinary skill in the art and, therefore, are not discussed in detail herein. As previously discussed, to form features having a high resolution on the substrate 18, which may be a wafer bearing a plurality of substrate locations thereon, the UV transparent material 4' may be etched anisotropically, such as by using the fluorine-based plasma etch described above.
Any portions of the photoresist material 14 and undesired portions of the chromium material 16 remaining on the UV transparent material 4' after etching may be removed as desired, producing the pattern 6 and alignment marks 12 on the template 2, as shown in FIGS. 1 and 2. While not illustrated, the pattern 6 may also be formed in the UV transparent material 4 by bonding the UV transparent material 4' to the base UV transparent material 3 with the UV transparent epoxy 5, and then patterning the UV transparent material 4'. After bonding the UV transparent material 4' and the base UV transparent material 3, the photoresist material 14 may be formed on the UV transparent material 4' and patterned, as previously described in regard to FIG. 3, and this pattern transferred to the UV transparent material 4'. Alternatively, after bonding the UV transparent material 4' and the base UV transparent material 3, the chromium material 16 and the photoresist material 14 may be formed on the UV transparent material 4' and patterned, as previously described in regard to FIGs. 4 and 5, and this pattern transferred to the UV transparent material 4'. The pattern 6 may also be formed in the UV transparent material 4' by conventional pitch doubling or pitch multiplication methods. Such methods are known in the art and, therefore, are not described in detail herein. The template 2 shown in FIGs. 1 and 2 may be used directly in an imprint lithographic technique to imprint the pattern 6 on the substrate 18 of like size, forming corresponding features on the substrate 18. Likewise, template 2 may be used directly in an imprint lithographic technique to imprint the pattern 6 on a subsequent template. The features to be formed on the substrates 18 may be a negative image (reversed image) of the pattern 6 on the template 2. Alternatively, the template 2 may be divided, such as by dicing, to form smaller templates that are used in imprint lithography on smaller groups of substrates. The pattern 6 on each of the smaller templates may be the same or different. Each of the divided templates may be bonded to the optional second UV transparent material before or after dicing. To form the desired features on the substrate 18 by imprint lithography, the template 2 having the pattern 6 may be brought into contact with the substrate 18. A complete process flow for fabricating the substrate 18 is not described herein. However, the remainder of the process flow is known to a person of ordinary skill in the art. Accordingly, only the process steps necessary to understand the invention are described herein. As shown in FIG. 6, substrate 18 may include a semiconductor substrate 20 and additional layers thereon, such as metal layers, oxide layers, carbon hard mask layers, or polysilicon layers. The substrate 18 may also include trenches or diffusion regions. For the sake of clarity, the additional layers, trenches, and diffusion regions are not shown in FIG. 6. The semiconductor substrate 20 may be a conventional substrate or other bulk substrate including a semiconductive material. As used herein, the term "semiconductor substrate" includes not only silicon wafers, but also silicon-on-insulator ("SOI") substrates, silicon-on-sapphire ("SOS") substrates, epitaxial layers of silicon on a base semiconductor foundation, and other semiconductor or optoelectronic materials, such as silicon-germanium, germanium, gallium arsenide, or indium phosphide.
The substrate 18 may also include a transfer material 22 that is deformable under applied pressure and does not adhere to a surface of the template 2, especially as the template 2 is removed from the substrate 18. Since the transfer material 22 is deformable, the transfer material 22 may fill the recesses 8 in the pattern 6 when the template 2 and the substrate 18 come into contact. The transfer material 22 may be a radiation sensitive material including, but not limited to, a photocurable or photosensitive material, such as a photoresist material. The transfer material 22 may be sensitive to UV light, visible light, infrared light, actinic light or other radiation sources, such as electron beams or x-rays. Materials that may be used as the transfer material 22 are known in the art. For the sake of example only, the transfer material 22 may be formed from a conventional photoresist material that is curable by exposure to UV light, such as a curable organosilicon material. The substrate 18 and the template 2 may be maintained substantially parallel, and in close proximity, to one another. The substrate 18 and the template 2 may then be contacted with minimal pressure so that the transfer material 22 deforms into the pattern 6 of the template 2. As shown in FIG. 7, the substrate 18' may thus be provided with a negative image 24 (reversed image) of the pattern 6 in its imprinted transfer material 22. If the transfer material 22 is a radiation-sensitive material, the transfer material 22 may subsequently be exposed to radiation, such as UV radiation. Since the template 2 is UV transparent, the UV radiation is transmitted through the template 2 from the back, unpatterned surface thereof to harden portions of the negative image 24 of transfer material 22 that include photoresist material filling recesses 8 of pattern 6 or to harden all of the negative image 24 of transfer material 22 that includes photoresist material filling recesses 8 and protrusions 10 of pattern 6. Alternatively, if the transfer material 22 includes a material that is sensitive to heat, pressure, or combinations thereof, which are generated by contacting the template 2 with the substrate 18, the heat, pressure, or combinations thereof may be used to cure, harden, or solidify the transfer material 22. The template 2 may then be removed from the substrate 18. The template 2 and the substrate 18 may be separated without damaging, or otherwise adversely affecting, the negative image 24. For instance, the template 2 may be treated with a material that lowers the surface energy of the template 2, as known in the art, to assist in separating the template 2 from the substrate 18 without damage to the imprinted, exposed negative image 24. The negative image 24 in the transfer material 22 may be transferred to the semiconductor substrate 20 or underlying materials of the substrate 18' using the transfer material 22 as a mask. For instance, the negative image 24 may be transferred into the semiconductor substrate 20 or into the metal, carbon, hard mask layer, oxide, or polysilicon layers (not shown) previously formed on the semiconductor substrate 20 by dry etching or wet etching. Any remaining portions of the transfer material 22 may then be removed, providing the features 26 on the substrate 18" as shown in FIG. 8. The features 26 may be substantially the same size, configuration, and orientation as the dimensions of the pattern 6 on the template 2.
Since the pattern 6 is formed by photolithography, the feature sizes may be determined by the resolution of the photolithographic techniques used to form the pattern 6. In one embodiment, the features 26 have a feature size of less than about 100 nm, such as less than about 45 nm. Alternatively, the negative image 24 in the transfer material 22 may be subjected to ion implantation to form implanted regions on the substrate 18". In addition to forming features 26 on the substrate 18", the template 2 may be used as a master template to create at least one daughter template. To form the daughter template, the pattern 6 on the template 2 may be transferred to an additional structure (not shown), which includes a UV transparent material and a transfer material, such as a photoresist material. The UV transparent material and transfer material of the structure that is ultimately to become the daughter template may be one of the materials described above. The transfer material may be deformable under pressure so that when the template 2 contacts the transfer material of the structure that is ultimately to be the daughter template, the pattern 6 of the master template is transferred to the transfer material. The pattern in the transfer material may subsequently be etched into the UV transparent material, producing the daughter template. The pattern on each of the daughter templates may be the reverse of the pattern 6 on the master template. In other words, the pattern 6 on the master template may be a negative image of the pattern on the daughter template. Since the template 2 contacts the substrate 18 or other structure that is ultimately to become the daughter template during imprint lithography, the template 2 may become easily damaged. Therefore, the master template may be stored and preserved while one of the daughter templates fabricated from it is used to imprint the features on the substrate 18. If the daughter template is damaged during imprinting, another daughter template may be used to imprint the features or the master template may be used to create additional daughter templates. The template 2 produced by the methods of the invention provides numerous advantages. In forming the substrate 18", if imprint lithography is used at some process levels and conventional photolithography is used at other process levels, lens distortion and magnification factor effects are typically observed in the substrate 18". However, the template 2 formed by the methods of the invention may be used to provide improved matching between the imprint lithography process levels and the conventional photolithography process levels. For instance, if the same photostepper used in the process levels formed by conventional photolithography is also used to form the template 2, the lens distortion and magnification factor effects at the different process levels in the substrate 18" may be minimized. The method of the invention may also provide the template 2 at a reduced cost compared to conventional techniques. In addition, use of the UV transparent epoxy 5 to join together base UV transparent material 3 and patterned UV transparent material 4 enables bonding of materials that previously could not be adequately bonded. In addition, since the UV transparent epoxy 5 is suitable for use within a wide temperature range, the base UV transparent material 3 and the patterned UV transparent material 4 may be bonded to form the template 2 without restrictions on the formation temperature.
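Because the template may be written photolithographically through a reduction mask (for example, the 4X mask described earlier) while the imprint step itself is a 1X transfer, the feature-size bookkeeping across the two steps is worth making explicit. The Python sketch below is illustrative only and is not part of the disclosure; the 4X reduction and the sub-45 nm target come from the text, and the helper name is hypothetical.

```python
# Feature-size bookkeeping for the two transfer steps described above.
# Photolithography writes the template through a reduction mask (e.g., 4X);
# the imprint step is a "1X" transfer with no demagnification.
# The helper name and the example value are illustrative only.

MASK_REDUCTION = 4.0   # 4X mask: mask features are four times the template features
IMPRINT_FACTOR = 1.0   # imprint lithography provides no demagnification

def mask_feature_nm(substrate_feature_nm):
    """Mask feature size implied by a target substrate feature size."""
    template_feature_nm = substrate_feature_nm * IMPRINT_FACTOR  # template ~= substrate
    return template_feature_nm * MASK_REDUCTION

# For a 45 nm feature on the substrate, the template pattern is also about
# 45 nm, but the mask feature is a much larger, easier-to-write 180 nm.
print(mask_feature_nm(45.0))  # 180.0
```

This is one way to see why existing photolithographic infrastructure suffices to write the template: the mask features remain comfortably above the resolution limits quoted in the background even when the imprinted features are sub-45 nm.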
While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the invention as defined by the following appended claims and their legal equivalents. |
A chemical vapor deposition method of forming a high k dielectric layer includes positioning a substrate within a chemical vapor deposition reactor. At least one metal comprising precursor and N2O are provided within the reactor under conditions effective to deposit a high k dielectric layer on the substrate comprising oxygen and the metal of the at least one metal precursor. The N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. In one implementation, the conditions are void of injection of any of O2, O3, NO, and NOX to within the reactor during the portion of the deposit. In one implementation, a capacitor is formed using the above methods. In preferred implementations, the technique can be used to yield smooth, continuous dielectric layers in the absence of haze or isolated island-like nuclei. |
What is claimed is: 1. A chemical vapor deposition method of forming a high k dielectric layer comprising: positioning a substrate within a chemical vapor deposition reactor; and providing at least one metal comprising precursor and N2O within the reactor under conditions effective to deposit a high k dielectric layer on the substrate comprising oxygen and the metal of the at least one metal precursor, the N2O being present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 2. The method of claim 1 wherein the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 95% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 3. The method of claim 1 wherein the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 99% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 4. The method of claim 1 wherein the portion comprises a majority portion. 5. The method of claim 1 wherein the portion comprises all of the deposit. 6. The method of claim 1 wherein the substrate is received on a susceptor, and the conditions during the portion comprise a susceptor temperature of less than or equal to 550° C. 7. The method of claim 1 wherein the substrate is received on a susceptor, and the conditions during all of the deposit comprise a susceptor temperature of less than or equal to 550° C. 8. The method of claim 1 wherein the conditions are effective to form an outermost surface of the high k dielectric layer at conclusion of the portion to have a roughness of no greater than 20 Angstroms as determinable by average atomic force microscopy RMS roughness. 9. The method of claim 1 wherein the conditions are effective to form an outermost surface of the high k dielectric layer at conclusion of the portion to have a roughness of no greater than 15 Angstroms as determinable by average atomic force microscopy RMS roughness. 10. The method of claim 1 wherein the conditions are void of plasma and remote plasma. 11. The method of claim 1 wherein the conditions comprise at least one of plasma or remote plasma. 12. The method of claim 1 wherein the high k dielectric layer comprises a titanate. 13. The method of claim 1 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any O2 injected to within the reactor. 14. The method of claim 1 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any O3 injected to within the reactor. 15. The method of claim 1 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any NO injected to within the reactor. 16. The method of claim 1 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any NOX injected to within the reactor. 17.
A method of forming a capacitor comprising: forming a first capacitor electrode layer over a substrate; positioning the substrate with the first capacitor electrode layer within a chemical vapor deposition reactor; providing at least one metal comprising precursor and N2O within the reactor under conditions effective to deposit a high k capacitor dielectric layer comprising oxygen and the metal of the at least one metal precursor over the first capacitor electrode, the N2O being present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor to form an outermost surface of the capacitor dielectric layer at conclusion of the portion to have a roughness of no greater than 20 Angstroms as determinable by average atomic force microscopy RMS roughness; and forming a second capacitor electrode layer over the high k capacitor dielectric layer. 18. The method of claim 17 wherein the second capacitor electrode layer is formed on the high k capacitor dielectric layer. 19. The method of claim 17 wherein the second capacitor electrode layer is formed on the outermost surface. 20. The method of claim 17 wherein thickness of total capacitor dielectric material intermediate the first and second capacitor electrode layers is no greater than 500 Angstroms. 21. The method of claim 17 wherein thickness of total capacitor dielectric material intermediate the first and second capacitor electrode layers is no greater than 300 Angstroms. 22. The method of claim 17 wherein thickness of total capacitor dielectric material intermediate the first and second capacitor electrode layers is no greater than 200 Angstroms. 23. The method of claim 17 wherein the conditions are effective to form said roughness to be no greater than 15 Angstroms. 24. The method of claim 17 wherein the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 95% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 25. The method of claim 17 wherein the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 99% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 26. The method of claim 17 wherein thickness of total capacitor dielectric material intermediate the first and second capacitor electrode layers is no greater than 500 Angstroms, and wherein the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 95% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 27. The method of claim 17 wherein thickness of total capacitor dielectric material intermediate the first and second capacitor electrode layers is no greater than 500 Angstroms, and wherein the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 99% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 28. The method of claim 17 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any O2 injected to within the reactor. 29.
The method of claim 17 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any O3 injected to within the reactor. 30. The method of claim 17 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any NO injected to within the reactor. 31. The method of claim 17 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% and less than 100% concentration by volume as compared with any NOX injected to within the reactor. 32. A method of forming a capacitor comprising: forming a first capacitor electrode layer over a substrate; positioning the substrate with the first capacitor electrode layer within a chemical vapor deposition reactor; providing at least one metal comprising precursor and N2O within the reactor under conditions effective to deposit a high k capacitor dielectric layer comprising oxygen and the metal of the at least one metal precursor over the first capacitor electrode, the N2O being present within the reactor during at least a portion of the deposit at greater than or equal to at least 95% and less than 100% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor to form an outermost surface of the capacitor dielectric layer at conclusion of the portion to have a roughness of no greater than 15 Angstroms as determinable by average atomic force microscopy RMS roughness; and forming a second capacitor electrode layer over the high k capacitor dielectric layer, and wherein thickness of total capacitor dielectric material intermediate the first and second capacitor electrode layers is no greater than 300 Angstroms. 33. The method of claim 32 wherein the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 99% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. 34. The method of claim 32 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 95% and less than 100% concentration by volume as compared with any O2 injected to within the reactor. 35. The method of claim 32 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 95% and less than 100% concentration by volume as compared with any O3 injected to within the reactor. 36. The method of claim 32 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 95% and less than 100% concentration by volume as compared with any NO injected to within the reactor. 37. The method of claim 32 wherein the N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 95% and less than 100% concentration by volume as compared with any NOX injected to within the reactor. |
TECHNICAL FIELD This invention relates to chemical vapor deposition methods of forming a high k dielectric layer and to methods of forming a capacitor. BACKGROUND OF THE INVENTION As DRAMs increase in memory cell density, there is a continuing challenge to maintain sufficiently high storage capacitance despite decreasing cell area. Additionally, there is a continuing goal to further decrease cell area. One principal way of increasing cell capacitance is through cell structure techniques. Such techniques include three-dimensional cell capacitors, such as trenched or stacked capacitors. Yet as feature size continues to become smaller and smaller, development of improved materials for cell dielectrics as well as the cell structure are important. The feature size of 256 Mb DRAMs and beyond will be on the order of 0.25 micron or less, and conventional dielectrics such as SiO2 and Si3N4 might not be suitable because of small dielectric constants. Highly integrated memory devices, such as 256 Mbit DRAMs, are expected to require a very thin dielectric film for the 3-dimensional capacitor of cylindrically stacked or trench structures. To meet this requirement, the capacitor dielectric film thickness will be below 2.5 nm of SiO2 equivalent thickness. Insulating inorganic metal oxide materials (such as ferroelectric materials, perovskite materials and pentoxides) are commonly referred to as "high k" materials due to their high dielectric constants, which make them attractive as dielectric materials in capacitors, for example for high density DRAMs and non-volatile memories. In the context of this document, "high k" means a material having a dielectric constant of at least 20. Such materials include tantalum pentoxide, barium strontium titanate, strontium titanate, barium titanate, lead zirconium titanate and strontium bismuth tantalate. Using such materials enables the creation of much smaller and simpler capacitor structures for a given stored charge requirement, enabling the packing density dictated by future circuit design. SUMMARY The invention comprises chemical vapor deposition methods of forming a high k dielectric layer and methods of forming a capacitor. In one implementation, a chemical vapor deposition method of forming a high k dielectric layer includes positioning a substrate within a chemical vapor deposition reactor. At least one metal comprising precursor and N2O are provided within the reactor under conditions effective to deposit a high k dielectric layer on the substrate comprising oxygen and the metal of the at least one metal precursor. The N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor. In one implementation, the conditions are void of injection of any of O2, O3, NO, and NOX to within the reactor during the portion of the deposit. In one implementation, a method of forming a capacitor includes forming a first capacitor electrode layer over a substrate. The substrate with the first capacitor electrode layer is positioned within a chemical vapor deposition reactor. At least one metal comprising precursor and N2O are provided within the reactor under conditions effective to deposit a high k capacitor dielectric layer comprising oxygen and the metal of the at least one metal precursor over the first capacitor electrode.
The N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% concentration by volume as compared with any O2, O3, NO, and NOX injected to within the reactor to form an outermost surface of the capacitor dielectric layer at conclusion of the portion to have a roughness of no greater than 20 Angstroms as determinable by average atomic force microscopy RMS roughness. A second capacitor electrode layer is formed over the high k capacitor dielectric layer.

In preferred implementations, the technique can be used to yield smooth, continuous dielectric layers in the absence of haze or isolated island-like nuclei.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the invention are described below with reference to the following accompanying drawings.
FIG. 1 is a diagrammatic sectional view of a semiconductor wafer fragment at one processing step in accordance with an aspect of the invention.
FIG. 2 is a view of the FIG. 1 wafer fragment at a processing step subsequent to that shown by FIG. 1.
FIG. 3 is a view of the FIG. 1 wafer fragment at a processing step subsequent to that shown by FIG. 2.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8).

The invention was motivated by problems associated with achieving thin continuous films in deposition of barium strontium titanate (BST) as some or all of the capacitor dielectric region to be received intermediate a pair of capacitor electrodes. Film discontinuity resulted in certain metal organic chemical vapor deposition (MOCVD) processes when film thickness started to fall to at or below 300 Angstroms. Film discontinuity is typically highly undesirable, particularly in capacitors, where fatal plate-to-plate shorts can occur through a discontinuous capacitor dielectric layer.

The existing, not necessarily prior art, process within which these discontinuous films manifested utilized a plurality of MOCVD precursors collectively having barium, strontium and titanium therein. Carrier gases and one or more oxidizers were provided within the reactor with the MOCVD precursors to deposit a desired BST film on a substrate. The oxidizers which were utilized were either 100% O2 or a 50-50 mix of O2 and N2O. Discontinuity in the subject layer manifested in many cases where deposition thickness fell to at or below 300 Angstroms. Such could be determined by scanning electron microscopy, and by measurement of surface roughness, which turned out to be an indication of discontinuity in such films.

As deposition temperatures started to exceed 600[deg.] C. on the susceptor during deposition, discontinuity problems started to disappear at deposition thicknesses at and below 300 Angstroms. However, such higher temperatures tended to result in poorer conformality in the deposition in spite of improved continuity in the deposited layer. Further, such higher temperature depositions started to result in a hazy appearance and roughness in the deposited layer. Roughness either from discontinuity or haze at the higher temperatures tended to be greater than 100 Angstroms, including some around 1000 Angstroms, as determined by average atomic force microscopy RMS roughness.

Such describes the background upon which the invention was motivated.
However, the invention is in no way so limited, with the invention only being limited and defined by the accompanying claims as literally worded and as appropriately interpreted in accordance with the Doctrine of Equivalents. Aspects of the invention are seen applicable to chemical vapor deposition methods of forming high k dielectric layers other than titanates or barium strontium titanate, and other than in the fabrication of capacitors. Further, the invention is perceived and supported by the accompanying claims and in accordance with the Doctrine of Equivalents independent of whether some or all of the above implied objects are achieved unless otherwise literally included in an accompanying specific claim. A preferred embodiment description proceeds with the fabrication of an exemplary capacitor with respect to FIGS. 1-3.

FIG. 1 depicts a wafer fragment 10 comprising a substrate 12. Substrate 12 might comprise one or more layers of, for example, insulative, semiconductor or conductive materials. In the context of this document, the term "semiconductor substrate" or "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. A first conductive capacitor electrode layer 14 is formed over substrate 12. Any suitable conductive material is contemplated, such as by way of example only, conductively doped polysilicon, platinum, titanium nitride and other existing or to-be-developed materials. An exemplary thickness range for layer 14 is from 50 Angstroms to 1000 Angstroms.

Referring to FIG. 2, substrate 10 is positioned within a chemical vapor deposition reactor (not shown). At least one metal precursor and N2O are provided within the reactor under conditions effective to deposit a high k dielectric layer 16 on substrate 10 which comprises oxygen and the metal of the at least one metal precursor. The N2O is present within the reactor during at least a portion of the deposit at greater than or equal to at least 90% concentration by volume as compared with any O2, O3, NO, and NOX which might be injected to within the reactor. Preferably, the N2O is present within the reactor during the portion of the deposit at greater than or equal to at least 95% concentration by volume as compared with any O2, O3, NO, and NOX which might be injected to within the reactor, and more preferably greater than or equal to at least 99% concentration. Even more preferably, the N2O is present within the reactor during the portion of the deposit at 100% concentration as compared to any O2, O3, NO, and NOX, in other words such that the conditions are totally void of injection of any such materials to within the reactor during the portion of the deposit.

In a preferred implementation, the high k dielectric layer comprises a titanate, with BST being a preferred titanate.
Other example preferred high k dielectric materials include:

  SrTiO3 (ST)
  BaTiO3 (BT)
  Pb(Zr,Ti)O3 (PZT)
  (Pb,La)(Zr,Ti)O3 (PLZT)
  SrBi2Ta2O9 (SBT)
  SrBi2Nb2O9 (SBN)
  SrBi2(Nb,Ta)2O9 (SBNT)
  Ta2O5 (also doped Ta2O5, e.g., Ti-doped Ta2O5)
  ZrO2 (also zirconium silicate)
  HfO2 (also hafnium silicate)

For deposition of BST, example precursor combinations, by way of example only, include:

  Ba(thd)2, bis(tetramethylheptanedionate); Sr(thd)2, bis(tetramethylheptanedionate); Ti(thd)2(O-i-Pr)2, (isopropoxide)bis(tetramethylheptanedionate)
  Ba(thd)2, bis(tetramethylheptanedionate); Sr(thd)2, bis(tetramethylheptanedionate); Ti(dmae)4, bis(dimethylaminoethoxide)
  Ba(methd)2, bis(methoxyethoxide, tetramethylheptanedionate); Sr(methd)2, bis(methoxyethoxide, tetramethylheptanedionate); Ti(mpd)(thd)2, bis(methylpentanediol, tetramethylheptanedionate)
  Ba(dpm)2, bis(dipivaloylmethanato); Sr(dpm)2, bis(dipivaloylmethanato); TiO(dpm)2, (titanyl)bis(dipivaloylmethanato)
  Ba(dpm)2, bis(dipivaloylmethanato); Sr(dpm)2, bis(dipivaloylmethanato); Ti(t-BuO)2(dpm)2, (t-butoxy)bis(dipivaloylmethanato)
  Ba(dpm)2, bis(dipivaloylmethanato); Sr(dpm)2, bis(dipivaloylmethanato); Ti(OCH3)2(dpm)2, (methoxy)bis(dipivaloylmethanato)

Adducts (e.g., tetraglyme, trietherdiamine, pentamethyldiethylenetriamine), solvents (e.g., butyl acetate, methanol, tetrahydrofuran), and/or other materials might be utilized with the precursors.

Conductive or dielectric barrier, or other materials, might be provided over electrode layer 14 prior to deposition of layer 16. Further, conductive or dielectric barrier, or other layers, might be provided over high k dielectric layer 16 after its formation prior to fabrication of a subsequent capacitor electrode layer.

The portion of the deposit having the stated high concentration(s) of N2O might be a small portion, a majority portion, or all of the deposit which forms the exemplary depicted layer 16. Further, the conditions during the deposit might comprise at least one of plasma or remote plasma conditions, or be void of any plasma or remote plasma. Preferably, deposition thickness for layer 16 is no greater than 500 Angstroms, more preferably no greater than 300 Angstroms, and even more preferably no greater than 200 Angstroms.

By way of example only, and where the precursors include metal organic precursors, example flow rates for the various such precursors include anywhere from 10 mg/min. to 1000 mg/min. of liquid feed to any suitable vaporizer. Preferred N2O flows include from 100 sccm to 4000 sccm, more preferably between 500 sccm and 2000 sccm, and most preferably between 750 sccm and 1250 sccm. Such flow rates and reduction-to-practice of the invention are with respect to an Applied Materials Centura Frame(TM) processor. A preferred pressure range is from 100 mTorr to 20 Torr, with a range of from 1 Torr to 6 Torr being more preferred. Susceptor temperature is preferably from 400[deg.] C. to 700[deg.] C., with less than or equal to 550[deg.] C.
being even more preferred for attaining continuity in the deposited layer at thicknesses at or below 200 Angstroms, and preferably at least down to 50 Angstroms. Most preferably, susceptor temperature is kept at less than or equal to 550[deg.] C. during all of the deposit to form layer 16 regardless of whether the N2O is present at the stated concentration(s) during all of the deposit.

The conditions are also preferably effective to form an outermost surface of the high k dielectric layer at conclusion of the portion of the deposit to have a roughness of no greater than 20 Angstroms as determinable by average atomic force microscopy RMS roughness, and more preferably no greater than 15 Angstroms. FIG. 2 depicts high k dielectric layer 16 as having an outer surface 18. FIG. 3 depicts formation of a second capacitor electrode layer 20 over high k capacitor dielectric layer 16, and in the preferred embodiment as shown on (in contact with) outermost surface 18. An example thickness for layer 20 is 200 Angstroms. Again as alluded to above, alternate processing is contemplated whereby conductive or dielectric barrier or other materials might be provided intermediate high k capacitor dielectric layer 16 and second capacitor plate 20. Further, surface 18 may or may not comprise the desired stated roughness, depending perhaps on whether the preferred high concentration(s) of N2O existed at the conclusion of the deposit to form layer 16.

Preferably, the thickness of total capacitor dielectric material intermediate first capacitor electrode layer 14 and second capacitor electrode layer 20 is no greater than 500 Angstroms, more preferably no greater than 300 Angstroms, and even more preferably no greater than 200 Angstroms.

A reduction-to-practice example in the Applied Materials Centura Frame processor included a susceptor temperature of 500[deg.] C. and chamber pressure of 2 Torr. Precursor flows to the vaporizer were Ba(thd)2 and Sr(thd)2 at 25 mg/min., Ti(thd)2 (O-i-Pr)2 at 85 mg/min., Ar to the vaporizer at 150 sccm, Ar as a carrier from the vaporizer at 200 sccm, and N2O to the reactor at 1200 sccm. The precursor liquid ampoules were at room temperature, and the vaporizer at 280[deg.] C. Deposition rate was at approximately 20 Angstroms/minute to form a 250 Angstrom thick layer. The produced film was continuous and conformally deposited, having a determined surface roughness of 10 Angstroms.

In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents. |
A system and method are disclosed for providing highly parallel, FFT calculations in a circuit including a plurality of RADIX-2 elements. Partitioned RAM resources allow RADIXes at all stages to have optimal bandwidth memory access. Preferably more memory is made available for early RADIX stages and a "critical" stage. RADIXes within stages beyond the critical stage preferably each need only a single RAM partition, and can therefore simultaneously operate without fighting for memory resources. In a preferred configuration having P RAM partitions and P RADIX stages, the critical stage is stage number log2P, and until the critical stage, only P/2 RADIX elements can simultaneously operate within each stage. After the critical stage, all RADIXes within each stage can simultaneously operate. |
What is claimed is: 1. In an integrated circuit device comprising a memory resource and a plurality of circuit stages, each one of said stages including a plurality of RADIX-2 elements, a method of efficient simultaneous operation of the RADIX-2 elements to enable a Fast Fourier Transform (FFT) calculation, the method comprising the steps of: dividing the memory resource into a plurality "P" of memory partitions, where "P" is the total number of RADIX-2 elements in the plurality of circuit stages; designating a selected circuit stage as a critical stage; assigning at least two of said memory partitions to each of the plurality of RADIX-2 elements utilized in a first stage, and repeating said assigning step for each stage up to but not including said critical stage; and assigning a separate single memory partition to each of the plurality of RADIX-2 elements utilized in said critical stage; said RADIX-2 elements utilized in said pre-critical stages each accessing said at least two of said memory partitions, thereby simultaneously operating a plurality of RADIX-2 elements to enable an FFT calculation. 2. The method of claim 1, wherein each said stage has a position, and the position of the critical stage equals log2 P. 3. In an integrated circuit device comprising a memory resource and a plurality of circuit stages, each one of said stages including a plurality of RADIX-2 elements, a system for efficient simultaneous operation of the RADIX-2 elements to enable a Fast Fourier Transform (FFT) calculation, the system comprising: means for dividing the memory resource into a plurality "P" of memory partitions, where "P" is the total number of RADIX-2 elements in the plurality of circuit stages; means for designating a selected circuit stage as a critical stage; means for assigning at least two of said memory partitions to each of the plurality of RADIX-2 elements utilized in a first stage, and repeating said assignment for each stage up to but not including said critical stage; and means for assigning a single memory partition to each of the plurality of RADIX-2 elements utilized in said critical stage; said RADIX-2 elements utilized in said pre-critical stages each accessing said at least two of said memory partitions, thereby simultaneously operating a plurality of RADIX-2 elements to enable an FFT calculation. 4. The system of claim 3, wherein each said stage has a position, and the position of the critical stage equals log2P. |
FIELD OF THE INVENTION

The present invention relates generally to the field of digital signal processing (DSP) in field programmable gate arrays (FPGAs) and more specifically to a method of computing large Fast Fourier Transforms (FFTs) using RADIX-2 elements, through efficient utilization of distributed memory resources.

BACKGROUND OF THE INVENTION

The use of FPGAs for carrying out high speed arithmetic computations has gained recognition in recent years. FPGA architectures including logic blocks having a plurality of look-up-table (LUT) function generators, such as the XC4000(TM) family of devices from XILINX, Inc. (the assignee of the present invention), are particularly suited for such computations. However, many of the important digital signal processing (DSP) algorithms are multiply-intensive, and even FPGAs having a large number of logic blocks and LUTs normally cannot embed the multiplier circuits and the attendant control and support circuits in a single chip. It is therefore incumbent upon the designer to choose efficient DSP algorithms and to realize them with efficient circuit designs.

The Fast Fourier Transform (FFT) is an outstanding example of an efficient DSP algorithm. Distributed arithmetic (DA) is a well-established design approach for DSP implementation in FPGAs that replaces gate-consuming array multipliers with more efficient shift and add circuits offering comparable performance. The FFT is a highly efficient procedure for computing the Discrete Fourier Transform (DFT) of a sampled time series. The DFT, taken from a continuous waveform, is derived from and closely related to the Fourier transform and is particularly useful for digital power spectrum analysis and filtering. The FFT takes advantage of the fact that the coefficients of the DFT can be calculated iteratively, which results in a considerable savings of computation time and a substantial performance advantage over the DFT.

Distributed Arithmetic (DA) was developed as an efficient computation scheme for DSP utilizing FFTs. The DA computation algorithm is now being effectively applied to embed DSP functions in FPGAs, particularly those with coarse-grained look-up table architectures, as described in U.S. Pat. No. 6,021,423. DA enables the replacement of the array multiplier, central to many DSP applications, with a gate-efficient serial/parallel multiplier, with little or no reduction in speed. U.S. Pat. No. 6,021,423 discloses a space-efficient DA implementation of a DSP circuit implemented in an FPGA using FFTs. In the disclosed circuit, time-invariant systems are implemented using a 16-word, SRAM-based DA look-up table (DALUT). The DALUT contains the pre-computed values of all possible sums of coefficients, weighted by binary values of serial input data. Additional RAM resources are required for the large sine/cosine basis function database. These memory requirements are accommodated using a DALUT containing the pre-computed sums of partial products for combinations of input variables Xrm, Xim, Xrn, Xin and θk, as illustrated in FIG. 1. The highly space-efficient implementation of a RADIX-2 circuit, illustrated in FIG. 1 and described in U.S. Pat. No. 6,021,423, allows for the implementation of complex FFT circuitry in a single programmable logic device. While the implementation disclosed in the parent case provides a number of significant advantages over the prior art, there remains a need to increase the speed of circuits that benefit from the use of a plurality of RADIX-2 implementations.
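To make the distributed-arithmetic idea referenced above concrete, a minimal sketch follows (Python, with illustrative names and bit widths; nothing here is taken from U.S. Pat. No. 6,021,423). It computes a dot product with a precomputed look-up table plus shift-adds in place of an array multiplier, which is the essence of the DALUT approach:

    # Minimal distributed-arithmetic (DA) sketch: the dot product
    # y = sum(c[k] * x[k]) is computed with a precomputed LUT and
    # shift-adds instead of array multipliers. Names and the 8-bit
    # input width are illustrative assumptions.

    def build_dalut(coeffs):
        # Entry i holds the sum of the coefficients whose select bit in i is 1.
        n = len(coeffs)
        return [sum(c for k, c in enumerate(coeffs) if (i >> k) & 1)
                for i in range(1 << n)]

    def da_dot_product(coeffs, xs, bits=8):
        # xs are unsigned 'bits'-wide integers processed one bit per cycle
        # (LSB first); each cycle is one LUT read and one shift-add.
        lut = build_dalut(coeffs)
        acc = 0
        for b in range(bits):
            addr = 0
            for k, x in enumerate(xs):
                addr |= ((x >> b) & 1) << k
            acc += lut[addr] << b
        return acc

    coeffs = [3, -5, 7, 2]
    xs = [10, 4, 255, 33]
    assert da_dot_product(coeffs, xs) == sum(c * x for c, x in zip(coeffs, xs))

The LUT grows as 2^n in the number of inputs n, which is why DA favors small, fixed coefficient sets such as the sine/cosine terms of a RADIX-2 butterfly.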
The need for multiple RADIXes is apparent from a time series containing N = 2^s samples or "points" (where s is the number of stages), wherein the corresponding FFT entails 2sN = 2N log2 N multiply operations. For a complete N = 1024 point FFT operation on 1024 time-points, a total of 5120 (N/2 * log2 N = 512 * 10) RADIX-2 operations will be required. If only one RADIX-2 is used, since two cycles are required for each RADIX-2 operation, the total time required will be 10240 (5120 * 2) cycles. To reduce the number of cycles required for FFT calculations, it appears one need only increase the number of RADIX-2 elements in the circuit and use them simultaneously in each stage. However, two cycles (assuming dual-port RAM is used) are also needed to read and write variables to and from memory, and RAM read and write operations are required for every FFT function, even if additional RADIXes are used. Thus, where only a single RAM is available, there is little, if any, gain in implementation speed from the use of more than one RADIX-2 in a stage. A bottleneck in the data-rate from and to the RAM will retard the function of the circuit. Thus, using k RADIXes in a particular stage does not necessarily provide for k-times speedup of FFT calculations over a single-RADIX implementation. There is therefore a need in the art to which the present invention pertains to optimize FFT implementation for simultaneous use of a plurality of RADIXes.

SUMMARY OF THE INVENTION

To address the shortcomings of the available art, the present invention provides a method and system for partitioning RAM resources in such a manner as to optimize memory access bandwidth in a multi RADIX-2 system for FFT calculation. This enables system speed to increase in nearly direct proportion to the increase in processing speed provided by the addition of a plurality of RADIXes to the circuit. In a preferred embodiment, a plurality of memory partitions are provided for RADIXes at early stages in the circuit up to, but not including, a critical stage (pre-critical stages), while only a single RAM partition is required for each RADIX-2 in stages at and beyond the critical stage. In the preferred embodiment, a plurality of memory partitions are accessed by p/2 RADIXes in pre-critical stages of the circuit, while only a single RAM partition is accessed by all of the p RADIXes in stages at and beyond the critical stage. Multiplexing resources are preferably structured to reflect the RAM partition and RADIX interaction for each stage.
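As a quick arithmetic check of the single-RADIX counts quoted in the background above, the following sketch (variable names are ours, not the patent's) reproduces the 5120-operation and 10240-cycle figures for N = 1024:

    # One RADIX-2 butterfly per pair of points per stage, and two
    # cycles per butterfly (dual-port RAM read plus write).
    from math import log2

    N = 1024
    s = int(log2(N))                  # 10 stages
    butterflies = (N // 2) * s        # 512 * 10 = 5120 RADIX-2 operations
    cycles_single_radix = 2 * butterflies
    print(butterflies, cycles_single_radix)   # 5120 10240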
It is therefore a first advantage of the present invention to provide a method and system for designing a circuit for providing efficient simultaneous operation of a plurality of RADIX-2 elements to enable a Fast Fourier Transform (FFT) calculation, the circuit being implemented in a logic device including programmable resources for implementing the circuit, the method comprising the steps of (and the system comprising means for) assigning a plurality of memory resources, each designated Rvw, to a plurality of RADIX-2 elements, each designated Xab, assigning a first multiplexing means to selectively forward data to at least one of the RADIX-2 elements from at least one of the plurality of memory resources, assigning a second multiplexing means to selectively forward data from at least one of the RADIX-2 elements to at least one of the plurality of memory resources, whereby a RADIX-2 Xab receives data from memory resources Rvw such that (vw = ab) or vw can be derived by changing either a or b from 0 to 1, and a memory resource Rvw receives data from a RADIX-2 Xab whose ab is such that (ab = vw) or ab can be derived by changing either v or w from 1 to 0.

It is a further advantage of the present invention to provide a method and system for designing a circuit for providing efficient simultaneous operation of a plurality of RADIX-2 elements, the circuit being implemented in a logic device including programmable resources for implementing the circuit, the circuit processing N time samples at P RADIX-2 elements within S stages, the method comprising the steps of selecting N, P, and S for implementation of the circuit in a first pre-selected programmable logic device, the selection of N, P, and S designating a total area required for the circuit implementation, calculating a calculation time required to perform calculations within the circuit having N samples, P elements and S stages, and modifying any of N, P, and S to change either the total area required or the calculation time of the circuit implementation. This method can be further characterized where P/2 elements are utilized at stages zero through [log2 P - 1] and P elements are utilized at stages log2 P through S.

A still additional advantage of the present invention is the provision in an integrated circuit device comprising a memory resource and a plurality of RADIX-2 stages, each one of the stages including a plurality of RADIX-2 elements, of a method and system for efficient simultaneous operation of the plurality of RADIX-2 elements to enable a Fast Fourier Transform (FFT) calculation, the method comprising the steps of dividing the memory resource into a plurality "P" of memory partitions, designating a selected stage as a critical stage, assigning at least two of the memory partitions to each of the plurality of RADIX-2 elements within a first stage, and repeating the assigning step for each stage up to but not including the critical stage, assigning a single memory partition to each of the plurality of RADIX-2 elements within the critical stage, the RADIX-2 elements within the pre-critical stages each accessing the at least two of the memory partitions, thereby providing efficient, simultaneous operation of the plurality of RADIX-2 elements. This method can be further characterized wherein each said stage has a position, and the position of the critical stage equals log2 P.
This method can be further characterized as having P/2 RADIX-2 elements contained within the first stage and each said pre-critical stage up to but not including the critical stage.

BRIEF DESCRIPTION OF THE DRAWINGS

The aforementioned advantages of the present invention as well as additional advantages thereof will be more clearly understood hereinafter as a result of a detailed description of a preferred embodiment of the invention when taken in conjunction with the following drawings.
FIG. 1 is an FFT computation flow diagram first disclosed in U.S. Pat. No. 6,021,423.
FIG. 2 abstractly illustrates a four-stage, multi-RADIX, DA circuit and the access to memory resources required for each of the RADIXes in the circuit.
FIGS. 3A, 3B and 3C illustrate the interactions of RAM partitions and RADIX-2 implementations in a two-partition, a four-partition, and an eight-partition implementation, respectively, of a 16-point FFT utilizing the method and system of the present invention.
FIG. 4 illustrates the relationship between FFTs and RAM partitions in a preferred embodiment of the present invention.
FIG. 5 illustrates a preferred relationship between multiplexers (MUXes) and RAM partitions in the present invention.

DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT

An s-stage, 16-point RADIX-2 based FFT structure is illustrated in FIG. 2 (where s = 4), wherein each column represents a calculation stage and each crossed-line pair 10 (eight in each column) represents a RADIX-2 for calculating a two-point FFT. We know from observing the illustrated structure that the more advanced the stage, the smaller the memory which interacts with the RADIXes within the stage. Also, for a particular partitioning of the available RAM, there exists a critical stage ("cstage" at position Scstage), at and beyond which p RADIX-2 structures within every stage can operate simultaneously (i.e., data is never required from multiple memory partitions by a single RADIX-2 operation). The critical stage and p are directly related in that p = 2^cstage. Until the critical stage, only p/2 RADIXes can operate in a parallel fashion.

The RAM-RADIX interactions of the present invention can be better understood with reference to FIGS. 3A-3C, illustrating the cases where p (representing both the number of RADIXes within each stage and the number of RAM partitions interacting with each stage) equals two (FIG. 3A), four (FIG. 3B), and eight (FIG. 3C). Preferably, for ease of RAM access, only RADIXes with their kth bits at 0 are used, thereby utilizing only p/2 RADIXes, in the pre-critical stages. Also, any RADIX labeled as Xabc, for example, preferably interacts with partition Rabc and all other RAM-partitions that can be addressed by changing any one of the sub-indices a, b, or c from 0 to 1. So, RADIX X000 in FIG. 3C interacts with RAM partitions R000 and R100 (in stage 0), R010 (in stage 1), R001 (in stage 2), and R000 alone (in stages 3 and beyond). The RAM-partition interfaces with the RADIXes can be similarly structured.

In the preferred embodiment of the present invention, two multiplexer (MUX) elements are needed, one for reading from the RAM and another for writing to the RAM. Preferably, dual-port RAM is utilized to minimize read and write delays. The input-size of a MUX varies from 1 to (b+1), where b = log2 p. The average MUX input-size is therefore (1 + [b/2]). For example, if p = 4 and N = 16, as illustrated in FIGS. 2 and 3B, we have four RADIXes labeled X00, X01, X10, X11, and four RAM-partitions, R00, R01, R10, R11.
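The labeling rule just described fits in a few lines of code. The sketch below (hypothetical helper name; the stage and bit indexing are inferred from the X000/R100 example above) reproduces the FIG. 3C interactions for p = 8:

    # Sketch of the RAM-partition interaction rule: a RADIX X<bits>
    # touches its own partition R<bits> plus, in pre-critical stage k,
    # the partition reached by raising the k-th label bit (counted
    # from the left) from 0 to 1.
    from math import log2

    def interactions(p, stage):
        b = int(log2(p))              # label width; critical stage = log2(p)
        out = {}
        for r in range(p):
            lbl = format(r, '0%db' % b)
            if stage < b:             # pre-critical: only RADIXes with bit k == 0 run
                if lbl[stage] == '0':
                    partner = lbl[:stage] + '1' + lbl[stage + 1:]
                    out['X' + lbl] = ['R' + lbl, 'R' + partner]
            else:                     # critical stage and beyond: all p RADIXes run
                out['X' + lbl] = ['R' + lbl]
        return out

    # Reproduces the FIG. 3C example for X000 (p = 8):
    assert interactions(8, 0)['X000'] == ['R000', 'R100']
    assert interactions(8, 1)['X000'] == ['R000', 'R010']
    assert interactions(8, 2)['X000'] == ['R000', 'R001']
    assert interactions(8, 3)['X000'] == ['R000']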
The preferred RADIX/RAM interaction is provided below for each of the four stages, and is illustrated in FIG. 4.

Stage 00: RADIXes used: X00 and X01 (those with 1st bit 0). Interaction: X00 <-> (R00, R10); X01 <-> (R01, R11).
Stage 01: RADIXes used: X00 and X10 (those with 2nd bit 0). Interaction: X00 <-> (R00, R01); X10 <-> (R10, R11).
Stage 10: RADIXes used: X00, X01, X10, X11 (all four). Interaction: X00 <-> R00; X01 <-> R01; X10 <-> R10; X11 <-> R11.
Stage 11: RADIXes used: X00, X01, X10, X11 (all four). Interaction: X00 <-> R00; X01 <-> R01; X10 <-> R10; X11 <-> R11.

A preferred embodiment of the present invention therefore requires multiplexing data between the RADIXes and RAM partitions. For the case illustrated in FIG. 4, the MUX settings for reading and writing to and from the RAM partitions are illustrated in FIG. 5. Looking first at the right hand column of FIG. 5, to read, we have [R00 R01 R10] -> X00 at MUX setting 30, [R01 R11] -> X01 at MUX setting 32, [R10 R11] -> X10 at MUX setting 34, and R11 -> X11 at MUX setting 36. Thus, we can see that an Xab element gets data from those Rvw elements whose vw are such that vw = ab, or vw can be derived by changing either a or b from 0 to 1. The MUXing required to write into the RAM partitions is similarly given by X00 -> R00, [X00 X01] -> R01, [X00 X10] -> R10, and [X01 X10 X11] -> R11, as illustrated in the left column of FIG. 5. In this case, an Rvw gets data from those Xab whose ab are such that ab = vw, or ab can be derived by changing either v or w from 1 to 0. The addresses within each accessed RAM are generated by a counter (not shown, having only two bits in the example of FIG. 5).

Next, we can calculate the area and speed advantages provided by the preferred embodiment of the present invention, wherein the parallel, DALUT-based implementation of U.S. Pat. No. 6,021,423 is used within each RADIX-2. Also, we assume an N-point FFT where the data is b bits wide and the sine/cosine values (of θk) are c bits wide. The DALUT addressing is accomplished with k = log2(N/2) bits. Further, we assume p partitions, i.e., there are p RADIXes available for any stage, and p RAM partitions preferably, but not necessarily, contained within a single block RAM. The number of stages to perform the N-point FFT is s = log2 N. First we consider increased speed. We will be able to use p/2 RADIXes for log2 p stages and p RADIXes for the rest of the stages, beginning with the critical stage. Thus, the total number of cycles required will be 2*[(N/p)*log2 p + (N/2p)*(s - log2 p)]. Next, we consider the area required for the implementation of the circuit of the present invention in an FPGA. In the parent application, it was shown that f = (cd + d/4 + bc + (b-1)c + 2b) Configurable Logic Blocks (CLBs) are required to implement a single, parallel RADIX-2 FFT implementation. For p RADIXes, therefore, the total number of CLBs required will be p*f. Thus, the speed vs. area trade-off for p RADIXes, representing the optimal performance ratio that can be achieved using p RADIX-2 FFTs, is the relative speed N*s / (2*[(N/p)*log2 p + (N/2p)*(s - log2 p)]) = p*s/(s + log2 p) divided by the relative area p, i.e., s/(s + log2 p). The speed and area performance levels for an 8192-point and 1024-point RADIX implementation are provided in Tables 1 and 2, respectively, illustrating the trade-off that the user can engage in when designing a highly parallel RADIX-2 FFT implementation using the present invention.
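As a cross-check of the cycle formula above, the short sketch below (illustrative code, not from the patent) reproduces the cycle counts and relative speeds listed in Table 1, which follows:

    # 2 * ((N/p) * log2(p) + (N/(2p)) * (s - log2(p))) cycles with p
    # RADIXes, versus N * s cycles for a single RADIX.
    from math import log2

    def cycles(N, p):
        s = int(log2(N))
        b = int(log2(p))
        if p == 1:
            return N * s
        return 2 * ((N // p) * b + (N // (2 * p)) * (s - b))

    N = 8192
    for p in (1, 2, 4, 8):
        print(p, cycles(N, p), round(cycles(N, 1) / cycles(N, p), 1))
    # -> 1 106496 1.0 / 2 57344 1.9 / 4 30720 3.5 / 8 16384 6.5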
TABLE 1
8192-Point RADIX

  No. of RADIXes      Area = No. of CLBs        Total No. of Cycles
  avail. per stage    (Rel. Size in Parens.)    (Relative Speed in Parens.)
  1                   723 (1X)                  106496 (1F)
  2                   1446 (2X)                 57344 (1.9F)
  4                   2892 (4X)                 30720 (3.5F)
  8                   5784 (8X)                 16384 (6.5F)

The present invention therefore comprises a novel method and system for utilizing multiple RADIX-2 elements using partitioned RAM resources. Where, as in most FPGAs, on-chip RAMs are distributed, there is no performance penalty for using partitioned RAM instead of single block RAM with the present invention, although either approach or any equivalent is envisioned. The inventive approach provides a mechanism for efficiently exploiting the area-performance trade-off available through FFTs. Using the inventive method and system, one can generate an FFT calculation circuit to suit a broad range of area and speed requirements, even where multiple RADIX-2s are required. It will be understood that the disclosed embodiments are of an exemplary nature and that the inventive method and system are conducive to even larger and more complex computations not explicitly disclosed herein. Accordingly, the present invention is to be limited only by the appended claims and their equivalents. |
Embodiments disclosed herein include optical packages. In an embodiment, an optical package comprises a package substrate and a compute die on the package substrate. In an embodiment, an optics die is on the package substrate, and an integrated heat spreader (IHS) is over the compute die and the optics die. In an embodiment, channels are disposed on a surface of the IHS facing the package substrate. |
An optical package, comprising:a package substrate;a compute die on the package substrate;an optics die on the package substrate; andan integrated heat spreader (IHS) over the compute die and the optics die, wherein channels are disposed on a surface of the IHS facing the package substrate.The optical package of claim 1, wherein the channels comprise:a first channel running in a first direction; anda second channel running in a second direction that is substantially orthogonal to the first direction.The optical package of claim 1, wherein the channels comprise:a first ring channel; anda second ring channel surrounding the first ring channel.The optical package of claim 1, wherein the channels comprise:a first channel extending in a first direction;a second channel intersecting the first channel, wherein the second channel has a first width; anda third channel intersecting the first channel, wherein the third channel has a second width.The optical package of claim 1, 2, 3 or 4, wherein the package substrate comprises a recess along an edge.The optical package of claim 5, wherein the channels are positioned over the recess in the package substrate.The optical package of claim 1, 2, 3, 4, 5 or 6, further comprising:an optical connector coupled to the optics die.The optical package of claim 7, further comprising:a sealant around the optical connector, wherein the sealant at least partially fills the channels.The optical package of claim 1, 2, 3, 4, 5, 6, 7 or 8, further comprising:a channel in the package substrate.The optical package of claim 9, wherein the channel is between the compute die and the optics die.A method of fabricating an optical package, the method comprising:coupling a compute die on a package substrate;coupling an optics die on the package substrate; andcoupling an integrated heat spreader (IHS) over the compute die and the optics die, wherein channels are disposed on a surface of the IHS facing the package substrate.The method of claim 11, wherein the channels comprise:a first channel running in a first direction; anda second channel running in a second direction that is substantially orthogonal to the first direction.The method of claim 11, wherein the channels comprise:a first ring channel; anda second ring channel surrounding the first ring channel.The method of claim 11, wherein the channels comprise:a first channel extending in a first direction;a second channel intersecting the first channel, wherein the second channel has a first width; anda third channel intersecting the first channel, wherein the third channel has a second width.The method of claim 11, 12, 13 or 14, wherein the package substrate comprises a recess along an edge. |
TECHNICAL FIELD

Embodiments of the present disclosure relate to optical packaging systems, and more particularly to multi-chip optical packages.

BACKGROUND

Bandwidth increases in computing platforms have resulted in a switch from electrical signals to optical signals. In an optical platform, a compute die is communicatively coupled to a plurality of optics dies. The optics dies are configured to convert signals between electrical and optical regimes. In some instances, optical connectors are attached to the optics dies. The optical connectors house optical fibers over which the optical signals are propagated. The ends of the optical fibers are exposed and fit into V-grooves on the optics die.

The use of optical connectors is not without issue. One issue is that optical connectors result in incomplete encapsulation of the system. This may raise reliability concerns. For example, incomplete encapsulation will allow moisture to enter the package, and ice can form when temperatures go below 0°C. When the ice melts, short circuits may occur. Another issue is that existing optical connectors may not be compatible with high volume manufacturing (HVM). Exposed optical fibers are vulnerable to mechanical shock during handling and thermal shock during solder reflow. Additionally, the tool for mounting the optical connector may require several pick-up tool tips in order to complete the mounting.

Yet another drawback of existing optical systems is the uncontrolled flow of underfill and encapsulation materials. For example, multi-chip packages require controlled underfills. Additionally, encapsulation materials also need precise control in order not to spread beyond desired areas. Currently, the location and uniformity of a material with a certain viscosity are hard to control by just dispensing it on planar surfaces.

Yet another drawback of existing optical systems is the need for a large external fiber shuffler for routing the optical fibers. The large footprint of fiber shufflers occupies a large area on the package substrate and/or the board.
As such, valuable real estate is lost to optical routing.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1A is a plan view illustration of an optical package from below with the package substrate being transparent in order to see the dies and the optical connectors, in accordance with an embodiment.
Figure 1B is a plan view illustration of an optical package from below that shows lids over recesses in the package substrate, in accordance with an embodiment.
Figure 2A is a cross-sectional illustration of an optical package that shows a lid with clips, in accordance with an embodiment.
Figure 2B is a cross-sectional illustration of an optical package that shows a lid with clips that is substantially coplanar with the package substrate, in accordance with an embodiment.
Figure 2C is a cross-sectional illustration of an optical package that shows a lid with spring clips, in accordance with an embodiment.
Figure 3 is a cross-sectional illustration of an optical package with a lid that is adhered to an integrated heat spreader (IHS) with an adhesive, in accordance with an embodiment.
Figure 4A is a perspective view illustration of a recess in the package substrate and magnets provided on the IHS, in accordance with an embodiment.
Figure 4B is a perspective view illustration of a lid that can be attached to the IHS using magnets, in accordance with an embodiment.
Figures 5A and 5B illustrate a process for fully encapsulating the recess with a sealant, in accordance with an embodiment.
Figures 6A and 6B illustrate a process for fully encapsulating the recess with a sealant using a lid with protrusions, in accordance with an embodiment.
Figures 7A and 7B illustrate a process for fully encapsulating the recess with a sealant using a lid that interfaces with protrusions on the IHS, in accordance with an embodiment.
Figure 8A is a perspective view of an optical connector, in accordance with an embodiment.
Figure 8B is a perspective view of an optical connector with a fiber distribution housing, in accordance with an embodiment.
Figure 8C is a perspective view of a pick-and-place tool for connecting the optical connector to an optics die, in accordance with an embodiment.
Figures 9A-9E are illustrations depicting the control of a fluid on an optics package using micro channels, in accordance with an embodiment.
Figure 9F is an illustration of a V-groove that comprises micro channels for controlling the flow of an epoxy for securing a fiber, in accordance with an embodiment.
Figure 10A is a perspective view illustration of an optics package with a recess in the package substrate to expose a portion of the IHS, in accordance with an embodiment.
Figure 10B is an illustration of the IHS with micro channels for controlling the flow of a sealant, in accordance with an embodiment.
Figure 10C is an illustration of a package substrate with micro channels to control the flow of underfill material around the compute die and the optics dies, in accordance with an embodiment.
Figure 11A is a perspective view illustration of an optical connector with integrated fiber shuffle coupling to an optical coupler on an optical chip, in accordance with an embodiment.
Figure 11B is a perspective view illustration of a fiber array unit (FAU) of the optical connector, in accordance with an embodiment.
Figure 11C is a perspective view illustration of a fiber shuffler of the optical connector, in accordance with an embodiment.
Figure 11D is a perspective view illustration of a ferrule of the optical connector, in accordance with an embodiment.
Figure 11E is a
perspective view illustration of an optical package with optical connectors, in accordance with an embodiment.
Figure 12 is a perspective view illustration of an optics system, in accordance with an embodiment.
Figure 13 is a schematic of a computing device built in accordance with an embodiment.

EMBODIMENTS OF THE PRESENT DISCLOSURE

Described herein are multi-chip optical packages, in accordance with various embodiments. In the following description, various aspects of the illustrative implementations will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that the present invention may be practiced with only some of the described aspects. For purposes of explanation, specific numbers, materials and configurations are set forth in order to provide a thorough understanding of the illustrative implementations. However, it will be apparent to one skilled in the art that the present invention may be practiced without the specific details. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative implementations.

Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present invention; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.

As noted above, multi-chip optical packages suffer from various integration issues. Accordingly, embodiments disclosed herein include various architectures for improving the reliability and ease of assembly, as well as providing a decreased footprint. For example, reliability may be improved by sealing the optical connectors. Ease of assembly may be improved by providing micro channels and/or by embedding the optical fibers. The footprint may be decreased by providing a shuffler with a reduced footprint.

Particularly, in some embodiments, the optical connectors are fully encapsulated. The full encapsulation may prevent reliability concerns, such as water in the package resulting in short circuits. The full encapsulation is achieved by providing a lid over the optical interconnects. The lid may cover a recessed portion of the package substrate. The lid may be clipped or adhered to the integrated heat spreader (IHS) with an adhesive or by magnets. The lid may be further improved by providing a sealant. The sealant fills the cavity and surrounds the ferrules to provide a complete seal at the edge of the optical package.

In some embodiments, ease of assembly is increased by providing micro channels in various surfaces. The micro channels can be used to direct and/or restrict flow of dispensed materials, such as epoxies, underfills, and sealants. For example, the micro channels may be provided on package substrates, the IHS, or the V-grooves on the optics die. Various micro channel designs can be used to promote the flow of the dispensed fluid in a certain direction and/or restrict flow of the dispensed fluid.

In some embodiments, ease of assembly and reliability are further improved by providing the fibers in a molded housing. Particularly, the ferrule, a fiber distribution housing, and a fiber holder are molded into a single component.
Molding into a single component allows for a single tip in a pick-and-place tool to be used to insert the fibers into the V-grooves on the optics die. Additionally, the molding protects the fibers from thermal and mechanical shock.

In some embodiments, the footprint of the optical system is reduced by utilizing an improved fiber shuffler. In a fiber shuffler design, the optical fibers are coupled to grating couplers on the optics die. The optical fibers are bent by a fiber array unit (FAU) and fed into a fiber shuffler that has V-grooves with different depths. After exiting the fiber shuffler, a ferrule aligns the fibers into a plurality of columns. As such, a single row at a first end of optical fibers can be rerouted to form a plurality of columns at a second end of the optical fibers.

Referring now to Figure 1A, a plan view illustration of an optical package 100 is shown, in accordance with an embodiment. In an embodiment, the optical package 100 is shown from below with the package substrate being transparent for clarity. As shown, the optical package 100 comprises a compute die 105 and a plurality of optics dies 106. The compute die 105 may be any die, such as a processor, a graphics processor, or the like. In an embodiment, the optics dies 106 provide functionality for converting between optical and electrical signals. The optics dies 106 may be communicatively coupled to the compute die 105 by a bridge or other interconnect on the package substrate (not shown).

In an embodiment, optical connectors 120 may be coupled to the optics dies 106. The optical connectors 120 may comprise a plurality of fibers 122 and a ferrule 121. For example, the plurality of fibers 122 may comprise twenty-four fibers in some embodiments. The fibers 122 may be secured into V-grooves on the optics dies 106.

As shown in Figure 1A, an integrated heat spreader (IHS) 115 is provided over top surfaces of the compute die 105 and the optics dies 106. That is, in the view of Figure 1A, the IHS 115 is below the compute die 105 and the optics die 106. In an embodiment, the IHS 115 also covers portions of the optical connectors 120. In an embodiment, the ferrules 121 may extend out past an edge of the IHS 115.

Referring now to Figure 1B, a plan view illustration of the bottom of the optical package 100 showing the package substrate 116 is shown, in accordance with an embodiment. In an embodiment, the package substrate 116 may have an H-shaped footprint. That is, recesses 119 may be provided along opposite edges of the package substrate 116. In an embodiment, the optical connectors 120 are provided in the recesses 119. As shown in Figure 1B, the recesses 119 are covered by a lid 123. The lid 123 provides a seal around the bottom of the optical connectors 120.

The lid 123 may comprise any number of architectures. For example, the lid 123 may be a clip-on component. In other embodiments, the lid 123 may be secured to the IHS 115 by an adhesive or by magnets.

Examples of various lid 123 architectures that include clip-on architectures are shown in Figures 2A-2C. As shown in Figure 2A, the optical package 200 comprises a package substrate 216 and an IHS 215. Solder balls 217 or other interconnects may be provided on the package substrate 216. In an embodiment, the lid 223 comprises a plate 224 and clips 225. The plate 224 seals an opening (i.e., the recess) in the package substrate 216. The clips 225 may secure the plate 224 to the package substrate 216.
In an embodiment, the plate 224 and the clips 225 may comprise a plastic material or they may comprise metallic materials. In the illustrated embodiment, the plate 224 sits above a surface of the package substrate 216.

Referring now to Figure 2B, a cross-sectional illustration of the optical package 200 with an alternative lid 223 architecture is shown, in accordance with an embodiment. As shown, the plate 224 may be substantially coplanar with a surface of the package substrate 216.

Referring now to Figure 2C, a cross-sectional illustration of the optical package with yet another lid 223 architecture is shown, in accordance with an embodiment. In an embodiment, the lid 223 may comprise spring ends 226 on opposite ends of the plate 224. The spring ends 226 may bend inward to pass through the opening and expand out after passing the opening to secure the lid 223. In an embodiment, the plate 224 and the spring end 226 may comprise a thin metal sheet, though other materials may also be used in accordance with additional embodiments.

Referring now to Figure 3, a cross-sectional illustration of an optical package 300 is shown, in accordance with an additional embodiment. In an embodiment, the optical package 300 comprises a package substrate 316 and an IHS 315. Ferrules 321 are provided on the IHS 315. In an embodiment, a lid 323 provides a seal around the ferrules 321. The lid 323 may comprise a plate 324 with support posts 327. The support posts 327 may extend down to the IHS 315 between the ferrules 321. In an embodiment, the support posts 327 are secured to the IHS 315 by an adhesive 328.

Referring now to Figure 4A, a perspective view illustration of an optical package 400 is shown, in accordance with an embodiment. In an embodiment, the optical package 400 comprises a package substrate 416 with a recess 419. The IHS 415 is shown below the recess 419. Edges of the optics dies 406 are also illustrated in Figure 4A. The optical connectors 420 are shown removed from the recess 419 in order to more clearly illustrate the layout on the IHS 415. As shown, a plurality of magnets 431 and 432 may be provided on the IHS 415. The magnets 431 and 432 may be positioned to interface with support posts of the lid (not shown in Figure 4A). While magnets 431 and 432 are shown in Figure 4A, it is to be appreciated that the magnets may alternatively be formed only on the lid in cases where the IHS 415 is a magnetic or metallic material.

Referring now to Figure 4B, a perspective view illustration of the lid 423 is shown, in accordance with an embodiment. As shown, the lid 423 comprises a plate 424 and support posts 427. Magnets 433 may also be provided along edges of the plate 424. The magnets 433 may align with the positioning of magnets 431 shown in Figure 4A. Additionally, magnets may also be included on the support posts 427 in some embodiments.

It is to be appreciated that the lids described above provide a seal of the bottom surface of the recess in the package substrate. However, the edge surface between the lid and the IHS where the ferrules exit the cavity may not be sealed in some embodiments. In additional embodiments, a sealant may also be included in order to provide a seal around the ferrules. Examples of such embodiments are provided in Figures 5A-7B.

Referring now to Figure 5A, an edge view of the recess in an optical package 500 is shown, in accordance with an embodiment. As shown, the ferrules 521 are set on the IHS 515. A sealant 535 may be disposed around the ferrules 521.
The sealant 535 may be a high-viscosity sealant. For example, the sealant 535 may be an epoxy or the like. In an embodiment, a plate 524 of a lid is brought down into contact with the sealant 535, as indicated by the arrow. In an embodiment, the plate 524 may be part of any of the lids such as those described above. For example, the plate 524 may be clipped or otherwise adhered to the IHS (e.g., with adhesive or magnets). As shown in Figure 5B, the edge of the optical package 500 is completely sealed with the plate 524, the sealant 535, and the ferrules 521 completely filling the edge of the recess. In addition to providing a seal for the recess, the sealant 535 may also prevent the ferrules from moving when mechanical and/or thermal shock is applied to the optical package 500.

Referring now to Figure 6A, an edge view of the recess in an optical package 600 is shown, in accordance with an embodiment. In an embodiment, the plate 624 may further comprise supports 627. In an embodiment, the supports 627 may be attached to the plate 624, or the supports 627 may be a monolithic part of the plate 624. The supports 627 are configured to be placed between the ferrules 621 on the IHS 615. The supports 627 provide additional volume that may make it easier for the sealant 635 to provide a proper seal of the recess.

As shown by the arrow in Figure 6A, the plate 624 and supports 627 are inserted down towards the IHS 615 to provide a device similar to the optical package 600 shown in Figure 6B. In Figure 6B, the spacings between the ferrules 621 are filled by a combination of the supports 627 and the sealant 635. As such, a smaller volume of sealant 635 is necessary to fill the edge of the recess.

Referring now to Figures 7A and 7B, a series of edge view illustrations depicting a process for sealing a recess in an optical package 700 is shown, in accordance with an additional embodiment. As shown in Figure 7A, a plurality of supports 727 are provided between the ferrules 721 over the IHS 715. That is, instead of connecting the supports 727 to the plate 724 (as shown in Figures 6A and 6B), the supports 727 are attached to the IHS 715. In an embodiment, the supports 727 may be adhered to the IHS 715, or the supports 727 may be a monolithic part of the IHS 715. As indicated by the arrows, the sealant 735 and the plate 724 are then added to provide a structure similar to that shown in Figure 7B. Similar to Figure 6B, the spacings between the ferrules 721 are filled by a combination of the supports 727 and the sealant 735. As such, a smaller volume of sealant 735 is necessary to fill the edge of the recess.

The use of a lid and a sealant such as described above allows for the recess into the package substrate to be fully sealed. The IHS may form one surface of a cavity and the package substrate provides portions of the sidewalls of the cavity. In an embodiment, the lid can provide the surface of the cavity opposite from the IHS, and the sealant can provide the missing sidewall of the cavity. As such, the cavity can be fully sealed to prevent moisture from entering the optical package. In some embodiments, the sealant may also substantially fill the cavity.

Embodiments disclosed herein may also provide for improved manufacturability and reliability. An example of an optical connector 820 is shown in Figure 8A. The optical connector 820 comprises a socket 821 (also sometimes referred to as a plug). A ferrule 824 is inserted into the socket 821, and a holder 825 is spaced away from the ferrule 824.
The holder 825 maintains fibers 822 at the proper alignment for insertion into V-grooves on an optics die. The fibers 822 may be arranged in a 2X12 alignment in the ferrule 824 and a 1X24 alignment in the holder 825. Between the holder 825 and the ferrule 824, the fibers are bent to accommodate the different alignments on each end. However, such an architecture provides several drawbacks. One drawback is that insertion of the fibers 822 into V-grooves requires complex pick-and-place tools. Particularly, the pick-and-place tool requires at least three different tips. A first tip is needed to control the socket 821 and ferrule 824, a second tip is needed to control the holder 825, and a third tip is needed to hold a fiber lid to press the ends of the fibers 822 into the V-grooves. Additionally, the portions of the fibers 822 between the ferrule 824 and the holder 825 are exposed to thermal and mechanical shock. As such, the optical connector 820 is susceptible to environmental damage during and after installation. Accordingly, embodiments disclosed herein provide an enhanced optical connector 820 that mitigates the drawbacks of the optical connector 820 illustrated in Figure 8A. Such an optical connector 820 is illustrated in Figure 8B. Referring now to Figure 8B, a perspective view illustration of an optical connector 820 is shown, in accordance with an embodiment. In an embodiment, the optical connector 820 comprises a socket 821. The socket 821 may be substantially similar to the socket 821 illustrated in Figure 8A. That is, the socket 821 may be configured to receive a ferrule 824. In an embodiment, the ferrule 824 receives the optical fibers 822 and holds them in a 2X12 arrangement. However, it is to be appreciated that different optical fiber 822 arrangements can be used, depending on the number of optical fibers 822 in the connector 820. In an embodiment, the optical connector 820 may also comprise a holder 825 that is similar to the holder 825 in Figure 8A. The holder 825 may secure the optical fibers 822 in an arrangement that is different than the arrangement of the optical fibers in the ferrule 824. In an embodiment, the holder 825 arranges the optical fibers 822 in a single row. For example, in the case of a twenty four optical fiber 822 optical connector 820, the optical fibers 822 may be positioned in a 1X24 arrangement. The optical connector 820 in Figure 8B differs from the optical connector 820 in Figure 8A in that a fiber distribution housing 826 is provided between the ferrule 824 and the holder 825. In an embodiment, the fiber distribution housing 826 may be the same material as one or both of the ferrule 824 and the holder 825. For example, the fiber distribution housing 826, the ferrule 824, and the holder 825 may each comprise glass. In such embodiments, the fiber distribution housing 826, the ferrule 824, and the holder 825 may be a monolithic part. That is, in some embodiments, there may be no discernable seam or other boundary between the fiber distribution housing 826, the ferrule 824, and the holder 825. However, in other embodiments, seams may be present between two or more of the fiber distribution housing 826, the ferrule 824, and the holder 825, even when formed from the same material.
In yet another embodiment, two or more of the fiber distribution housing 826, the ferrule 824, and the holder 825 may be formed with different materials. The fiber distribution housing 826 provides an enclosure around the optical fibers 822 as they are bent to allow conversion from one arrangement at the ferrule 824 to a second arrangement at the holder 825. For example, the optical fibers 822 may be bent so that at the ferrule 824 the optical fibers 822 are arranged in two or more rows (e.g., a 2X12 array) and at the holder 825 the optical fibers 822 are arranged in a single row (e.g., a 1X24 array). In an embodiment, the fiber distribution housing 826 entirely surrounds the optical fibers 822. As such, the optical fibers 822 are less susceptible to thermal or mechanical shock. Additionally, the fiber distribution housing 826 mechanically couples the ferrule 824 to the holder 825. As such, a tip for the pick-and-place tool for mounting the optical connector 820 to the V-grooves on the optics die may be omitted. Referring now to Figure 8C, a perspective view illustration of the assembly of an optical package 800 is shown, in accordance with an embodiment. As shown, the optical package 800 may comprise a package substrate 816 and an integrated heat spreader 815. A recess in the package substrate 816 may result in the exposure of an edge of one or more optics dies 806 that are attached to the package substrate 816. The optics dies 806 may comprise V-grooves for receiving optical fibers 822 of the optical connector 820. As shown, the optical connector 820 may be substantially similar to the optical connector 820 in Figure 8B. For example, the optical connector 820 comprises a socket 821, a ferrule 824, a fiber distribution housing 826, a holder 825, and optical fibers 822. The optical connector 820 may be handled by a pick-and-place tool 840. The pick-and-place tool 840 may comprise a first tip 842A and a second tip 842B. The first tip 842A may hold the socket 821. The fiber distribution housing 826 mechanically couples the holder 825 to the ferrule 824 that is held by the socket 821. As such, only a single first tip 842A is needed to handle these components of the optical connector 820. In contrast, the optical connector 820 in Figure 8A requires an additional tip in order to properly align the holder 825 since it is not mechanically coupled to the ferrule 824. In an embodiment, the second tip 842B may be arranged to handle the bare ends of the optical fibers 822. The second tip 842B is therefore responsible for holding a fiber lid to press the bare ends of the optical fibers 822 into the V-grooves on the optics dies 806. The embodiments described above may include the fluidic dispensing of one or more materials in order to allow for proper coupling of components and/or sealing of cavities. For example, the compute die and the optics dies typically include underfill materials, an epoxy may be needed to secure the bare optical fibers to the V-grooves, and sealant may be dispensed to fully seal the cavity formed by the package substrate recess. It is to be appreciated that the control of the various flows of one or more of such fluids is critical in order to provide high yielding and robust optics packages.
Accordingly, embodiments disclosed herein include various micro channels into surfaces where such fluids are dispensed in order to control the spread of the materials. Referring now to Figure 9A, a cross-sectional illustration of a fluid 902 dispensed on a surface 901 is shown, in accordance with an embodiment. With the inclusion of no micro channels or other features, the fluid 902 is distributed as viscosity and surface tensions dictate. For example, the fluid 902 may have a cross-section that follows a normal distribution. Referring now to Figure 9B, a cross-sectional illustration of a fluid 902 dispensed on a surface 901 with a micro patterned channel 903 is shown, in accordance with an embodiment. As shown, the channel 903 interrupts the natural flow of the fluid and truncates a tail of the fluid. Such a channel 903 can therefore be used to halt the flow of a fluid across a surface. In the illustrated embodiment, a single channel 903 is shown, but it is to be appreciated that multiple channels 903 may be provided next to each other to further enhance control of the fluid across the channels. In contrast, Figure 9C is an illustration of channels 904 that are formed into a surface and coincide with the primary flow direction of a fluid 902 (as indicated by the arrow). As shown, the guiding channels 904 promote flow of the fluid 902 along the path dictated by the channels 904. In the illustrated embodiment, a pair of parallel channels 904 are shown. However, it is to be appreciated that a single guiding channel 904 or a plurality of guiding channels may be provided to modify the flow of the fluid 902. In Figures 9B and 9C, the channels 903 and 904 are shown as being substantially straight lines. However, it is to be appreciated that the channels may not be linear in some embodiments. For example, in Figure 9D an illustration of a circular channel 907 is shown, in accordance with an embodiment. A circular channel 907 may be used to confine a fluid 902 to a given area. In an embodiment, a single circular channel 907 may be sufficient to confine a fluid 902. However, in other embodiments, two or more circular channels (e.g., a first circular channel 907A and a second circular channel 907B) may be used to confine the flow of a fluid 902. Additionally, while shown as being circular channels 907, it is to be appreciated that any shaped closed loop channel may be used in other embodiments. Referring now to Figure 9E, a plan view illustration of a fluid distribution structure is shown, in accordance with an embodiment. As shown, the fluid may be dispensed into a main reservoir 936, and a plurality of branches 937, 938, 939 may intersect the main reservoir 936. The various branches 937, 938, 939 have different widths W1, W2, and W3, respectively. Depending on the width W, different amounts of capillary force will draw the fluid to different distances away from the reservoir 936. For example, the first width W1 is the smallest and results in the greatest distance of fluid transfer, and the third width W3 is the largest and results in the shortest distance of fluid transfer. In Figures 9B-9E, the various channels are referred to as being "micro channels". A micro channel may refer to features (e.g., width and/or depth) of the channel being at the micron scale. For example, the width and/or depth of the micro channels may be approximately 500µm or less, approximately 100µm or less, or approximately 10µm or less, depending on the fluid that is being dispensed.
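As an illustrative first-order estimate, and not a relationship stated in the present disclosure, the width dependence of the capillary driving force may be approximated with the Young-Laplace relation. For a wetting fluid in a slot-like channel of width $W$ between parallel walls, with surface tension $\gamma$ and contact angle $\theta$, the capillary pressure is approximately

$$\Delta P_{\text{cap}} \approx \frac{2\gamma\cos\theta}{W},$$

so a narrower branch (smaller $W$) generates a larger driving pressure, consistent with the first width W1 drawing fluid the greatest distance from the reservoir 936. The slot geometry and the neglect of viscous and gravitational effects are simplifying assumptions made here for illustration.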
Additionally, it is to be appreciated that lengths of the micro channels may be several millimeters or longer. That is, the length of the micro channels may not be considered as being on the micron scale in some embodiments. Referring now to Figure 9F, a perspective view illustration of a V-groove 907 is shown, in accordance with an embodiment. A first end of the V-groove 907 may comprise a spot size converter (SSC) 908. The opposite end of the V-groove 907 may be at the end of the optics die (not shown). In an embodiment, a plurality of parallel channels 909A extend along a length direction of the V-groove 907. In an embodiment, the channels 909A may be provided on one or more surfaces of the V-groove 907. For example, the channels 909A may be provided on sidewall surfaces and/or the bottom surface of the V-groove 907. The channels 909A allow for optical epoxy to evenly distribute under a fiber (not shown) and flow under the SSC 908. In an embodiment, second channels 909B may be provided at the end of the V-groove 907 opposite from the SSC 908. The second channels 909B may be substantially orthogonal to the channels 909A. Such a configuration prevents overflow of epoxy outside of the V-groove 907 and forces more fluid flow towards the SSC 908. Referring now to Figure 10A, a perspective view illustration of an optical package 1000 is shown, in accordance with an embodiment. As shown, the optical package 1000 comprises an IHS 1015 and a package substrate 1016. Portions of the optics dies 1006 are exposed by a recess in the package substrate 1016. The optical connectors 1020 are removed from the recess to more clearly illustrate exposed surfaces of the IHS 1015. However, it is to be appreciated that the optical connectors 1020 are attached to the IHS 1015 by a mechanical epoxy. It is desirable for this epoxy to not overflow outside of the package and to be evenly distributed. As such, the surfaces of the IHS 1015 exposed by the recess in the package substrate 1016 may comprise micro channels to control the flow of the epoxy. Referring now to Figure 10B, an illustration of the IHS 1015 in isolation from the remainder of the optical package 1000 is shown, in accordance with an embodiment. As shown, the IHS 1015 may comprise a first set of parallel channels 1009A. The first set of parallel channels 1009A may be used to evenly distribute the epoxy in the recess region. One or more second channels 1009B may be provided along an edge of the IHS 1015. The second channels 1009B may be substantially perpendicular to the first set of parallel channels 1009A. The second channels 1009B prevent the flow of the epoxy off the edge of the IHS 1015. Referring now to Figure 10C, an illustration of the package substrate 1016 with a compute die 1005 and optics dies 1006 is shown, in accordance with an embodiment. In an embodiment, keep out zones surrounding the optics dies 1006 and the compute die 1005 may be provided. The keep out zones are areas of the package substrate 1016 that should remain free of underfill material. As such, channels 1019 may be provided between the keep out zones. The channels 1019 may be fluidically coupled to reservoirs 1018. As such, excess underfill material flows into the channels 1019 and is transported to the reservoirs 1018 through capillary action. Therefore, precise control of the underfill is provided in order to maintain the proper keep out zones. In the embodiments described above, the optical connector is described as interfacing with the optics die through V-grooves.
However, it is to be appreciated that in other embodiments, the optical connector may be optically coupled to the optics die from above. In such an embodiment, a grating coupler may be provided on the optics die to receive optical signals from a fiber coming down towards the optics die from above. In some instances, the fibers are routed to a ferrule using a fiber shuffler or a loom. However, such architectures occupy a large area on the footprint of the printed circuit board (PCB) and are therefore not desirable. Accordingly, embodiments disclosed herein include an optical coupler that has a reduced footprint. In an embodiment, the optical coupler may be supported by pillars that are on the package substrate or the PCB. The pillars are smaller than previous solutions, and therefore save valuable package or board real estate. Referring now to Figure 11A, a perspective view illustration of a portion of an optics package 1100 is shown, in accordance with an embodiment. In an embodiment, the optics package 1100 comprises a package substrate 1116 and an optics die 1106 over the package substrate 1116. A grating coupler 1171 may be provided on the optics die 1106 to receive optical signals. The optics package 1100 may further comprise an IHS 1115 over the optics die 1106. The IHS 1115 may comprise an opening to allow for optical signals to pass through the IHS 1115. In an embodiment, the optical coupler comprises a fiber array unit (FAU) 1153. The FAU 1153 bends the optical fibers 1161. For example, the bend provided by the FAU 1153 may be approximately 125°. After exiting the FAU 1153, the fibers 1161 pass through a fiber shuffler 1154. The fiber shuffler 1154 redistributes the fibers 1161 to different Z-heights. For example, fiber 1161A is at a first Z-height, fiber 1161B is at a second Z-height that is above the first Z-height, and fiber 1161C is at a third Z-height that is above the second Z-height. After exiting the fiber shuffler 1154, the fibers 1161 enter a ferrule 1155. The ferrule 1155 routes the fibers 1161 so they are arranged into a column. That is, the third fiber 1161C is directly above the second fiber 1161B, and the second fiber 1161B is directly above the first fiber 1161A. Accordingly, the optical coupler may translate a row into a column. In an embodiment, the fiber shuffler 1154 and the ferrule 1155 may be supported from below by pillars 1156. The pillars 1156 may be supported by the PCB (not shown) or by a portion of the package substrate 1116. In the illustrated embodiment, the FAU 1153, the optical fibers 1161, the fiber shuffler 1154, and the ferrule 1155 are shown as discrete components. However, it is to be appreciated that the FAU 1153, the optical fibers 1161, the fiber shuffler 1154, and the ferrule 1155 may be molded together as a single component. The optical coupler described above highlights the coupling of a set of three fibers 1161A-C. However, the optical coupler may provide translation of a plurality of optical fibers 1161. For example, in Figure 11A, a 1X24 row of fibers 1161 is translated into a 3X8 array at the end of the ferrule 1155. In an embodiment, the fibers 1161 may be optically coupled to the grating coupler 1171 by lenses 1152 and 1151. Lens 1152 may be coupled to the FAU 1153, and lens 1151 may be coupled to the optics die 1106. As such, optical signals from the fibers 1161 may be focused onto the grating couplers 1171 of the optics die 1106.
However, it is to be appreciated that one or both of the lenses 1152 and 1151 may be omitted in some embodiments. Referring now to Figure 11B, a perspective view illustration of an FAU 1153 is shown, in accordance with an embodiment. As shown, the FAU 1153 may comprise a housing for guiding the fibers (not shown) towards a grating coupler 1171 on an optics die (not shown). Channels 1172 may be formed in a bottom portion of the FAU 1153 to receive the fibers. The channels 1172 may be completely surrounded by the housing of the FAU 1153. In an embodiment, the channels 1172 may comprise a bend in order to change the path of the fibers. For example, the bend in the channels 1172 may be approximately 125°. In the upper portion of the FAU 1153, V-grooves 1173 may be provided. The V-grooves provide an alignment feature that allows for fibers to be properly inserted into the channels 1172. In the illustrated embodiment, the FAU 1153 is shown with paths for three fibers. However, it is to be appreciated that the components of the FAU 1153 may be repeated any number of times in order to provide an FAU 1153 that accommodates architectures with more fibers. For example, the FAU 1153 may comprise twenty four paths for fibers. Referring now to Figure 11C, a perspective view illustration of a fiber shuffler 1154 is shown, in accordance with an embodiment. In an embodiment, the fiber shuffler 1154 comprises a main body 1174. A plurality of trenches 1175 are provided into the main body 1174. Each of the trenches 1175 may have a V-groove bottom in order to properly align the fibers (not shown). In an embodiment, the trenches 1175 may have two or more different depths. For example, trenches 1175A have a first depth D1, trenches 1175B have a second depth D2, and trenches 1175C have a third depth D3. The difference in the depths allows for the fibers to be aligned at different heights within the system. In an embodiment, a rigid plate with protrusions or the like (not shown) may press the fibers into the V-groove bottoms. In the illustrated embodiment, each set of trenches (e.g., trenches 1175A-1175C) includes three trenches. The sets of trenches 1175 may be repeated any number of times in order to accommodate any number of fibers. For example, eight sets of trenches 1175 may be used to accommodate a system that comprises twenty four fibers. Additionally, while sets with three trenches 1175 are shown, it is to be appreciated that each set may comprise two or more trenches 1175. Referring now to Figure 11D, a perspective view illustration of a ferrule 1155 is shown, in accordance with an embodiment. In an embodiment, the ferrule 1155 comprises a first end 1181 and a second end 1183. A fiber realignment region 1182 is provided between the first end 1181 and the second end 1183. A single fiber 1161 passing through the fiber realignment region 1182 is shown for simplicity. However, it is to be appreciated that all of the fibers 1161 pass through the fiber realignment region 1182 in some embodiments. As shown in Figure 11D, for each set of three fibers 1161, the first ends of the fibers 1161₁ are not aligned. This is because the fiber shuffler 1154 has set different Z-heights for the fibers 1161. Additionally, no lateral displacement has taken place at this point, so the fibers 1161 within a set are not aligned above/below each other. During the transition from the first end 1181 to the second end 1183, the fibers 1161₂ are laterally displaced so that the second ends of the fibers 1161₃ are aligned in a column.
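The conversion between the single-row and multi-row arrangements can be summarized as an index remapping. The following Python sketch assumes the fibers are handled in consecutive groups of three, with the fiber shuffler setting the Z-height within each group and the ferrule removing the lateral offsets; the function name and the grouping order are illustrative assumptions rather than details taken from the figures.

```python
def remap_row_to_array(num_fibers: int = 24, rows: int = 3):
    """Map fiber index i in a 1 x num_fibers row to a (row, column)
    position in a rows x (num_fibers // rows) array.

    The fiber shuffler assigns one of `rows` Z-heights (e.g., depths
    D1-D3) within each consecutive group of fibers; the ferrule then
    laterally displaces the fibers so each group stacks into a column.
    """
    assert num_fibers % rows == 0
    return {i: (i % rows, i // rows) for i in range(num_fibers)}

mapping = remap_row_to_array()
# Fibers 0, 1, and 2 occupy column 0 at three different Z-heights:
assert [mapping[i] for i in range(3)] == [(0, 0), (1, 0), (2, 0)]
```

Under this assumed grouping, a 1X24 row maps onto eight columns of three fibers each.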
As such, the combination of the FAU 1153, the fiber shuffler 1154, and the ferrule 1155 allows for the conversion of a row of fibers into a multi column array. For example, the fibers may be converted from a 1X24 array to a 3X8 array in some embodiments. Referring now to Figure 11E, a perspective view illustration of an optical package 1100 is shown, in accordance with an embodiment. As shown, the optical package 1100 comprises a package substrate 1116 and an IHS 1115. A compute die 1105 and optics dies 1106 are provided between the package substrate 1116 and the IHS 1115. Each of the optics dies 1106 is optically coupled to an optical connector 1120. For example, six optical connectors 1120 are shown in Figure 11E. However, it is to be appreciated that there may be any number of optical connectors 1120 to match the number of optics dies 1106. In an embodiment, each of the optical connectors 1120 may comprise an FAU 1153, a fiber shuffler 1154, and a ferrule 1155. The ferrule 1155 and the fiber shuffler 1154 may be supported by pillars 1156. The pillars 1156 may be attached to a board (not shown) that is below the package substrate 1116. In other embodiments, the package substrate 1116 may extend out and provide the support for the pillars 1156. Referring now to Figure 12, a perspective view illustration of an optical system 1290 is shown, in accordance with an embodiment. In an embodiment, the optical system 1290 comprises a board 1291, such as a PCB. A package substrate 1216 may be attached to the board 1291. An IHS 1215 may be provided over the package substrate 1216. In an embodiment, a compute die (not shown) and a plurality of optics dies (not shown) are provided between the IHS 1215 and the package substrate 1216. Optical connectors 1220 may be optically coupled to the optics dies. In an embodiment, the optical system 1290 may comprise any of the embodiments described above. For example, a lid covering a recess in the package substrate may be provided. A sealant epoxy may also be provided to seal the cavity below the lid in some embodiments. Additionally, the optical connector 1220 may comprise a molded fiber distribution housing between a holder and the ferrule. Embodiments may also include one or more micro channels on various surfaces in order to control the dispensing of various materials. For example, micro channels may be provided on the IHS 1215, the package substrate 1216, or on V-grooves of the optics dies. In the illustrated embodiment, the optical connectors 1220 exit from the side of the optical system. However, it is to be appreciated that optical connectors similar to those described in Figures 11A-11E may replace the illustrated optical connectors 1220. In such an embodiment, holes through the IHS 1215 may be provided to allow access to grating couplers on the optics dies. In an embodiment, such vertical optical connectors may comprise an FAU, a fiber shuffler, and a ferrule. Figure 13 illustrates a computing device 1300 in accordance with one implementation of the invention. The computing device 1300 houses a board 1302. The board 1302 may include a number of components, including but not limited to a processor 1304 and at least one communication chip 1306. The processor 1304 is physically and electrically coupled to the board 1302. In some implementations the at least one communication chip 1306 is also physically and electrically coupled to the board 1302.
In further implementations, the communication chip 1306 is part of the processor 1304. Other components of the computing device 1300 include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). The communication chip 1306 enables wireless communications for the transfer of data to and from the computing device 1300. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1306 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 1300 may include a plurality of communication chips 1306. For instance, a first communication chip 1306 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1306 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 1304 of the computing device 1300 includes an integrated circuit die packaged within the processor 1304. In some implementations of the invention, the integrated circuit die of the processor may be part of an optical package that comprises optical connectors coupled to optics dies, in accordance with embodiments described herein. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 1306 also includes an integrated circuit die packaged within the communication chip 1306. In accordance with another implementation of the invention, the integrated circuit die of the communication chip may be part of an optical package that comprises optical connectors coupled to optics dies, in accordance with embodiments described herein. The above description of illustrated implementations of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. These modifications may be made to the invention in light of the above detailed description.
The terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation. Example 1: an optical package, comprising: a package substrate; a compute die on the package substrate; an optics die on the package substrate; and an integrated heat spreader (IHS) over the compute die and the optics die, wherein channels are disposed on a surface of the IHS facing the package substrate. Example 2: the optical package of Example 1, wherein the channels comprise: a first channel running in a first direction; and a second channel running in a second direction that is substantially orthogonal to the first direction. Example 3: the optical package of Example 1, wherein the channels comprise: a first ring channel; and a second ring channel surrounding the first ring channel. Example 4: the optical package of Example 1, wherein the channels comprise: a first channel extending in a first direction; a second channel intersecting the first channel, wherein the second channel has a first width; and a third channel intersecting the first channel, wherein the third channel has a second width. Example 5: the optical package of Examples 1-4, wherein the package substrate comprises a recess along an edge. Example 6: the optical package of Example 5, wherein the channels are positioned over the recess in the package substrate. Example 7: the optical package of Examples 1-6, further comprising: an optical connector coupled to the optics die. Example 8: the optical package of Example 7, further comprising: a sealant around the optical connector, wherein the sealant at least partially fills the channels. Example 9: the optical package of Examples 1-8, further comprising: a channel in the package substrate. Example 10: the optical package of Example 9, wherein the channel is between the compute die and the optics die. Example 11: a photonics die, comprising: a die substrate; a V-groove in the die substrate; and a channel into a surface of the V-groove. Example 12: the photonics die of Example 11, wherein the V-groove has a length direction, and wherein the channel extends along the length direction. Example 13: the photonics die of Example 11 or Example 12, wherein the V-groove has a first end and a second end at the edge of the die substrate, and wherein a spot size converter is at the first end of the V-groove. Example 14: the photonics die of Example 13, wherein the channel extends between the first end and the second end, and wherein a second channel is provided between the channel and the second end. Example 15: the photonics die of Example 14, wherein the second channel extends in a length direction that is substantially perpendicular to a length direction of the channel. Example 16: the photonics die of Examples 11-15, further comprising: a plurality of channels into surfaces of the V-groove, wherein the plurality of channels run substantially parallel to each other. Example 17: the photonics die of Example 16, wherein the plurality of channels are disposed into a first sidewall of the V-groove and a second sidewall of the V-groove. Example 18: an optical system, comprising: a board; a package substrate on the board; a compute die on the package substrate; an optics die on the package substrate; and an integrated heat spreader (IHS) over the compute die and the optics die, wherein channels are disposed on a surface of the IHS facing the package substrate.
Example 19: the optical system of Example 18, wherein the package substrate comprises a recessed edge, and wherein the channels are disposed over the recess. Example 20: the optical system of Example 18 or Example 19, wherein the optics die comprises a V-groove, and wherein a second channel is disposed into a surface of the V-groove. |
A system and method for efficiently processing access requests for a shared resource are described. Each of many requestors is assigned to a partition of a shared resource. When a controller determines no requestor generates an access request for an unassigned partition, the controller permits simultaneous access to the assigned partitions for active requestors. When the controller determines at least one active requestor generates an access request for an unassigned partition, the controller allows a single active requestor to gain exclusive access to the entire shared resource while stalling access for the other active requestors. The controller alternates exclusive access among the active requestors. In various embodiments, the shared resource is a local data store in a graphics processing unit and each of the multiple requestors is a single instruction multiple data (SIMD) compute unit. |
WHAT IS CLAIMED IS: 1. A computing system comprising: a shared resource comprising a plurality of partitions; a plurality of requestors, each assigned to a different partition of the plurality of partitions of the shared resource; and a controller coupled to the shared resource, wherein in response to receiving a request for access to a given partition from a first requestor of the plurality of requestors, the controller is configured to: provide the first requestor with access to only the given partition, in response to determining the given partition is assigned to the first requestor; and provide the first requestor with access to all partitions of the plurality of partitions, in response to determining the given partition is not assigned to the first requestor. 2. The computing system as recited in claim 1, wherein the controller is further configured to stall access to the shared resource for each of the plurality of requestors other than the first requestor when providing the first requestor with access to all partitions. 3. The computing system as recited in claim 2, wherein the first requestor is the least recently selected active requestor of the plurality of requestors. 4. The computing system as recited in claim 1, wherein the controller is further configured to deselect the first requestor responsive to: determining completion of a given number of access requests for the first requestor; and determining the plurality of requestors have more access requests. 5. The computing system as recited in claim 4, wherein the given number of access requests is a number of access requests serviced within a single clock cycle. 6. The computing system as recited in claim 4, wherein the controller is further configured to: stall access of the shared resource for the first requestor; and mark the first requestor as the most recently selected active requestor of the plurality of requestors. 7. The computing system as recited in claim 6, wherein the controller is further configured to: select a second requestor different from the first requestor of the plurality of requestors; remove the stall for the selected second requestor; and provide the second requestor with access to all partitions of the plurality of partitions. 8. The computing system as recited in claim 1, wherein the shared resource is a local data store in a graphics processing unit and each of the plurality of requestors is a single instruction multiple data (SIMD) compute unit. 9. A method comprising: assigning each of a plurality of requestors to a different partition of a plurality of partitions of a shared resource; in response to receiving a request for access to a given partition from a first requestor of a plurality of requestors: providing the first requestor with access to only the given partition, in response to determining the given partition is assigned to the first requestor; and providing the first requestor with access to all partitions of the plurality of partitions, in response to determining the given partition is not assigned to the first requestor. 10. The method as recited in claim 9, further comprising stalling access to the shared resource for each of the plurality of requestors other than the first requestor when providing the first requestor with access to all partitions. 11. The method as recited in claim 10, wherein the first requestor is the least recently selected active requestor of the plurality of requestors. 12.
The method as recited in claim 9, further comprising deselecting the first requestor responsive to: determining completion of a given number of access requests for the first requestor; and determining the plurality of requestors have more access requests. 13. The method as recited in claim 12, wherein the given number of access requests is a number of access requests serviced within a single clock cycle. 14. The method as recited in claim 12, further comprising: stalling access of the shared resource for the first requestor; and marking the first requestor as the most recently selected active requestor of the plurality of requestors. 15. The method as recited in claim 9, further comprising: selecting a second requestor different from the first requestor of the plurality of requestors; removing the stall for the selected second requestor; and permitting access of any of the plurality of partitions for the second requestor. 16. The method as recited in claim 9, wherein the shared resource is a local data store in a graphics processing unit and each of the plurality of requestors is a single instruction multiple data (SIMD) compute unit. 17. A controller comprising: a first interface coupled to a shared resource comprising a plurality of partitions; a second interface coupled to a plurality of requestors, each assigned to a different partition of the plurality of partitions of the shared resource; and a control unit, wherein in response to receiving a request for access to a given partition from a first requestor of the plurality of requestors, the control unit is configured to: provide the first requestor with access to only the given partition, in response to determining the given partition is assigned to the first requestor; and provide the first requestor with access to all partitions of the plurality of partitions, in response to determining the given partition is not assigned to the first requestor. 18. The controller as recited in claim 17, wherein the control unit is further configured to stall access to the shared resource for each of the plurality of requestors other than the first requestor when providing the first requestor with access to all partitions. 19. The controller as recited in claim 17, wherein the control unit is further configured to deselect the first requestor responsive to: determining completion of a given number of access requests for the first requestor; and determining the plurality of requestors have more access requests. 20. The controller as recited in claim 19, wherein the control unit is further configured to: stall access of the shared resource for the first requestor; and mark the first requestor as the most recently selected active requestor of the plurality of requestors. |
DUAL MODE LOCAL DATA STORE BACKGROUND Description of the Relevant Art [0001] The parallelization of tasks is used to increase the throughput of computer systems. To this end, compilers or the software programmer extract parallelized tasks from program code to execute in parallel on the system hardware. Out-of-order execution, deep pipelines, speculative execution and multi-threaded execution are used to exploit instruction level parallelism, and thus, increase throughput. To further increase parallel execution on the hardware, a parallel architecture processor is included in the system to exploit data level parallelism and offload computationally intensive and repetitive tasks from conventional general-purpose processors. Examples of these tasks include video graphics rendering, cryptography, garbage collection and other vector instruction applications. [0002] Various examples of the above systems exploiting data level parallelism include a single instruction multiple data (SIMD) processor as the parallel architecture processor. A graphics processing unit (GPU) is one example of a SIMD processor. The GPU includes one or more SIMD compute units, each with multiple lanes of processing resources for executing instructions of a respective thread. The instructions are the same in the threads executing across the lanes but with data elements particular to a given lane. An operating system scheduler or a programmer via a software programming platform schedules the threads on the lanes of the SIMD compute units. [0003] Without the use of a local data store, the result data generated by a given lane within the SIMD compute unit is inaccessible to other lanes without costly latencies of storing and retrieving the result data to other forms of data storage. Although the multiple lanes of the SIMD compute unit share the local data store, systems do not provide an architecture that allows the number of lanes to dynamically change, and thus, alter the amount of storage to share within the local data store. Therefore, the systems do not support conflict resolution and full accessibility (addressability) of the local data store. [0004] In view of the above, methods and systems for efficiently processing access requests for a shared resource are desired. BRIEF DESCRIPTION OF THE DRAWINGS [0005] FIG. 1 is a generalized diagram of one embodiment of a computing system supporting access of a shared resource. [0006] FIG. 2 is a generalized diagram of one embodiment of a parallel architecture processor. [0007] FIG. 3 is a generalized diagram of one embodiment of a method for processing access requests targeting a shared resource. [0008] FIG. 4 is a generalized diagram of another embodiment of a method for processing access requests targeting a shared resource. [0009] FIG. 5 is a generalized diagram of one embodiment of a method for selecting sources of access requests for use of a shared resource. [0010] While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail.
It should be understood, however, that drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims. DETAILED DESCRIPTION [0011] In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, one having ordinary skill in the art should recognize that the invention might be practiced without these specific details. In some instances, well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring the present invention. Further, it will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements. [0012] Systems and methods for efficiently processing access requests for a shared resource are contemplated. In various embodiments, each of many requestors is assigned to a partition of a shared resource. In some embodiments, each partition is a separate partition, which is non-overlapping with other partitions of the shared resource. A controller is used to support access to the shared resource. When the controller determines no requestor generates an access request for an unassigned partition, the controller permits simultaneous access to the assigned partitions for active requestors. However, when the controller determines at least one active requestor generates an access request for an unassigned partition, the controller allows a single active requestor to gain access to the entire shared resource while stalling access for the other active requestors. [0013] The controller performs arbitration by selecting an active requestor. In some embodiments, the selection is based on least recently used criteria. The controller stalls access of the shared resource for unselected requestors while permitting access for the selected requestor. In some embodiments, the controller sets a limit on a number of access requests performed for the selected requestor or sets a limit on an amount of time for performing access requests for the selected requestor, such as a number of clock cycles. If the active requestors have more access requests, the controller stalls access of the shared resource for the selected requestor and marks it as the most recently selected active requestor. Afterward, the controller deselects the requestor and again performs arbitration by selecting another active requestor to have exclusive access to the entire shared resource. [0014] In various embodiments, the shared resource is a local data store in a graphics processing unit and each of the multiple requestors is a single instruction multiple data (SIMD) compute unit. In some embodiments, the controller detects access requests to unassigned partitions by detecting accesses to regions of the local data store external to the assigned memory address boundaries for the SIMD compute units. In various embodiments, when a given SIMD compute unit has exclusive access to the entire local data store, it has exclusive access for a single clock cycle before arbitration reoccurs and another SIMD compute unit gains exclusive access. However, another number of clock cycles is possible and contemplated.
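As a concrete illustration of the dual mode described in paragraphs [0012]-[0014], the following Python sketch models the conflict check and the least-recently-selected rotation. The class name, the address-range representation of partitions, and the one-grant-per-cycle behavior are illustrative assumptions for exposition, not details taken from the figures.

```python
class DualModeArbiter:
    """Sketch of a dual mode arbiter for a partitioned shared resource.

    Each requestor is assigned an address range bounds[r] = (base, limit).
    When every pending request falls inside its own requestor's range, all
    requestors are granted their assigned partitions simultaneously.
    Otherwise, exclusive access to the entire resource rotates among the
    active requestors on a least-recently-selected basis.
    """

    def __init__(self, bounds):
        self.bounds = bounds
        self.last_selected = {r: -1 for r in bounds}  # cycle each requestor last won
        self.cycle = 0

    def _in_own_partition(self, requestor, addr):
        base, limit = self.bounds[requestor]
        return base <= addr < limit

    def grant(self, pending):
        """pending maps each active requestor to its target address.
        Returns the set of requestors granted access this cycle."""
        self.cycle += 1
        conflict = any(not self._in_own_partition(r, a) for r, a in pending.items())
        if not conflict:
            return set(pending)  # simultaneous access to assigned partitions
        # Exclusive mode: the least-recently-selected active requestor wins,
        # is marked most recently selected, and all others stall this cycle.
        winner = min(pending, key=lambda r: self.last_selected[r])
        self.last_selected[winner] = self.cycle
        return {winner}
```

Because each winner is stamped as the most recently selected requestor, repeated calls to grant() alternate exclusive access among the active requestors, matching the single-cycle rotation described above.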
Alternatively, in other embodiments, the controller monitors a number of access requests and when the number reaches a limit, arbitration reoccurs. In various embodiments, each SIMD compute unit includes read and write ports to the local data store, which are used to provide access to the local data store for another SIMD compute unit when the other SIMD compute unit has exclusive access to the local data store. [0015] Turning to FIG. 1, a generalized block diagram of one embodiment of a computing system supporting access of a shared resource is shown. In the shown embodiment, the computing system includes requestors 110A-110H accessing the shared resource 140 via the arbitration control unit 120. In some embodiments, the shared resource 140 is a shared memory and the arbitration control unit 120 is a memory controller. In other embodiments, the shared resource 140 is a unit with specific intensive computational functionality or a unit for providing switching access to a network. Other examples of a resource and any associated controller are possible and contemplated. [0016] The requestors 110A-110H include the computational resources 112A-112H. In various embodiments, the computational resources 112A-112H include pipeline registers, data structures for storing intermediate results, circuitry for performing integer arithmetic, floating-point arithmetic, Boolean logic operations, branch condition comparisons and so forth. As shown, the shared resource 140 is partitioned into multiple partitions 142A-142H. In some embodiments, each of the partitions 142A-142H includes a same amount of data storage, a same amount of intensive computational functionality and so forth. In other embodiments, one or more of the partitions 142A-142H includes less or more data storage or intensive computational functionality than other ones of the partitions 142A-142H. [0017] In various embodiments, each of the partitions 142A-142H is a separate partition which does not overlap with any other partition of the partitions 142A-142H. In other embodiments, overlapping is used. In various embodiments, each partition of the partitions 142A-142H is assigned to one of the computational resources 112A-112H. In other embodiments, two or more of the computational resources 112A-112H are assigned to a same one of the partitions 142A-142H. [0018] In some embodiments, the assignments between the computational resources 112A-112H and the partitions 142A-142H in addition to the sizes of the partitions 142A-142H are set by programmable control and status registers (not shown). Firmware, an executing software application or other software is used to update the control and status registers to initially assign and subsequently reassign the computational resources 112A-112H to the partitions 142A-142H and the sizes of the partitions 142A-142H. In other embodiments, control logic implemented by hardware circuitry within the requestors 110A-110H and/or the arbitration control unit 120 performs the initial assignment and sizing and subsequent reassignments and resizing. [0019] As one or more of the computational resources 112A-112H process instructions of one or more applications, one or more of the requestors 110A-110H generate access requests for the shared resource 140. In various embodiments, the generated access requests identify one of the partitions 142A-142H. By identifying one of the partitions 142A-142H, the generated access request targets the identified partition.
The targeted partition is either an assigned partition or an unassigned partition. [0020] If no access request generated by the requestors 110A-110H targets an unassigned one of the partitions 142A-142H, then the access requests are serviced based on the assignments. Each access request is permitted by the arbitration control unit 120 to access its assigned partition. The selection logic implemented by the multiplexer ("mux") gates 130A-130H selects access information 134A-134H based on the grant signal(s) 132A-132H. The grant signal(s) 132A-132H are asserted by the arbitration control unit 120 in a manner to select the assigned one of the requestors 110A-110H based on the earlier set assignments. Therefore, each of the partitions 142A-142H is accessed by its assigned one of the requestors 110A-110H. In various embodiments, two or more of the partitions 142A-142H are accessed simultaneously when there are no conflicts based on the assignments. [0021] If any access request generated by the requestors 110A-110H targets an unassigned one of the partitions 142A-142H, then the requestors 110A-110H gain exclusive access to the partitions 142A-142H. The exclusive access occurs based on arbitration provided by the arbitration control unit 120. For example, in various embodiments, each active requestor of the requestors 110A-110H gains exclusive access for a clock cycle on a least recently selected basis. In other embodiments, a number of clock cycles or a number of access requests is used by the arbitration control unit 120 to determine when to allow another active requestor of the requestors 110A-110H to gain exclusive access to the partitions 142A-142H. [0022] In some embodiments, the computing system includes a hybrid arbitration scheme wherein the arbitration control unit 120 includes a centralized arbiter and one or more of the requestors 110A-110H include distributed arbitration logic. For example, one or more of the requestors 110A-110H includes an arbiter for selecting a given request to send to the arbitration control unit 120 from multiple requests generated by multiple sources within the computational resources 112A-112H. The arbitration control unit 120 selects one or more requests to send to the shared resource 140 from multiple requests received from the requestors 110A-110H. The grant signals 132A-132H are asserted based on the received requests and detecting whether any received request targets an assigned one of the partitions 142A-142H. In addition, in some embodiments, the arbitration control unit 120 adjusts the number of clock cycles or the number of access requests for exclusive access to the shared resource 140 based on an encoded priority along with the least-recently-selected scheme. [0023] Responses 150 for the requests are shown as being sent back to the arbitration control unit 120. In other embodiments, the responses 150 are sent directly to the requestors 110A-110H, such as via a bus. In some embodiments, polling logic within the interfaces of the requestors 110A-110H is used to retrieve associated response data 150 from the bus or the arbitration control unit 120. In various other embodiments, the responses 150 are sent to other computational units (not shown) within the computing system.
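The hybrid scheme of paragraph [0022] can be viewed as two stages of selection, sketched below in Python. The separation into a distributed per-requestor stage and a centralized stage follows the description above, but the function names and the round-robin and priority policies are assumptions made here for illustration.

```python
def local_arbitrate(source_requests, rotate):
    """Distributed stage: a requestor selects a single request from the
    multiple requests generated by sources within its computational
    resources (a simple round-robin pointer is assumed)."""
    if not source_requests:
        return None
    return source_requests[rotate % len(source_requests)]

def central_arbitrate(forwarded, priority):
    """Centralized stage: the arbitration control unit selects among the
    single requests forwarded by the requestors, here using an encoded
    priority as the assumed selection criterion."""
    candidates = [r for r, request in forwarded.items() if request is not None]
    if not candidates:
        return None
    return max(candidates, key=lambda r: priority[r])
```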
[0024] Referring now to FIG. 2, one embodiment of a parallel architecture processor 200 is shown. In various embodiments, the parallel architecture processor 200 is a graphics processing unit (GPU) with compute units 210A-210D accessing the local data store 260 via the arbitration control unit 250. Generally, a GPU includes a separate local data share for each of the compute units 210A-210D for sharing data among the lanes 220A-220M. Here, however, the local data store 260 is shared among the compute units 210A-210D. Therefore, it is possible for one or more of the lanes 220A-220M within the compute unit 210A to share result data with one or more lanes 220A-220M within the compute unit 210D based on an operating mode. [0025] As described earlier, the parallel architecture processor 200 includes special-purpose integrated circuitry optimized for highly parallel data applications such as single instruction multiple data (SIMD) operations. In various embodiments, the parallel architecture processor 200 is a graphics processing unit (GPU) used for video graphics rendering. As shown, each of the lanes 220A-220M within the compute unit 210A comprises registers 222A and an arithmetic logic unit (ALU) 224A. Lanes within other compute units of the compute units 210A-210D also include similar components. In various embodiments, the registers 222A are storage elements used as a register file for storing operands and results. [0026] In various embodiments, the data flow within the ALU 224A is pipelined. The ALU 224A includes pipeline registers, data structures for storing intermediate results and circuitry for performing integer arithmetic, floating-point arithmetic, Boolean logic operations, branch condition comparisons and so forth. These components are not shown for ease of illustration. Each of the computation units within a given row across the lanes 220A-220M is the same computation unit. Each of these computation units operates on a same instruction, but different data associated with a different thread. [0027] Each of the lanes 220A-220M within the compute unit 210A accesses the cache 230 for instructions. In addition, the cache 230 stores operand data to load into the registers 222A. For embodiments performing video graphics rendering, the cache 230 is referred to as a level one (L1) texture cache. Each of the compute units 210A-210D has further access to a shared L2 cache (not shown) which acts as a global data share for the compute units 210A-210D. For example, in various embodiments, each of the compute units 210A-210D includes a cache controller placed logically at the top above the cache 230 to store and retrieve data from the shared L2 cache. [0028] As described earlier, each of the lanes 220A-220M processes data for a separate thread. Each of the compute units 210A-210D processes threads for a given work unit. An operating system (OS) scheduler or a user-level scheduler schedules workloads running on a computer system with the parallel architecture processor 200 using a variety of schemes such as a round-robin scheme, a priority scheme, an availability scheme or a combination. Alternatively, a programmer schedules the workloads in combination with the runtime system. In such a case, the programmer utilizes a software platform to perform the scheduling.
For example, the OpenCL® (Open Computing Language) framework supports programming across heterogeneous computing environments and includes a low-level application programming interface (API) for heterogeneous computing. [0029] The OpenCL framework (generally referred to herein as "OpenCL") includes a C-like language interface used to define execution queues, wherein each queue is associated with an OpenCL device. An OpenCL device may be a general-purpose central processing unit (CPU), a GPU, or other unit with at least one processor core within a heterogeneous multi-core architecture. In the OpenCL framework, a function call is referred to as an OpenCL compute kernel, or simply a "compute kernel". A software programmer schedules the compute kernels in the execution queues. A compute kernel is matched with one or more records of data to produce one or more work units of computation. Each work unit has a unique identifier (ID). Each of the compute units 210A-210D is assigned one of the many work units by the OS or by the software programmer. Each of the lanes 220A-220M within a given one of the compute units 210A-210D is assigned a thread within the assigned work unit. [0030] Each of the lanes 220A-220M accesses the local data store 260. For example, in various embodiments, each of the lanes 220A-220M has allocated space within the local data store 260. Each of the lanes 220A-220M within a given one of the compute units 210A-210D has access to the allocated space of the other lanes within the same given compute unit. For example, lane 220A within the compute unit 210A has access to the allocated space within the local data store 260 assigned to the lane 220M within the compute unit 210A. The lanes 220A-220M within the compute unit 210A have access to each other's allocated space due to processing a same work unit. [0031] The requests generated by each of the lanes 220A-220M seek to access a block of data. In various embodiments, the block of data, or data block, is a set of bytes stored in contiguous memory locations. The number of bytes in a data block is varied according to design choice, and may be of any size. The scheduler 240 is used to schedule the access requests generated by the lanes 220A-220M within the compute unit 210A. The generated access requests are sent from the scheduler 240 to the local data store 260 via the arbitration control unit 250. [0032] As shown, the local data store 260 is divided into multiple partitions 262A-262D. In various embodiments, each of the partitions 262A-262D is a separate partition which does not overlap with any other partition of the partitions 262A-262D. In some embodiments, each of the partitions 262A-262D includes a same amount of data storage. In other embodiments, one or more of the partitions 262A-262D includes less or more data storage than other ones of the partitions 262A-262D. [0033] In various embodiments, the assignments between the compute units 210A-210D and the partitions 262A-262D in addition to the sizes of the partitions 262A-262D are set by an operating system, a software programmer, a dedicated control unit or other. For example, in some embodiments, programmable control and status registers (not shown) store particular values to set the assignments. Firmware, an executing software application or other software is used to update the control and status registers to initially assign and subsequently reassign the compute units 210A-210D and the partitions 262A-262D in addition to defining the sizes of the partitions 262A-262D.
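One hypothetical encoding of such control and status registers is a base/size pair per compute unit, sketched below; the field layout, names, and address values are illustrative assumptions only and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PartitionCSR:
    """Hypothetical control/status register pair describing one partition."""
    base: int  # first address of the partition within the local data store
    size: int  # partition size in bytes

# One register pair per compute unit; firmware or other software may
# rewrite these values to reassign or resize partitions.
csr_table = {
    "210A": PartitionCSR(base=0x0000, size=0x4000),
    "210B": PartitionCSR(base=0x4000, size=0x4000),
    "210C": PartitionCSR(base=0x8000, size=0x4000),
    "210D": PartitionCSR(base=0xC000, size=0x4000),
}

def owns(unit: str, addr: int) -> bool:
    """True when addr falls inside the partition assigned to `unit`;
    a request failing this check targets an unassigned partition."""
    r = csr_table[unit]
    return r.base <= addr < r.base + r.size
```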
[0034] In various embodiments, the arbitration control unit 250 is used to provide shared memory capability across the compute units 210A-210D. For example, in various embodiments, threads of a same work unit are scheduled across two or more of the compute units 210A-210D, rather than scheduled to a single one of the compute units 210A-210D. For efficient processing, communication between the lanes should expand beyond a single one of the compute units 210A-210D.[0035] In one example, the compute unit 210A is assigned to the partition 262A and the compute unit 210D is assigned to the partition 262D. However, later, threads of a same work unit are scheduled across the two compute units 210A and 210D. For efficient execution, it is now possible that one or more of the lanes 220A-220M in the compute unit 210A needs to communicate with one or more lanes 220A-220M in the compute unit 210D. The arbitration control unit 250 identifies this situation and provides exclusive access to the local data share 260 for a selected one of the compute units 210A and 210D.[0036] The compute unit selected by the arbitration control unit 250 has exclusive access for a given duration of time. In various embodiments, the given duration is a single clock cycle. Therefore, in the above example, the compute units 210A and 210D alternate having exclusive access to the local data share 260 each clock cycle. In various embodiments, the given duration is programmable. In other embodiments, the duration is measured based on another number of clock cycles. In yet other embodiments, the given duration is measured based on a number of access requests, an encoded priority, an identifier (ID) of the requestor, an ID of a destination for the response data, a least-recently-selected scheme, and so forth.
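The duration-based switching described above can be illustrated with a small sketch (not the patented circuit; the requestor names and the simple round-robin rotation shown are illustrative assumptions):

```python
# A minimal sketch of duration-based exclusive access: the arbiter rotates
# exclusive ownership of the shared local data share among requestors after
# each programmable window of clock cycles.

from itertools import cycle

def exclusive_grants(requestors, duration_cycles, total_cycles):
    owner = cycle(requestors)
    current = next(owner)
    grants = []
    for clk in range(total_cycles):
        if clk > 0 and clk % duration_cycles == 0:
            current = next(owner)  # switch exclusive owner at window boundary
        grants.append((clk, current))
    return grants

# With a one-cycle duration, compute units 210A and 210D alternate each cycle:
print(exclusive_grants(["210A", "210D"], duration_cycles=1, total_cycles=4))
# [(0, '210A'), (1, '210D'), (2, '210A'), (3, '210D')]
```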
Further details of the logic used by the arbitration control unit 250 are described next.[0037] Referring now to FIG. 3, one embodiment of a method 300 for processing access requests targeting a shared resource is shown. For purposes of discussion, the steps in this embodiment (as well as in Figures 4-5) are shown in sequential order. However, in other embodiments some steps occur in a different order than shown, some steps are performed concurrently, some steps are combined with other steps, and some steps are absent.[0038] In various embodiments, multiple requestors are set up in a computing system to access a shared resource. The shared resource is divided into multiple partitions. Part of the setup process is assigning each of the multiple requestors to one of the multiple partitions (block 302). The assignments are based on logic implemented in hardware, software or a combination. An operating system, a software programmer, a dedicated control unit or other performs the assignments. In addition, in some embodiments, the sizes of the partitions are also set during the setup process. When the last requestor is reached for assignment ("yes" branch of the conditional block 304), instructions of one or more software applications are processed by the computing system (block 306).[0039] During the processing of the one or more software applications, the active requestors generate access requests for the shared resource (block 308). In various embodiments, the generated access requests identify one of the multiple partitions. In some embodiments, the identification includes an identifier (ID) of a partition. In other embodiments, an indication, such as a field or encoding, indirectly identifies the partition and control logic determines the identification based on the indication. In yet other embodiments, an address indirectly identifies the partition by indicating a data storage location within a given address range associated with the partition. By identifying one of the multiple partitions, the generated access request targets the identified partition. The targeted partition is either an assigned partition or an unassigned partition.[0040] If no generated access requests target an unassigned partition ("no" branch of the conditional block 310), then the access requests are serviced based on the assignments (block 312). Each access request is permitted to access its assigned partition. However, if any generated access request targets an unassigned partition ("yes" branch of the conditional block 310), then the access requests are serviced based on the arbitration allowing exclusive access to the entire shared resource (block 314). For example, each one of the active requestors gains exclusive access to the entire shared resource for a given duration. In various embodiments, the given duration is measured based on a number of clock cycles. In other embodiments, the given duration is measured based on a number of access requests. In various embodiments, the given duration is programmable. In some embodiments, the given duration is further based on an encoded priority, an identifier (ID) of the requestor, an ID of a destination for the response data, a least-recently-selected scheme, and so forth.
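A minimal sketch of the decision in blocks 310-314 follows; the request and assignment representations are hypothetical stand-ins for the hardware signals:

```python
# A minimal sketch of the two servicing modes of blocks 310-314.

def service(requests, assignments):
    """requests: list of (requestor, target_partition) pairs.
    assignments: dict mapping each requestor to its assigned partition."""
    hits_unassigned = any(assignments[r] != p for r, p in requests)
    if not hits_unassigned:
        # First mode: every request accesses only its assigned partition,
        # so non-overlapping partitions are serviced in parallel.
        return {r: assignments[r] for r, _ in requests}
    # Second mode: each active requestor in turn is granted exclusive
    # access to the entire shared resource (every partition).
    return {r: "ALL_PARTITIONS" for r, _ in requests}

print(service([("CU0", "P0"), ("CU1", "P1")], {"CU0": "P0", "CU1": "P1"}))
print(service([("CU0", "P1"), ("CU1", "P1")], {"CU0": "P0", "CU1": "P1"}))
```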
[0041] Turning now to FIG. 4, another embodiment of a method 400 for processing access requests targeting a shared resource is shown. Multiple requestors have been assigned to partitions within a shared resource. As described earlier, the requestors generate access requests identifying one of the partitions. If no generated access requests target an unassigned partition ("no" branch of the conditional block 402), then the access requests are serviced based on accessing the assigned partitions (block 404). Each access request is permitted to access its assigned partition. In various embodiments, unshared partitions are accessed simultaneously. The processing of the instructions continues (block 406) and the requestors generate access requests.[0042] If any generated access request targets an unassigned partition ("yes" branch of the conditional block 402), then one requestor is selected for non-conflicting access of the shared resource (block 408). In various embodiments, the selected requestor is the requestor that generated the access request targeting the unassigned partition. In other embodiments, the selected requestor is the requestor which is currently the least-recently-selected requestor. In some embodiments, being the least-recently-selected requestor is based on time since the last access request was serviced for the requestor. In other embodiments, being the least-recently-selected requestor is based on a number of access requests serviced for the requestor. In some embodiments, selection is further based on an encoded priority, an ID of the requestor, identification of the operations being processed by computational units associated with the requestor and so forth.[0043] The unselected requestors are stalled (block 410). In some embodiments, stalling includes preventing the unselected requestors from sending access requests for the shared resource. In other embodiments, stalling includes not selecting access requests stored in a request queue from the unselected requestors. In some embodiments, an ID of the unselected requestors is used to identify the access requests to ignore in the queue.[0044] Any partition in the shared resource is available for access by the access requests generated by the selected requestor (block 412). Access requests generated by the selected requestor have exclusive access to the shared resource for a given duration of time. As described earlier, in some embodiments, the given duration is measured based on a number of clock cycles. In other embodiments, the given duration is measured based on a number of access requests. In various embodiments, the given duration is programmable. In some embodiments, the given duration is further based on an encoded priority, an identifier (ID) of the requestor, an ID of a destination for the response data, a least-recently-selected scheme, and so forth.[0045] When the given duration is reached, an indication is set to switch selection of requestors using arbitration. The currently selected requestor is deselected and stalled. Another active requestor is selected based on the arbitration criteria used earlier, such as the criteria described for the selecting step in block 408. The selection based on arbitration logic continues until the current workload is completed or a reset is forced. The processing of the instructions continues (block 406) and the requestors generate access requests. As can be seen from the above, the access requests are processed in one of two modes. If no generated access requests target an unassigned partition, then processing continues in a first mode where the assigned partitions are available for servicing the access requests. However, if any generated access request targets an unassigned partition, then processing switches to a second mode where the requestors are selected for exclusive access to the entire shared resource.
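The selection and stalling of blocks 408-410 can be sketched as follows; the cycle-stamp bookkeeping here is a hypothetical stand-in for the hardware's least-recently-selected tracking:

```python
# A minimal sketch of blocks 408-410: pick the least-recently-selected
# requestor and stall the rest.

def select_and_stall(active, last_serviced_cycle):
    """Return the least-recently-selected requestor and the stalled set."""
    chosen = min(active, key=lambda r: last_serviced_cycle.get(r, -1))
    stalled = [r for r in active if r != chosen]  # blocked from issuing requests
    return chosen, stalled

chosen, stalled = select_and_stall(
    ["CU0", "CU1", "CU2"], {"CU0": 40, "CU1": 12, "CU2": 33})
# CU1 was serviced longest ago, so it gains exclusive access; CU0 and CU2 stall.
```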
[0046] Turning now to FIG. 5, a generalized block diagram of one embodiment of a method 500 for selecting sources of access requests for use of a shared resource is shown. Multiple requestors have been assigned to partitions within a shared resource. As described earlier, the requestors generate access requests identifying one of the partitions. It is determined that at least one active requestor requests access to an unassigned partition of the resource (block 502). One of the active requestors is selected as the next requestor to have exclusive access to the entire resource (block 504). As described earlier, many factors are considered for selection such as a least-recently-selected scheme, an encoded priority, a number of pending access requests, a number of access requests already serviced, an indication of the computation being performed by an associated computational unit, an age of current outstanding requests and so forth.[0047] In various embodiments, the selected requestor has exclusive access to each partition of the shared resource for a given duration. As described earlier, the given duration is based on a variety of factors. If the selected requestor did not access the shared resource for the given duration ("no" branch of the conditional block 506), then the selected requestor maintains selection and continues to access the shared resource with exclusive access (block 508). However, if the selected requestor did access the shared resource for the given duration ("yes" branch of the conditional block 506), then the selected requestor is deselected (block 510).[0048] An indication is set indicating the requestor is the most-recently-selected requestor (block 512). If the workload for the requestors is not yet completed ("no" branch of the conditional block 514), then control flow of method 500 returns to block 504 where another requestor is selected for exclusive access to the shared resource. If the workload for the requestors is completed ("yes" branch of the conditional block 514), then selection of the requestors is also completed (block 516). Should another workload be assigned to the requestors, in some embodiments, the mode of operation resets to providing access to only assigned partitions of the shared resource.[0049] It is noted that one or more of the above-described embodiments include software. In such embodiments, the program instructions that implement the methods and/or mechanisms are conveyed or stored on a computer readable medium. Numerous types of media which are configured to store program instructions are available and include hard disks, floppy disks, CD-ROM, DVD, flash memory, Programmable ROMs (PROM), random access memory (RAM), and various other forms of volatile or non-volatile storage. Generally speaking, a computer accessible storage medium includes any storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium includes storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, or DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray. Storage media further includes volatile or non-volatile memory media such as RAM (e.g., synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, Flash memory, non-volatile memory (e.g., Flash memory) accessible via a peripheral interface such as the Universal Serial Bus (USB) interface, etc. Storage media includes microelectromechanical systems (MEMS), as well as storage media accessible via a communication medium such as a network and/or a wireless link.[0050] Additionally, in various embodiments, program instructions include behavioral-level descriptions or register-transfer level (RTL) descriptions of the hardware functionality in a high-level programming language such as C, or a design language (HDL) such as Verilog, VHDL, or database format such as GDS II stream format (GDSII). In some cases the description is read by a synthesis tool, which synthesizes the description to produce a netlist including a list of gates from a synthesis library. The netlist includes a set of gates, which also represent the functionality of the hardware including the system. The netlist is then placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks are then used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the system. 
Alternatively, the instructions on the computer accessible storage medium are the netlist (with or without the synthesis library) or the data set, as desired. Additionally, the instructions are utilized for purposes of emulation by a hardware-based emulator from such vendors as Cadence®, EVE®, and Mentor Graphics®.[0051] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
Some embodiments include apparatuses and methods forming the apparatuses. One of the apparatuses includes a first memory cell including a first transistor including a first channel region and a first charge storage structure, and a second transistor including a second channel region formed over the first charge storage structure; a second memory cell adjacent the first memory cell, the second memory cell including a third transistor including a third channel region and a second charge storage structure, and a fourth transistor including a fourth channel region formed over the second charge storage structure; a first access line adjacent a side of the first memory cell; a second access line adjacent a side of the second memory cell; a first dielectric material adjacent the first channel region; a second dielectric material adjacent the third channel region; and a conductive structure between the first and second dielectric materials and adjacent the first and second dielectric materials. |
What is claimed is:1. An apparatus comprising: a first memory cell including a first transistor and a second transistor, the first transistor including a first channel region and a first charge storage structure separated from the first channel region, and the second transistor including a second channel region formed over the first charge storage structure; a second memory cell adjacent the first memory cell, the second memory cell including a third transistor and a fourth transistor, the third transistor including a third channel region and a second charge storage structure separated from the third channel region, and the fourth transistor including a fourth channel region formed over the second charge storage structure; a first access line adjacent a side of the first memory cell; a second access line adjacent a side of the second memory cell; a first dielectric material adjacent the first channel region; a second dielectric material adjacent the third channel region; and a conductive structure between the first and second dielectric materials and adjacent the first and second dielectric materials.2. The apparatus of claim 1, wherein the first channel region and the second channel region have different conductivity types.3. The apparatus of claim 1, wherein the second channel region includes semiconducting oxide material.4. The apparatus of claim 1, wherein: each of the first and second access lines and the conductive structure has a height in a direction perpendicular to a direction from the first memory cell to the second memory cell; and the height of the conductive structure is the same as the height of each of the
first and second access lines.5. The apparatus of claim 1, wherein: each of the first and second access lines and the conductive structure has a height in a direction perpendicular to a direction from the first memory cell to the second memory cell; and the height of the conductive structure is unequal to the height of each of the first and second access lines.6. The apparatus of claim 1, wherein: each of the first and second access lines and the conductive structure has a thickness in a direction parallel to a direction from the first memory cell to the second memory cell; and the thickness of the conductive structure is the same as the thickness of each of the first and second access lines.7. The apparatus of claim 1, wherein: each of the first and second access lines and the conductive structure has a thickness in a direction parallel to a direction from the first memory cell to the second memory cell; and the thickness of the conductive structure is greater than the thickness of each of the first and second access lines.8. An apparatus comprising: a first trench, a second trench, and a third trench, the second trench being between the first and third trenches; a first memory cell between the first and second trenches, the first memory cell including a first side and a second side opposite from the first side, a first transistor including a first channel region and a first charge storage structure separated from the first channel region, and a second transistor including a second
channel region formed over the first charge storage structure; a second memory cell between the second and third trenches, the second memory cell including a first side and a second side opposite from the first side, a third transistor including a third channel region and a second charge storage structure separated from the third channel region, and a fourth transistor including a fourth channel region formed over the second charge storage structure; a first access line in the second trench and adjacent the second side of the first memory cell; a second access line in the second trench and adjacent the first side of the second memory cell; a first conductive shield structure adjacent the first side of the first memory cell; and a second conductive shield structure adjacent the second side of the second memory cell.9. The apparatus of claim 8, wherein the apparatus comprises a memory device having memory cells and access lines for the memory cells, wherein: the first and second memory cells are included in the memory cells; the first and second access lines are included in the access lines; and each of the first and third trenches is void of an access line among the access lines.10. The apparatus of claim 1, wherein the second channel region includes at least one of zinc tin oxide (ZTO), indium zinc oxide (IZO), zinc oxide (ZnOx), indium gallium zinc oxide (IGZO), indium gallium silicon oxide (IGSO), indium oxide (InOx, In2O3), tin oxide (SnO2), titanium oxide (TiOx), zinc oxide nitride (ZnxOyNz), magnesium zinc oxide (MgxZnyOz), indium zinc oxide (InxZnyOz), indium gallium zinc oxide (InxGayZnzOa), zirconium indium zinc oxide (ZrxInyZnzOa), hafnium indium zinc oxide (HfxInyZnzOa), tin indium zinc oxide (SnxInyZnzOa), aluminum tin indium zinc oxide (AlxSnyInzZnaOd), silicon indium
zinc oxide (SixInyZnzOa), zinc tin oxide (ZnxSnyOz), aluminum zinc tin oxide (AlxZnySnzOa), gallium zinc tin oxide (GaxZnySnzOa), zirconium zinc tin oxide (ZrxZnySnzOa), indium gallium silicon oxide (InGaSiO), and gallium phosphide (GaP).11. The apparatus of claim 8, wherein the first, second, and third trenches have a same width in a direction parallel to a direction from the first memory cell to the second memory cell.12. The apparatus of claim 8, wherein: each of the first, second, and third trenches has a width in a direction parallel to a direction from the first memory cell to the second memory cell; and the width of the second trench is greater than the width of each of the first and third trenches.13. The apparatus of claim 8, wherein: each of the first and second access lines and the conductive shield structure has a thickness in a direction parallel to a direction from the first memory cell to the second memory cell; and the thickness of the conductive shield structure is greater than the thickness of each of the first and second access lines.14. The apparatus of claim 8, wherein: each of the first and second access lines and the conductive shield structure has a height in a direction perpendicular to a direction from the first memory cell to the second memory cell; and the height of the conductive shield structure is greater than the height of each of the first and second access lines.15. A method comprising:
applying a first voltage to a first access line associated with a selected memory cell of a memory device during an operation of the memory device, the selected memory cell including a first transistor including a first channel region and a first charge storage structure separated from the first channel region, and a second transistor including a second channel region formed over the first charge storage structure; applying a second voltage less than the first voltage to a second access line associated with an unselected memory cell of the memory device during the operation, the unselected memory cell including a third transistor including a third channel region and a second charge storage structure separated from the third channel region, and a fourth transistor including a fourth channel region formed over the second charge storage structure; and applying a third voltage, during the operation, to a conductive structure between and adjacent the selected memory cell and the unselected memory cell, wherein the conductive structure is neither an access line for the selected memory cell nor an access line for the unselected memory cell.16. The method of claim 15, wherein the third voltage is a negative voltage.17. The method of claim 15, wherein the third voltage is at least zero volts.18. The method of claim 15, wherein: the second voltage is zero volts; and the third voltage is a negative voltage.19. The method of claim 15, further comprising: applying ground potential to the first and third channel regions during the operation.20. The method of claim 15, wherein the operation is a read operation of the memory device.21. The method of claim 15, wherein the operation is a write operation of the memory device.22. A method comprising: forming a first memory cell, such that the first memory cell includes a first transistor including a first channel region and a first charge storage structure separated from the first channel region, and a second transistor including a second channel region formed over the first charge storage structure; forming a second memory cell, such that the second memory cell includes a third transistor including a third channel region and a second charge storage structure separated from the third channel region, and a fourth transistor including a fourth channel region formed over the second charge storage structure; forming a first access line adjacent a side of the first memory cell; forming a second access line adjacent a side of the second memory cell; forming a first dielectric material adjacent the first channel region; forming a second dielectric material adjacent the second channel region; and forming a conductive structure between the first and second dielectric materials and adjacent the first and second dielectric materials.23. The method of claim 22, wherein the conductive structure is formed after the first and second dielectric materials are formed.24. The method of claim 22, wherein the conductive structure, the first access line, and the second access line are formed after the first and second dielectric materials are formed.25. The method of claim 22, wherein the conductive structure is formed concurrently with the first and second access lines.26. 
A method comprising: forming levels of materials over a substrate, the levels of materials including a dielectric material; forming trenches in the dielectric material by removing part of the levels of materials to provide a first remaining part of the levels of materials, such that each of the trenches includes a length in a first direction; forming materials in the trenches; forming additional trenches across the first remaining part of the levels of materials to form memory cells from a second remaining part of the levels of materials, such that each of the additional trenches includes a length in a second direction, and such that: a first memory cell of the memory cells is adjacent and between a first trench and a second trench of the additional trenches; a second memory cell of the memory cells is adjacent and between the second trench and a third trench of the additional trenches; forming a first access line in the first trench for the first memory cell; forming a second access line in the third trench for the second memory cell; and forming a conductive shield structure in the second trench adjacent the first and second memory cells.27. The method of claim 26, wherein the additional trenches have different widths.28. The method of claim 26, wherein the conductive shield structure and the first and second access lines are formed from a same material.29. The method of claim 26, further comprising: forming a first dielectric material contacting the conductive shield structure, the first channel region, the second channel region, and the first storage structure;
and forming a second dielectric material contacting the conductive shield structure, the third channel region, the fourth channel region, and the second storage structure.30. The method of claim 29, wherein the first dielectric material and the second dielectric material are concurrently formed. |
VERTICAL MEMORY CELL AND CONDUCTIVE SHIELD STRUCTUREPriority Application[0001] This application claims the benefit of priority to U.S. Application Serial Number 17/515,065, filed October 29, 2021, which is incorporated herein by reference in its entirety.Background[0002] Memory devices are widely used in computers and many other electronic items to store information. Memory devices are generally categorized into two types: volatile memory devices and non-volatile memory devices. A memory device usually has numerous memory cells in which to store information. In a volatile memory device, information stored in the memory cells is lost if power supply is disconnected from the memory device. In a non-volatile memory device, information stored in the memory cells is retained even if supply power is disconnected from the memory device.[0003] The description herein involves volatile memory devices. Most conventional volatile memory devices store information in the form of charge in a capacitor structure included in the memory cell. As demand for device storage density increases, many conventional techniques provide ways to shrink the size of the memory cell in order to increase device storage density for a given device area. However, physical limitations and fabrication constraints may pose a challenge to such conventional techniques if the memory cell size is to be shrunk to a certain dimension. Further, increased device storage density for a given area may cause excessive capacitive coupling between elements of adjacent memory cells.Moreover, the structure and operation of some conventional memory cells require a transistor in the memory cell to have a relatively high threshold voltage. Unlike
some conventional memory devices, the memory devices described herein include features that can overcome challenges faced by conventional techniques.Brief Description of the Drawings[0004] FIG. 1 shows a block diagram of an apparatus in the form of a memory device including memory cells, according to some embodiments described herein.[0005] FIG. 2 shows a schematic diagram of a portion of a memory device including a memory array of two-transistor (2T) memory cells, according to some embodiments described herein.[0006] FIG. 3 shows the memory device of FIG. 2, including example voltages used during a read operation of the memory device, according to some embodiments described herein.[0007] FIG. 4 shows the memory device of FIG. 2, including example voltages used during a write operation of the memory device, according to some embodiments described herein.[0008] FIG. 5A, FIG. 5B, and FIG. 6A through FIG. 6D show different views of a structure of the memory device of FIG. 2 including access lines and conductive shield structures, according to some embodiments described herein.[0009] FIG. 6E shows an alternative structure of the memory device of FIG. 6D including separate bottom conductive strips, according to some embodiments described herein.[0010] FIG. 7 shows an alternative structure of the memory device of FIG. 2 through FIG. 6D including access lines and conductive shield structures having different heights, according to some embodiments described herein.[0011] FIG. 8 shows an alternative structure of the memory device of FIG. 2 through FIG. 6D including access lines and conductive shield structures having different thicknesses, according to some embodiments described herein.[0012] FIG. 9 through FIG. 22C show processes of forming a memory device, according to some embodiments described herein.
[0013] FIG. 23A, FIG. 23B, and FIG. 23C show different views of a structure of a memory device including multiple decks of memory cells, according to some embodiments described herein.Detailed Description[0014] The memory device described herein includes volatile memory cells in which each of the memory cells can include two transistors (2T). One of the two transistors has a charge storage structure, which can form a memory element of the memory cell to store information. The memory device described herein can have a structure (e.g., a 4F2 cell footprint) that allows the size (e.g., footprint) of the memory device to be relatively smaller than the size (e.g., footprint) of similar conventional memory devices. The described memory device can include a single access line (e.g., word line) to control two transistors of a corresponding memory cell. This can lead to reduced power dissipation and improved processing. Each of the memory cells of the described memory device can include a cross-point gain cell structure (and cross-point operation), such that a memory cell can be accessed using a single access line (e.g., word line) and a single data line (e.g., bit line) during an operation (e.g., a read or write operation) of the memory device. The described memory device can include a conductive shield structure adjacent a side of the memory cell. The conductive shield structure can suppress or prevent potential leakage of current in the memory cell. This can improve retention of information stored in the memory cell. Other improvements and benefits of the described memory device and its variations are discussed below with reference to FIG. 1 through FIG. 23C.[0015] FIG. 1 shows a block diagram of an apparatus in the form of a memory device 100 including volatile memory cells, according to some embodiments described herein. Memory device 100 includes a memory array 101, which can contain memory cells 102. Memory device 100 can include a volatile memory device such that memory cells 102 can be volatile memory cells. An example of memory device 100 includes a dynamic random-access memory
(DRAM) device. Information stored in memory cells 102 of memory device 100 may be lost (e.g., invalid) if supply power (e.g., supply voltage Vcc) is disconnected from memory device 100. Hereinafter, supply voltage Vcc is referred to as representing some voltage levels; however, these levels are not limited to the supply voltage (e.g., Vcc) of the memory device (e.g., memory device 100). For example, if the memory device (e.g., memory device 100) has an internal voltage generator (not shown in FIG. 1) that generates an internal voltage based on supply voltage Vcc, such an internal voltage may be used instead of supply voltage Vcc.[0016] In a physical structure of memory device 100, each of memory cells 102 can include transistors (e.g., two transistors) formed vertically (e.g., stacked on different layers) in different levels over a substrate (e.g., semiconductor substrate) of memory device 100. Memory device 100 can also include multiple levels (e.g., multiple decks) of memory cells where one level (e.g., one deck) of memory cells can be formed over (e.g., stacked on) another level (e.g., another deck) of additional memory cells. The structure of memory array 101, including memory cells 102, can include the structure of memory arrays and memory cells described below with reference to FIG. 2 through FIG. 23C.[0017] As shown in FIG. 1, memory device 100 can include access lines 104 (e.g., "word lines") and data lines (e.g., bit lines) 105. Memory device 100 can use signals (e.g., word line signals) on access lines 104 to access memory cells 102 and data lines 105 to provide information (e.g., data) to be stored in (e.g., written) or read (e.g., sensed) from memory cells 102.[0018] Memory device 100 can include an address register 106 to receive address information ADDR (e.g., row address signals and column address signals) on lines 107 (e.g., address lines). Memory device 100 can include row access circuitry 108 (e.g., X-decoder) and column access circuitry 109 (e.g., Y-decoder) that can operate to decode address information ADDR from address register 106. Based on decoded address information, memory device 100 can determine which memory cells 102 are to be accessed during a memory operation. Memory device 100 can perform a write operation to store information in memory cells 102 and a
read operation to read (e.g., sense) information (e.g., previously stored information) in memory cells 102. Memory device 100 can also perform an operation (e.g., a refresh operation) to refresh (e.g., to keep valid) the value of information stored in memory cells 102. Each of memory cells 102 can be configured to store information that can represent at most one bit (e.g., a single bit having a binary 0 ("0") or a binary 1 ("1")) or more than one bit (e.g., multiple bits having a combination of at least two binary bits).[0019] Memory device 100 can receive a supply voltage, including supply voltages Vcc and Vss, on lines 130 and 132, respectively. Supply voltage Vss can operate at a ground potential (e.g., having a value of approximately zero volts). Supply voltage Vcc can include an external voltage supplied to memory device 100 from an external power source such as a battery or an alternating current to direct current (AC-DC) converter circuitry.[0020] As shown in FIG. 1, memory device 100 can include a memory control unit 118, which includes circuitry (e.g., hardware components) to control memory operations (e.g., read and write operations) of memory device 100 based on control signals on lines (e.g., control lines) 120. Examples of signals on lines 120 include a row access strobe signal RAS*, a column access strobe signal CAS*, a write-enable signal WE*, a chip select signal CS*, a clock signal CK, and a clock-enable signal CKE. These signals can be part of signals provided to a DRAM device.[0021] As shown in FIG. 1, memory device 100 can include lines (e.g., global data lines) 112 that can carry signals DQ0 through DQN. In a read operation, the value (e.g., "0" or "1") of information (read from memory cells 102) provided to lines 112 (in the form of signals DQ0 through DQN) can be based on the values of the signals on data lines 105. In a write operation, the value (e.g., "0" or "1") of information provided to data lines 105 (to be stored in memory cells 102) can be based on the values of signals DQ0 through DQN on lines 112.[0022] Memory device 100 can include sensing circuitry 103, select circuitry 115, and input/output (I/O) circuitry 116. Column access circuitry 109 can
selectively activate signals on lines (e.g., select lines) 114 based on address signals ADDR. Select circuitry 115 can respond to the signals on lines 114 to select signals on data lines 105. The signals on data lines 105 can represent the values of information to be stored in memory cells 102 (e.g., during a write operation) or the values of information read (e.g., sensed) from memory cells 102 (e.g., during a read operation).[0023] I/O circuitry 116 can operate to provide information read from memory cells 102 to lines 112 (e.g., during a read operation) and to provide information from lines 112 (e.g., provided by an external device) to data lines 105 to be stored in memory cells 102 (e.g., during a write operation). Lines 112 can include nodes within memory device 100 or pins (or solder balls) on a package where memory device 100 can reside. Other devices external to memory device 100 (e.g., a hardware memory controller or a hardware processor) can communicate with memory device 100 through lines 107, 112, and 120.[0024] Memory device 100 may include other components, which are not shown in FIG. 1 so as not to obscure the example embodiments described herein. At least a portion of memory device 100 (e.g., a portion of memory array 101) can include structures and operations similar to or the same as any of the memory devices described below with reference to FIG. 2 through FIG. 23C.[0025] FIG. 2 shows a schematic diagram of a portion of a memory device 200 including a memory array 201, according to some embodiments described herein. Memory device 200 can correspond to memory device 100 of FIG. 1. For example, memory array 201 can form part of memory array 101 of FIG. 1. As shown in FIG. 2, memory device 200 can include memory cells 210 through 217, which are volatile memory cells (e.g., DRAM cells). For simplicity, similar or identical elements among memory cells 210 through 217 are given the same labels. [0026] Each of memory cells 210 through 217 can include two transistors T1 and T2. Thus, each of memory cells 210 through 217 can be called a 2T memory cell (e.g., 2T gain cell). Each of transistors T1 and T2 can include a field-effect transistor (FET). As an example, transistor T1 can be a p-channel FET
(PFET), and transistor T2 can be an n-channel FET (NFET). Part of transistor T1 can include a structure of a p-channel metal-oxide semiconductor (PMOS) transistor. Thus, transistor T1 can include an operation similar to that of a PMOS transistor. Part of transistor T2 can include an n-channel metal-oxide semiconductor (NMOS). Thus, transistor T2 can include an operation similar to that of an NMOS transistor.[0027] Transistor T1 of memory device 200 can include a charge-storage-based structure (e.g., a floating-gate-based structure). As shown in FIG. 2, each of memory cells 210 through 217 can include a charge storage structure 202, which can include the floating gate of transistor T1. Charge storage structure 202 can form the memory element of a respective memory cell among memory cells 210 through 217. Charge storage structure 202 can store charge. The value (e.g., "0" or "1") of information stored in a particular memory cell among memory cells 210 through 217 can be based on the amount of charge in charge storage structure 202 of that particular memory cell. For example, the value of information stored in a particular memory cell among memory cells 210 through 217 can be "0" or "1" (if each memory cell is configured as a single-bit memory cell) or "00", "01", "10", "11" (or other multi-bit values) if each memory cell is configured as a multi-bit memory cell.[0028] As shown in FIG. 2, transistor T2 (e.g., the channel region of transistor T2) of a particular memory cell among memory cells 210 through 217 can be electrically coupled to (e.g., directly coupled to (contact)) charge storage structure 202 of that particular memory cell. Thus, a circuit path (e.g., current path) can be formed directly between transistor T2 of a particular memory cell and charge storage structure 202 of that particular memory cell during an operation (e.g., a write operation) of memory device 200. During a write operation of memory device 200, a circuit path (e.g., current path) can be formed between a respective data line (e.g., data line 271 or 272) and charge storage structure 202 of a particular memory cell through transistor T2 (e.g., through the channel region of transistor T2) of the particular memory cell.
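The write path just described can be illustrated with a minimal behavioral model; the class, signal names, and voltage values below are illustrative assumptions, not the device itself:

```python
# A minimal behavioral model of the write path: asserting the shared access
# line turns on write transistor T2, which couples the data line to charge
# storage structure 202.

class GainCell2T:
    def __init__(self):
        self.stored_charge = 0.0  # charge on storage structure 202

    def write(self, word_line_on: bool, data_line_voltage: float) -> None:
        if word_line_on:
            # Write current path: data line -> channel of T2 -> structure 202.
            self.stored_charge = data_line_voltage

cell = GainCell2T()
cell.write(word_line_on=True, data_line_voltage=1.0)  # store a "1"
```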
[0029] Memory cells 210 through 217 can be arranged in memory cell groups 201₀ and 201₁. FIG. 2 shows two memory cell groups (e.g., 201₀ and 201₁) as an example. However, memory device 200 can include more than two memory cell groups. Memory cell groups 201₀ and 201₁ can include the same number of memory cells. For example, memory cell group 201₀ can include memory cells 210, 212, 214, and 216, and memory cell group 201₁ can include memory cells 211, 213, 215, and 217. FIG. 2 shows four memory cells in each of memory cell groups 201₀ and 201₁ as an example. The number of memory cells in memory cell groups 201₀ and 201₁ can be different from four.[0030] Memory device 200 can perform a write operation to store information in memory cells 210 through 217 and a read operation to read (e.g., sense) information from memory cells 210 through 217. Memory device 200 can be configured to operate as a DRAM device. However, unlike some conventional DRAM devices that store information in a structure such as a container for a capacitor, memory device 200 can store information in the form of charge in charge storage structure 202 (which can be a floating gate structure). As mentioned above, charge storage structure 202 can be the floating gate of transistor T1. During an operation (e.g., a read or write operation) of memory device 200, an access line (e.g., a single access line) and a data line (e.g., a single data line) can be used to access a selected memory cell (e.g., target memory cell).[0031] As shown in FIG. 2, memory device 200 can include access lines (e.g., word lines) 241, 242, 243, and 244 that can carry respective signals (e.g., word line signals) WL1, WL2, WL3, and WLn. Access lines 241, 242, 243, and 244 can be used to access both memory cell groups 201₀ and 201₁. In the physical structure of memory device 200, each of access lines 241, 242, 243, and 244 can be structured as (can be formed from) a conductive line (e.g., a single conductive line). [0032] Access lines 241, 242, 243, and 244 form control gates for respective memory cells (e.g., memory cells 210 through 217 in FIG. 2) of memory device 200 to control access to the memory cells during an operation (e.g., read or write operation) of memory device 200.
[0033] Memory device 200 can include conductive shield structures 261 and 262, which are symbolically shown in FIG. 2 as lines (conductive lines). In the physical structure of memory device 200, each of conductive shield structures 261 and 262 can be structured as conductive lines (e.g., conductive regions) that can have respective lengths parallel to the lengths of access lines 241, 242, 243, and 244.[0034] Conductive shield structures 261 and 262 are not access lines (e.g., not word lines) of memory device 200. The operations and functions of conductive shield structures 261 and 262 are unlike those of access lines 241, 242, 243, and 244. In a read or write operation, memory device 200 uses access lines 241, 242, 243, and 244 as selected and unselected access lines to control (e.g., turn on or turn off) transistors T1 and T2 of selected memory cells and unselected memory cells. However, in a read or write operation of memory device 200, each of conductive shield structures 261 and 262 is neither an access line (e.g., selected access line) for a selected memory cell (or selected memory cells) nor an access line (e.g., unselected access line) for unselected memory cells of memory device 200. The conductive shield structures (e.g., conductive shield structures 261 and 262) of memory device 200 allow relaxing of the threshold voltage of transistor T2, improve retention of the memory cells, and provide other improvements and benefits described below.[0035] As shown in FIG. 2, conductive shield structures 261 and 262 can be applied with a signal SHIELD. Signal SHIELD can be provided with a voltage during read and write operations of memory device 200. The voltage applied to signal SHIELD during a read operation can be the same as (or can be different from) the voltage applied to signal SHIELD during a write operation. Signal SHIELD can also be provided with a voltage during a non-read operation (when a read operation is not performed) and during a non-write operation (when a write operation is not performed). Such non-read and non-write operations can occur in (e.g., can include) an idle mode, a standby mode, or other inactive modes of memory device 200. Conductive shield structures 261 and 262 can be biased at a
constant bias (e.g., a constant voltage can be applied to signal SHIELD during read and write operations and during non-read and non-write operations (e.g., in inactive modes)).[0036] In FIG. 2, access lines 241, 242, 243, and 244 can be selectively activated (e.g., activated one at a time) during an operation (e.g., read or write operation) of memory device 200 to access a selected memory cell (or selected memory cells) among memory cells 210 through 217. A selected memory cell can be referred to as a target memory cell. In a read operation, information can be read from a selected memory cell (or selected memory cells). In a write operation, information can be stored in a selected memory cell (or selected memory cells).[0037] In memory device 200, a single access line (e.g., a single word line) can be used to control (e.g., turn on or turn off) transistors T1 and T2 of a respective memory cell during either a read or write operation of memory device 200. Some conventional memory devices may use multiple (e.g., two separate) access lines to control access to a respective memory cell during read and write operations. In comparison with such conventional memory devices (that use multiple access lines for the same memory cell), memory device 200 uses a single access line (e.g., shared access line) to control both transistors T1 and T2 of a respective memory cell to access the respective memory cell. This technique can save space and simplify operation of memory device 200. Further, some conventional memory devices may use multiple data lines to access a selected memory cell (e.g., during a read operation) to read information from the selected memory cell. In memory device 200, a single data line (e.g., data line 271 or 272) can be used to access a selected memory cell (e.g., during a read operation) to read information from the selected memory cell. This may also simplify the structure, operation, or both of memory device 200 in comparison with conventional memory devices that use multiple data lines to access a selected memory cell.
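Cross-point access can be sketched briefly; the array dimensions and indexing below are hypothetical:

```python
# A minimal sketch of cross-point selection: asserting a single word line and
# a single bit line uniquely addresses one cell.

def selected_cells(num_word_lines, num_bit_lines, row, col):
    wl_active = [i == row for i in range(num_word_lines)]  # one selected WL
    bl_active = [j == col for j in range(num_bit_lines)]   # one selected BL
    return [(i, j)
            for i in range(num_word_lines)
            for j in range(num_bit_lines)
            if wl_active[i] and bl_active[j]]

# Asserting the second access line (row 1) and the first data line (column 0)
# selects exactly one cell:
assert selected_cells(4, 2, row=1, col=0) == [(1, 0)]
```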
[0038] In memory device 200, the gate (not labeled in FIG. 2) of each of transistors T1 and T2 can be part of a respective access line (e.g., a respective word line). As shown in FIG. 2, the gate of each of transistors T1 and T2 of memory cell 210 can be part of access line 241. The gate of each of transistors T1 and T2 of memory cell 211 can be part of access line 241. For example, in the physical structure of memory device 200, four different portions of a conductive material (e.g., four different portions of a continuous piece of metal or polysilicon) that forms access line 241 can form the gates (e.g., four gates) of transistors T1 and T2 of memory cell 210 and the gates of transistors T1 and T2 of memory cell 211, respectively.[0039] The gate of each of transistors T1 and T2 of memory cell 212 can be part of access line 242. The gate of each of transistors T1 and T2 of memory cell 213 can be part of access line 242. For example, in the structure of memory device 200, four different portions of a conductive material (e.g., four different portions of a continuous piece of metal or polysilicon) that forms access line 242 can form the gates (e.g., four gates) of transistors T1 and T2 of memory cell 212 and the gates of transistors T1 and T2 of memory cell 213, respectively.[0040] The gate of each of transistors T1 and T2 of memory cell 214 can be part of access line 243. The gate of each of transistors T1 and T2 of memory cell 215 can be part of access line 243. For example, in the structure of memory device 200, four different portions of a conductive material (e.g., four different portions of a continuous piece of metal or polysilicon) that forms access line 243 can form the gates (e.g., four gates) of transistors T1 and T2 of memory cell 214 and the gates of transistors T1 and T2 of memory cell 215, respectively.[0041] The gate of each of transistors T1 and T2 of memory cell 216 can be part of access line 244. The gate of each of transistors T1 and T2 of memory cell 217 can be part of access line 244. For example, in the structure of memory device 200, four different portions of a conductive material (e.g., four different portions of a continuous piece of metal or polysilicon) that forms access line 244 can form the gates (e.g., four gates) of transistors T1 and T2 of memory cell 216 and the gates of transistors T1 and T2 of memory cell 217, respectively.
[0042] In this description, a material can include a single material or a combination of multiple materials. A conductive material can include a single conductive material or a combination of multiple conductive materials.[0043] Memory device 200 can include data lines (e.g., bit lines) 271 and 272 that can carry respective signals (e.g., bit line signals) BL1 and BL2. During a read operation, memory device 200 can use data line 271 to obtain information read (e.g., sensed) from a selected memory cell of memory cell group 201₀, and data line 272 to read information from a selected memory cell of memory cell group 201₁. During a write operation, memory device 200 can use data line 271 to provide information to be stored in a selected memory cell of memory cell group 201₀, and data line 272 to provide information to be stored in a selected memory cell of memory cell group 201₁.[0044] Memory device 200 can include a ground connection (e.g., ground plate) 297 coupled to each of memory cells 210 through 217. Ground connection 297 can be structured from a conductive plate (e.g., a layer of conductive material) that can be coupled to a ground terminal of memory device 200.[0045] As an example (e.g., like FIG. 6D), ground connection 297 can be part of a common conductive structure (e.g., a common conductive plate) that can be formed on a level of memory device 200 that is under the memory cells (e.g., memory cells 210 through 217) of memory device 200. In this example, the elements (e.g., part of transistors T1 and T2 or the entire transistors T1 and T2) of each of the memory cells (e.g., memory cells 210 through 217) of memory device 200 can be formed (e.g., formed vertically) over the common conductive structure (e.g., a common conductive plate) and electrically coupled to the common conductive structure.[0046] In another example (e.g., like FIG. 6E), ground connection 297 can be part of separate conductive structures (e.g., separate conductive strips) that can be formed on a level of memory device 200 that is under the memory cells (e.g., memory cells 210 through 217) of memory device 200. In this example, the elements (e.g., part of transistors T1 and T2) of each of the memory cells (e.g.,
memory cells 210 through 217) of memory device 200 can be formed over (e.g., formed vertically) respective conductive structures (e.g., respective conductive strips) among the separate conductive structures (e.g., separate conductive strips) and electrically coupled to the respective conductive structures.[0047] As shown in FIG. 2, transistor T1 (e.g., the channel region of transistor T1) of a particular memory cell among memory cells 210 through 217 can be electrically coupled to (e.g., directly coupled to) ground connection 297 and electrically coupled to (e.g., directly coupled to) a respective data line (e.g., data line 271 or 272). Thus, a circuit path (e.g., current path) can be formed between a respective data line (e.g., data line 271 or 272) and ground connection 297 through transistor T1 of a selected memory cell during an operation (e.g., a read operation) performed on the selected memory cell.[0048] Memory device 200 can include read paths (e.g., circuit paths). Information read from a selected memory cell during a read operation can be obtained through a read path coupled to the selected memory cell. In memory cell group 201₀, a read path of a particular memory cell (e.g., memory cell 210, 212, 214, or 216) can include a current path (e.g., read current path) through a channel region of transistor T1 of that particular memory cell, data line 271, and ground connection 297. In memory cell group 201₁, a read path of a particular memory cell (e.g., memory cell 211, 213, 215, or 217) can include a current path (e.g., read current path) through a channel region of transistor T1 of that particular memory cell, data line 272, and ground connection 297. In the example where transistor T1 is a PFET (e.g., a PMOS), the current in the read path (e.g., during a read operation) can include a hole conduction (e.g., hole conduction in the direction from data line 271 to ground connection 297 through the channel region (e.g., p-channel region) of transistor T1). Since transistor T1 can be used in a read path to read information from the respective memory cell during a read operation, transistor T1 can be called a read transistor and the channel region of transistor T1 can be called a read channel region.
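As a rough behavioral model of this read path (all names and numeric values below are illustrative assumptions, not values from this description):

```python
# A minimal model of the read path: the current sensed on the data line
# through read transistor T1 depends on the charge stored on structure 202,
# which shifts T1's threshold voltage Vt1.

def read_current(stored_charge: float, read_bias: float = 0.5) -> float:
    vt1 = -0.3 + 0.4 * stored_charge      # hypothetical Vt1 shift with state
    overdrive = read_bias - vt1           # crude first-order current model
    return max(0.0, overdrive)

def sense(stored_charge: float) -> str:
    # In this toy model, a larger read current indicates state "0" and a
    # smaller one indicates state "1"; the threshold 0.6 is arbitrary.
    return "0" if read_current(stored_charge) > 0.6 else "1"

assert sense(0.0) == "0" and sense(1.0) == "1"
```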
[0049] Memory device 200 can include write paths (e.g., circuit paths). Information to be stored in a selected memory cell during a write operation can be provided to the selected memory cell through a write path coupled to the selected memory cell. In memory cell group 201₀, a write path of a particular memory cell can include transistor T2 (e.g., can include a write current path through a channel region of transistor T2) of that particular memory cell and data line 271. In memory cell group 201₁, a write path of a particular memory cell (e.g., memory cell 211, 213, 215, or 217) can include transistor T2 (e.g., can include a write current path through a channel region of transistor T2) of that particular memory cell and data line 272. In the example where transistor T2 is an NFET (e.g., NMOS), the current in a write path (e.g., during a write operation) can include an electron conduction (e.g., electron conduction in the direction from data line 271 to charge storage structure 202) through the channel region (e.g., n-channel region) of transistor T2. Since transistor T2 can be used in a write path to store information in a respective memory cell during a write operation, transistor T2 can be called a write transistor and the channel region of transistor T2 can be called a write channel region.[0050] Each of transistors T1 and T2 can have a threshold voltage (Vt). Transistor T1 has a threshold voltage Vt1. Transistor T2 has a threshold voltage Vt2. The values of threshold voltages Vt1 and Vt2 can be different (unequal values). For example, the value of threshold voltage Vt2 can be greater than the value of threshold voltage Vt1. The difference in values of threshold voltages Vt1 and Vt2 allows reading (e.g., sensing) of information stored in charge storage structure 202 in transistor T1 on the read path during a read operation without affecting (e.g., without turning on) transistor T2 on the write path (e.g., path through transistor T2). This can prevent leaking of charge (e.g., during a read operation) from charge storage structure 202 through transistor T2 of the write path.[0051] In a structure of memory device 200, transistors T1 and T2 can be formed (e.g., engineered) such that threshold voltage Vt1 of transistor T1 can be less than zero volts (e.g., Vt1 < 0V) regardless of the value (e.g., "0" or "1") of information stored in charge storage structure 202 of transistor T1, and Vt1 < Vt2.
Charge storage structure 202 can be in state “0” when information having a value of “0” is stored in charge storage structure 202. Charge storage structure 202 can be in state “1” when information having a value of “1” is stored in charge storage structure 202. Thus, in this structure, the relationship between the values of threshold voltages Vt1 and Vt2 can be expressed as follows: Vt1 for state “0” < Vt1 for state “1” < 0V, and Vt2 = 0V (or alternatively Vt2 > 0V).[0052] In an alternative structure of memory device 200, transistors T1 and T2 can be formed (e.g., engineered) such that Vt1 for state “0” < Vt1 for state “1,” where Vt1 for state “0” < 0V (or alternatively Vt1 for state “0” = 0V), Vt1 for state “1” > 0V, and Vt1 < Vt2.[0053] In another alternative structure, transistors T1 and T2 can be formed (e.g., engineered) such that Vt1 for state “0” < Vt1 for state “1,” where Vt1 for state “0” = 0V (or alternatively Vt1 for state “0” > 0V), and Vt1 < Vt2.[0054] During a read operation of memory device 200, only one memory cell of the same memory cell group can be selected at a time to read information from the selected memory cell. For example, memory cells 210, 212, 214, and 216 of memory cell group 201₀ can be selected one at a time during a read operation to read information from the selected memory cell (e.g., one of memory cells 210, 212, 214, and 216 in this example). In another example, memory cells 211, 213, 215, and 217 of memory cell group 201₁ can be selected one at a time during a read operation to read information from the selected memory cell (e.g., one of memory cells 211, 213, 215, and 217 in this example).[0055] During a read operation, memory cells of different memory cell groups (e.g., memory cell groups 201₀ and 201₁) that share the same access line (e.g., access line 241, 242, 243, or 244) can be concurrently selected (or alternatively can be sequentially selected). For example, memory cells 210 and 211 can be concurrently selected during a read operation to read (e.g., concurrently read) information from memory cells 210 and 211. Memory cells 212 and 213 can be concurrently selected during a read operation to read (e.g., concurrently read) information from memory cells 212 and 213. Memory cells 214 and 215 can be
concurrently selected during a read operation to read (e.g., concurrently read) information from memory cells 214 and 215. Memory cells 216 and 217 can be concurrently selected during a read operation to read (e.g., concurrently read) information from memory cells 216 and 217.[0056] The value of information read from the selected memory cell of memory cell group 201₀ during a read operation can be determined based on the value of a current detected (e.g., sensed) from a read path (described above) that includes data line 271, transistor T1 of the selected memory cell (e.g., memory cell 210, 212, 214, or 216), and ground connection 297. The value of information read from the selected memory cell of memory cell group 201₁ during a read operation can be determined based on the value of a current detected (e.g., sensed) from a read path that includes data line 272, transistor T1 of the selected memory cell (e.g., memory cell 211, 213, 215, or 217), and ground connection 297.[0057] Memory device 200 can include detection circuitry (not shown) that can operate during a read operation to detect (e.g., sense) a current (e.g., current I1, not shown) on a read path that includes data line 271 and detect a current (e.g., current I2, not shown) on a read path that includes data line 272. The value of the detected current can be based on the value of information stored in the selected memory cell. For example, depending on the value of information stored in the selected memory cell of memory cell group 201₀, the value of the detected current (e.g., the value of current I1) on data line 271 can be zero or greater than zero. Similarly, depending on the value of information stored in the selected memory cell of memory cell group 201₁, the value of the detected current (e.g., the value of current I2) on data line 272 can be zero or greater than zero. Memory device 200 can include circuitry (not shown) to translate the value of a detected current into the value (e.g., “0,” “1,” or a combination of multi-bit values) of information stored in the selected memory cell.[0058] During a write operation of memory device 200, only one memory cell of the same memory cell group can be selected at a time to store information in the selected memory cell. For example, memory cells 210, 212, 214, and 216 of
memory cell group 201₀ can be selected one at a time during a write operation to store information in the selected memory cell (e.g., one of memory cells 210, 212, 214, and 216 in this example). In another example, memory cells 211, 213, 215, and 217 of memory cell group 201₁ can be selected one at a time during a write operation to store information in the selected memory cell (e.g., one of memory cells 211, 213, 215, and 217 in this example).[0059] During a write operation, memory cells of different memory cell groups (e.g., memory cell groups 201₀ and 201₁) that share the same access line (e.g., access line 241, 242, 243, or 244) can be concurrently selected. For example, memory cells 210 and 211 can be concurrently selected during a write operation to store (e.g., concurrently store) information in memory cells 210 and 211. Memory cells 212 and 213 can be concurrently selected during a write operation to store (e.g., concurrently store) information in memory cells 212 and 213. Memory cells 214 and 215 can be concurrently selected during a write operation to store (e.g., concurrently store) information in memory cells 214 and 215. Memory cells 216 and 217 can be concurrently selected during a write operation to store (e.g., concurrently store) information in memory cells 216 and 217.[0060] Information to be stored in a selected memory cell of memory cell group 201₀ during a write operation can be provided through a write path (described above) that includes data line 271 and transistor T2 of the selected memory cell (e.g., memory cell 210, 212, 214, or 216). Information to be stored in a selected memory cell of memory cell group 201₁ during a write operation can be provided through a write path (described above) that includes data line 272 and transistor T2 of the selected memory cell (e.g., memory cell 211, 213, 215, or 217). As described above, the value (e.g., binary value) of information stored in a particular memory cell among memory cells 210 through 217 can be based on the amount of charge in charge storage structure 202 of that particular memory cell.[0061] In a write operation, the amount of charge in charge storage structure 202 of a selected memory cell can be changed (to reflect the value of information stored in the selected memory cell) by applying a voltage on a write path that
includes transistor T2 of that particular memory cell and the data line (e.g., data line 271 or 272) coupled to that particular memory cell. For example, a voltage having one value (e.g., 0V) can be applied on data line 271 (e.g., provide 0V to signal BL1) if information to be stored in a selected memory cell among memory cells 210, 212, 214, and 216 has one value (e.g., “0”). In another example, a voltage having another value (e.g., a positive voltage) can be applied on data line 271 (e.g., provide a positive voltage to signal BL1) if information to be stored in a selected memory cell among memory cells 210, 212, 214, and 216 has another value (e.g., “1”). Thus, information can be stored (e.g., directly stored) in charge storage structure 202 of a particular memory cell by providing the information to be stored (e.g., in the form of a voltage) on a write path (that includes transistor T2) of that particular memory cell.[0062] FIG. 3 shows memory device 200 of FIG. 2 including example voltages V1, V2, V3, and VSHIELD_R used during a read operation of memory device 200, according to some embodiments described herein. The example of FIG. 3 assumes that memory cells 210 and 211 are selected memory cells (e.g., target memory cells) during a read operation to read (e.g., to sense) information stored (e.g., previously stored) in memory cells 210 and 211. Memory cells 212 through 217 are assumed to be unselected memory cells. This means that memory cells 212 through 217 are not accessed, and information stored in memory cells 212 through 217 is not read while information is read from memory cells 210 and 211 in the example of FIG. 3. In this example, access line 241 can be called a selected access line (e.g., selected word line), which is the access line associated with (e.g., coupled to) selected memory cells (e.g., memory cells 210 and 211 in this example). In this example, access lines 242, 243, and 244 can be called unselected access lines (e.g., unselected word lines), which are the access lines associated with (e.g., coupled to) unselected memory cells (e.g., memory cells 212 through 217 in this example).[0063] In FIG. 3, voltages V1, V2, and V3 can represent different voltages applied to respective access lines 241, 242, 243, and 244 and data lines 271 and 272 during a read operation of memory device 200. Voltage V1 can be applied to the
selected access line (e.g., access line 241). In a read operation, voltage V2 can be applied to the unselected access lines (e.g., access lines 242, 243, and 244).[0064] Voltages V1, V2, and V3 can have different values. As an example, voltages V1, V2, and V3 can have values -1V, 0V, and 0.5V, respectively. The specific values of voltages used in this description are only example values. Different values may be used. For example, voltage V1 can have a negative value range (e.g., the value of voltage V1 can be from -3V to -1V).[0065] In the read operation shown in FIG. 3, voltage V1 can have a value (voltage value) to turn on transistor T1 of each of memory cells 210 and 211 (selected memory cells in this example) and turn off (or keep off) transistor T2 of each of memory cells 210 and 211. This allows information to be read from memory cells 210 and 211. Voltage V2 can have a value, such that transistors T1 and T2 of each of memory cells 212 through 217 (unselected memory cells in this example) are turned off (e.g., kept off). Voltage V3 can have a value, such that a current (e.g., read current) may be formed on a read path that includes data line 271 and transistor T1 of memory cell 210 and a read path (a separate read path) that includes data line 272 and transistor T1 of memory cell 211. This allows a detection of current on the read paths (e.g., on respective data lines 271 and 272) coupled to memory cells 210 and 211, respectively. Detection circuitry (not shown) of memory device 200 can operate to translate the value of the detected current (during reading of information from the selected memory cells) into the value (e.g., “0”, “1”, or a combination of multi-bit values) of information read from the selected memory cell. In the example of FIG. 3, the value of the detected currents on data lines 271 and 272 can be translated into the values of information read from memory cells 210 and 211, respectively.[0066] In the read operation shown in FIG. 3, the voltages applied to respective access lines 241, 242, 243, and 244 can cause transistors T1 and T2 of each of memory cells 210 through 217, except transistor T1 of each of memory cells 210 and 211 (selected memory cells), to turn off (or to remain turned off). Transistor T1 of memory cell 210 (selected memory cell) may or may not turn on,
depending on the value of the threshold voltage Vt1 of transistor T1 of memory cell 210. Transistor T1 of memory cell 211 (selected memory cell) may or may not turn on, depending on the value of the threshold voltage Vt1 of transistor T1 of memory cell 211. For example, if transistor T1 of each of the memory cells (e.g., 210 through 217) of memory device 200 is configured (e.g., structured) such that the threshold voltage of transistor T1 is less than zero (e.g., Vt1 < -1V) regardless of the value (e.g., the state) of information stored in a respective memory cell, then transistor T1 of memory cell 210, in this example, can turn on and conduct a current on data line 271 (through transistor T1 of memory cell 210). In this example, transistor T1 of memory cell 211 can also turn on and conduct a current on data line 272 (through transistor T1 of memory cell 211). Memory device 200 can determine the value of information stored in memory cells 210 and 211 based on the value of the currents on data lines 271 and 272, respectively. As described above, memory device 200 can include detection circuitry to measure the value of currents on data lines 271 and 272 during a read operation.[0067] Voltage VSHIELD_R can have a negative value, zero volts, or a positive value. For example, voltage VSHIELD_R can have a range from -1V to +1V. Other values can be used. In some operations (e.g., read operations and non-read operations) of memory device 200, using a negative value (or zero volts) for voltage VSHIELD_R can offer more benefit than using a positive value for voltage VSHIELD_R. For example, voltage VSHIELD_R having a negative value (or zero volts) applied to conductive shield structure 261 can suppress or prevent potential leakage of current in memory cells that are adjacent conductive shield structure 261 or 262, or both. This can improve retention of information stored in the adjacent memory cells.[0068] FIG. 4 shows memory device 200 of FIG. 2 including example voltages V4, V5, V6, V7, and VSHIELD_W used during a write operation of memory device 200, according to some embodiments described herein. The example of FIG. 4 assumes that memory cells 210 and 211 are selected memory cells (e.g., target memory cells) during a write operation to store information in memory cells 210 and 211. Memory cells 212 through 217 are assumed to be unselected memory
cells. This means that memory cells 212 through 217 are not accessed and information is not to be stored in memory cells 212 through 217 while information is stored in memory cells 210 and 211 in the example of FIG. 4.[0069] In FIG. 4, voltages V4, V5, V6, and V7 can represent different voltages applied to respective access lines 241, 242, 243, and 244 and data lines 271 and 272 during a write operation of memory device 200. In a write operation, voltage V4 can be applied to the selected access line (e.g., access line 241). Voltage V5 can be applied to the unselected access lines (e.g., access lines 242, 243, and 244).[0070] Voltages V4, V5, V6, and V7 can have different values. As an example, voltages V4 and V5 can have values of 3V and 0V, respectively. These values are example values. Different values may be used.[0071] The values of voltages V6 and V7 can be the same or different depending on the value (e.g., “0” or “1”) of information to be stored in memory cells 210 and 211. For example, the values of voltages V6 and V7 can be the same (e.g., V6 = V7) if memory cells 210 and 211 are to store information having the same value. As an example, V6 = V7 = 0V if information to be stored in each of memory cells 210 and 211 is “0”. In another example, V6 = V7 = V+ (e.g., V+ is a positive voltage (e.g., from 1V to 3V)) if information to be stored in each of memory cells 210 and 211 is “1”.[0072] In another example, the values of voltages V6 and V7 can be different (e.g., V6 ≠ V7) if memory cells 210 and 211 are to store information having different values. As an example, V6 = 0V if “0” is to be stored in memory cell 210, and V7 = V+ (e.g., V+ is a positive voltage (e.g., from 1V to 3V)) if “1” is to be stored in memory cell 211. As another example, V6 = V+ (e.g., V+ is a positive voltage (e.g., from 1V to 3V)) if “1” is to be stored in memory cell 210, and V7 = 0V if “0” is to be stored in memory cell 211.[0073] The range of voltage of 1V to 3V is used here as an example. A different range of voltages can be used. Further, instead of applying 0V (e.g., V6 = 0V or V7 = 0V) to a particular write data line (e.g., data line 271 or 272) for storing
information having a value of “0” to the memory cell (e.g., memory cell 210 or 211) coupled to that particular write data line, a positive voltage (e.g., V6 > 0V or V7 > 0V) may be applied to that particular data line.[0074] In a write operation of memory device 200 of FIG. 4, voltage V5 can have a value (e.g., V5 = 0V or V5 < 0V) such that transistors T1 and T2 of each of memory cells 212 through 217 (unselected memory cells, in this example) are turned off (e.g., kept off). Voltage V4 can have a value (e.g., V4 > 0V) to turn on transistor T2 of each of memory cells 210 and 211 (selected memory cells in this example) and form a write path between charge storage structure 202 of memory cell 210 and data line 271 and a write path between charge storage structure 202 of memory cell 211 and data line 272. A current (e.g., write current) may be formed between charge storage structure 202 of memory cell 210 (selected memory cell) and data line 271. This current can affect (e.g., change) the amount of charge on charge storage structure 202 of memory cell 210 to reflect the value of information to be stored in memory cell 210. A current (e.g., another write current) may be formed between charge storage structure 202 of memory cell 211 (selected memory cell) and data line 272. This current can affect (e.g., change) the amount of charge on charge storage structure 202 of memory cell 211 to reflect the value of information to be stored in memory cell 211.[0075] In the example write operation of FIG. 4, the value of voltage V6 may cause charge storage structure 202 of memory cell 210 to discharge or to be charged, such that the resulting charge (e.g., charge remaining after the discharge or charge action) on charge storage structure 202 of memory cell 210 can reflect the value of information stored in memory cell 210. Similarly, the value of voltage V7 in this example may cause charge storage structure 202 of memory cell 211 to discharge or to be charged, such that the resulting charge (e.g., charge remaining after the discharge or charge action) on charge storage structure 202 of memory cell 211 can reflect the value of information stored in memory cell 211.[0076] Voltage VSHIELD_W can have a negative value, zero volts, or a positive value. For example, voltage VSHIELD_W can have a range from -1V to +1V. Other
values can be used. Voltage VSHIELD_W can have a value that is the same as (equal to) or different from the value of voltage VSHIELD_R. In some operations (e.g., write operations and non-write operations) of memory device 200, using a negative value (or zero volts) for voltage VSHIELD_W can offer more benefit (e.g., improved retention, as described above) than using a positive value for voltage VSHIELD_W.[0077] FIG. 5A, FIG. 5B, and FIG. 6A through FIG. 6D show different views of a structure of memory device 200 of FIG. 2 with respect to the X, Y, and Z directions, according to some embodiments described herein. FIG. 6E shows a memory device 200E, which is an alternative structure of memory device 200 of FIG. 6D. For simplicity, cross-sectional lines (e.g., hatch lines) are omitted from most of the elements shown in FIG. 5A through FIG. 6E and other figures (e.g., FIG. 8A through FIG. 23C) in the drawings described herein. Some elements of memory device 200 (and other memory devices described herein) may be omitted from a particular figure of the drawings so as to not obscure the description of the element (or elements) being described in that particular figure. The dimensions (e.g., physical structures) of the elements shown in the drawings described herein are not to scale.[0078] FIG. 5A and FIG. 5B show different 3-dimensional views (e.g., isometric views) of memory device 200 including memory cell 210 with respect to the X, Y, and Z directions. FIG. 6A shows a side view (e.g., cross-sectional view) of memory device 200 including memory cells 210, 211, 218, and 219 with respect to the X-Z direction taken along line 6A-6A of FIG. 6C. FIG. 6B shows a view (e.g., cross-sectional view) taken along line 6B-6B of FIG. 6A and FIG. 6C. FIG. 6C shows a top view (e.g., plan view) of memory device 200 of FIG. 6A including relative locations of data lines 271, 272, 273, and 274 (and associated signals BL1, BL2, BL3, and BL4), and access lines 241, 242, 243, and 244 (and associated signals WL1, WL2, WL3, and WL4). FIG. 6D shows a top view (e.g., plan view) of memory device 200 of FIG. 6C including portions of data lines 271, 272, 273, and 274 and a common conductive structure (e.g., a common conductive plate) including semiconductor material 596 and ground connection 297 over substrate 599.
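Collecting the example bias conditions described above for FIG. 3 and FIG. 4, the following short sketch (in Python) tabulates the read and write voltages. The voltage values are the example values given in paragraphs [0064] and [0070] through [0072]; the dictionary keys, function names, and the specific V+ value chosen are illustrative assumptions.

    # Example read biases from FIG. 3 (V1, V2, V3) and write biases
    # from FIG. 4 (V4, V5); values in volts, per the examples above.
    READ_BIAS = {
        "selected_access_line": -1.0,    # V1 applied to access line 241
        "unselected_access_lines": 0.0,  # V2 applied to access lines 242-244
        "data_lines": 0.5,               # V3 applied to data lines 271 and 272
    }
    WRITE_BIAS = {
        "selected_access_line": 3.0,     # V4 turns on write transistor T2
        "unselected_access_lines": 0.0,  # V5 keeps unselected cells off
    }

    def write_data_line_voltage(bit: str, v_plus: float = 2.0) -> float:
        # Per paragraphs [0071]-[0072]: 0V stores "0"; a positive voltage
        # V+ (e.g., 1V to 3V; 2.0V is an assumed example) stores "1".
        return 0.0 if bit == "0" else v_plus

    # Example: concurrently storing "0" in memory cell 210 (via V6 on
    # data line 271) and "1" in memory cell 211 (via V7 on data line 272).
    v6 = write_data_line_voltage("0")  # 0.0
    v7 = write_data_line_voltage("1")  # 2.0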
[0079] As shown in FIG. 5A and FIG. 5B, memory device 200 can include conductive shield structures 261 and 262 located adjacent respective sides of memory cells 210, 212, 214, and 216. For example, conductive shield structure 261 is between and adjacent sides of memory cells 210 and 212. Conductive shield structure 262 is between and adjacent sides of memory cells 214 and 216.[0080] Each of access lines 241, 242, 243, and 244 can be located on a side of a respective memory cell that is opposite from the side of the respective memory cell where conductive shield structure 261 or 262 is located. Each of access lines 241, 242, 243, and 244 and each of conductive shield structures 261 and 262 can include a structure (e.g., a piece (e.g., a layer)) of conductive material (e.g., metal, conductively doped polysilicon, or other conductive materials). Conductive shield structures 261 and 262 can have the same material as (or alternatively different materials from) access lines 241, 242, 243, and 244.[0081] The following description refers to FIG. 5A through FIG. 6D. FIG. 5A shows the structure of one memory cell (e.g., memory cell 210) of memory device 200 with data line 271 shown in exploded view (separated from memory cell 210) to show elements of memory cell 210 located below (under) data line 271. FIG. 5A shows details of memory cell 210. The structures of other memory cells (e.g., memory cells 211 through 217 in FIG. 2) of memory device 200 can be similar to or the same as the structure of memory cell 210 in FIG. 5A through FIG. 6D. In FIG. 2 through FIG. 6C, the same elements are given the same reference numbers. Some portions (e.g., gate oxide and cell isolation structures) of memory device 200 are omitted from FIG. 5A through FIG. 6D so as to not obscure the elements of memory device 200 in the embodiments described herein.[0082] As shown in FIG. 5A, memory device 200 can include a substrate 599 over which memory cell 210 (and other memory cells (not shown) of memory device 200) can be formed. Transistors T1 and T2 of memory cell 210 can be formed vertically with respect to substrate 599. Substrate 599 can be a semiconductor substrate (e.g., silicon-based substrate) or other type of substrate. The Z-direction (e.g., vertical direction) is a direction perpendicular to (e.g.,
outward from) substrate 599. The Z-direction is also perpendicular to (e.g., extended vertically from) the X-direction and the Y-direction. The X-direction and Y-direction are perpendicular to each other.[0083] As shown in FIG. 5A, ground connection 297 can include a structure (e.g., a piece (e.g., a layer)) of conductive material (e.g., conductive region) located over (formed over) substrate 599. Example materials for ground connection 297 include a piece of metal, conductively doped polysilicon, or other conductive materials. Ground connection 297 can be coupled to a ground terminal (not shown) of memory device 200. FIG. 5A shows ground connection 297 contacting (e.g., directly coupled to) substrate 599, as an example. In an alternative structure, memory device 200 can include a dielectric (e.g., a layer of dielectric material, not shown) between ground connection 297 and substrate 599.[0084] As shown in FIG. 5A, memory device 200 can include a semiconductor material 596 formed over ground connection 297. Semiconductor material 596 can include a structure (e.g., a piece (e.g., a layer)) of silicon, polysilicon, or other semiconductor material, and can include a doped region (e.g., p-type doped region), or other conductive materials.[0085] FIG. 6A shows memory cells 218 and 219 and associated data lines 273 and 274 that are not shown in FIG. 2. However, as shown in FIG. 6A, memory cells 218 and 219 can share access line 241 with memory cells 210 and 211. FIG. 6A shows that conductive shield structure 261 and access line 241 can be located on opposite sides (e.g., front side and back side with respect to the Y-direction) of each of memory cells 210, 211, 218, and 219. Conductive shield structure 261 can have a length in the X-direction. Only a portion (e.g., cutaway view) of conductive shield structure 261 in the X-direction is shown in FIG. 6A to expose details of memory cells 210, 211, 218, and 219.[0086] As shown in FIG. 6A, conductive shield structure 261 can have a height H2 in the Z-direction. As shown in FIG. 6A and FIG. 6B, access line 241 can have a height H1 in the Z-direction. As shown in FIG. 6B, the Z-direction is perpendicular to the Y-direction, which is also a direction from one memory cell to
the next memory cell (e.g., from memory cell 210 to memory cell 212) in the Y-direction. Heights H1 and H2 can be the same (equal in dimension). However (e.g., FIG. 7), conductive shield structure 261 can be structured (e.g., formed) such that conductive shield structure 261 can have a height H2’ (FIG. 6A) greater than height H2. Thus, the height of conductive shield structure 261 can be the same as the height of access line 241 (e.g., H2 = H1) or greater than the height of access line 241 (e.g., H2’ > H1).[0087] As shown in FIG. 6B, access line 241 can have a thickness W1 in the Y-direction, which is parallel to a direction from one memory cell to the next memory cell (e.g., from memory cell 210 to memory cell 212) in the Y-direction. As shown in FIG. 6B, conductive shield structure 261 can have a thickness W2. Thickness W2 can be greater than thickness W1 (e.g., W2 > W1). However (e.g., FIG. 8), conductive shield structure 261 can be structured (e.g., formed), such that conductive shield structure 261 can have a thickness (in the Y-direction) that is the same as (equal to) the thickness of access line 241.[0088] As shown in FIG. 6B, like access line 241, each of access lines 242, 243, and 244 can have height H1 and thickness W1. Like conductive shield structure 261, other conductive shield structures (e.g., conductive shield structure 262) of memory device 200 can have height H2 and a thickness W2.[0089] As shown in FIG. 6B, memory device 200 can include trenches 290, 291, 292, 293, and 294 that have different (unequal) widths (trench width) TW1 and TW2. Each of trenches 291 and 293 can have width TW1. Trench 292 (also trenches 290 and 294) can have width TW2. Width TW2 can be greater than width TW1. Alternatively, trenches 290, 291, 292, 293, and 294 can have the same (equal) width. For example, in an alternative structure of memory device 200, each of trenches 291 and 293 can have a width TW1’ (not shown), and trench 292 (also trenches 290 and 294) can have a width TW2’ (not shown), where width TW1’ can be the same as width TW2’ (e.g., TW1’ = TW2’).[0090] As shown in FIG. 6B, access lines 241, 242, 243, and 244 and conductive shield structures 261 and 262 can be located in respective trenches 290,
291, 292, 293, and 294. In memory device 200, as shown in FIG. 6B, not all trenches (fewer than all trenches) 290, 291, 292, 293, and 294 have an access line (or access lines) located in them. For example, trenches 291 and 293 do not have an access line located in them. Thus, trenches 291 and 293 are void of an access line (among access lines 241, 242, 243, and 244). This means that none of access lines 241, 242, 243, and 244 is located in trenches 291 and 293. The trenches that do not have a conductive shield structure (e.g., conductive shield structure 261 or 262) can have an access line (e.g., access line 241 in trench 290 or access line 244 in trench 294) or multiple access lines (e.g., access lines 242 and 243 in trench 292).[0091] As shown in FIG. 6B, memory device 200 can include dielectric materials 545 located in trenches 290, 291, 292, 293, and 294 to electrically separate access lines 241, 242, 243, and 244 and conductive shield structures 261 and 262 from other elements (e.g., read and write channel regions, and charge storage structures) of the memory cells (e.g., memory cells 210, 212, 214, and 216) of memory device 200.[0092] Each of memory cells 210, 212, 214, and 216 can be located between and adjacent two respective trenches among trenches 290, 291, 292, 293, and 294. For example, memory cell 210 can be located between trenches 290 and 291. Memory cell 212 can be located between trenches 291 and 292.[0093] Thus, as shown in FIG. 6B, each memory cell can have opposite sides (e.g., left side and right side in the Y-direction). Each access line (e.g., 242) can be located in a trench (e.g., 292) and adjacent a side of a memory cell (e.g., right side of memory cell 212) in the Y-direction. Each conductive shield structure can be located in a trench (e.g., 291) and adjacent a side of a memory cell (e.g., left side of memory cell 212) in the Y-direction.[0094] As shown in FIG. 6B, each of conductive shield structures 261 and 262 in a particular trench (among trenches 290, 291, 292, 293, and 294) can be electrically separated from the elements of adjacent memory cells by dielectric materials 545 in that particular trench. For example, as shown in FIG. 6B, each of memory cells 210 and 212 can include material (e.g., write channel region) 520 formed over charge storage structure 202. Conductive shield structure 261 can be electrically separated from materials 520 of memory cells 210 and 212 by respective dielectric materials 545 in trench 291. As shown in FIG. 6B, dielectric materials 545 in trench 291 can be adjacent (e.g., can contact or indirectly contact) materials 520 and charge storage structures 202 of respective memory cells 210 and 212. Conductive shield structure 261 can be between dielectric materials 545 and adjacent (e.g., contacting or indirectly contacting) dielectric materials 545.[0095] Charge storage structure 202 (FIG. 5A through FIG. 6D) of each memory cell of memory device 200 can include a charge storage material (or a combination of materials), which can include a piece (e.g., a layer) of semiconductor material (e.g., polysilicon), a piece (e.g., a layer) of metal, or a piece of material (or materials) that can trap charge. The materials for charge storage structure 202 and the access lines (e.g., access line 241) of memory device 200 can be the same or can be different. As shown in FIG. 5A, FIG. 6A, and FIG.
6B, charge storage structure 202 can include a portion (e.g., bottom portion) that is closer (e.g., extends in the Z-direction closer) to substrate 599 than the bottom of access line 241.[0096] As shown in FIG. 6A, each charge storage structure 202 can include an edge (e.g., top edge) 202’, and access line 241 can include an edge (e.g., bottom edge) 241E. FIG. 6A shows an example where edge 202’ is at a specific distance (e.g., distance shown in FIG. 6A) from edge 241E. However, the distance between edge 202’ of charge storage structure 202 and edge 241E of access line 241 can vary. For example, FIG. 6A shows edge 241E being below edge 202’ with respect to the Z-direction, such that access line 241 can overlap (in the Z-direction) charge storage structure 202. However, edge 241E can alternatively be above edge 202’ with respect to the Z-direction, such that access line 241 may not overlap (in the Z-direction) charge storage structure 202.[0097] As shown in FIG. 6A, material 520 can be located between data line 271 and charge storage structure 202. Material 520 can be electrically coupled to (e.g., directly coupled to (contact)) data line 271. Material 520 can also be
electrically coupled to (e.g., directly coupled to (contact)) charge storage structure 202 of memory cell 210. As described above, charge storage structure 202 of memory cell 210 can form the memory element of memory cell 210. Thus, memory cell 210 can include a memory element (which is charge storage structure 202) located between substrate 599 and material 520 with respect to the Z-direction, and the memory element can contact (e.g., be directly coupled to) material 520.[0098] Material 520 can form a source (e.g., source terminal), a drain (e.g., drain terminal), and a channel region (e.g., write channel region) between the source and the drain of transistor T2 of memory cell 210. Thus, as shown in FIG. 5A, FIG. 6A, and FIG. 6B, the source, channel region, and drain of transistor T2 of memory cell 210 can be formed from a single piece of the same material (or alternatively, a single piece of the same combination of materials), such as material 520. Therefore, the source, the drain, and the channel region of transistor T2 of memory cell 210 can be formed from the same material (e.g., material 520) of the same conductivity type (e.g., either n-type or p-type). Other memory cells of memory device 200 can also include material 520 like memory cell 210.[0099] Material 520 can include a structure (e.g., a piece (e.g., a layer)) of semiconductor material. In the example where transistor T2 is an NFET (as described above), material 520 can include n-type semiconductor material (e.g., n-type silicon).[00100] In another example, the semiconductor material that forms material 520 can include a structure (e.g., a piece) of oxide material. Examples of the oxide material used for material 520 include semiconducting oxide materials, transparent conductive oxide materials, and other oxide materials.[00101] As an example, material 520 can include at least one of zinc tin oxide (ZTO), indium zinc oxide (IZO), zinc oxide (ZnOx), indium gallium zinc oxide (IGZO), indium gallium silicon oxide (IGSO), indium oxide (InOx, In2O3), tin oxide (SnO2), titanium oxide (TiOx), zinc oxide nitride (ZnxOyNz), magnesium zinc oxide (MgxZnyOz), indium zinc oxide (InxZnyOz), indium gallium zinc oxide (InxGayZnzOa), zirconium indium zinc oxide (ZrxInyZnzOa), hafnium indium zinc
oxide (HfxInyZnzOa), tin indium zinc oxide (SnxInyZnzOa), aluminum tin indium zinc oxide (AlxSnyInzZnaOd), silicon indium zinc oxide (SixInyZnzOa), zinc tin oxide (ZnxSnyOz), aluminum zinc tin oxide (AlxZnySnzOa), gallium zinc tin oxide (GaxZnySnzOa), zirconium zinc tin oxide (ZrxZnySnzOa), indium gallium silicon oxide (InGaSiO), and gallium phosphide (GaP).[00102] Using the materials listed above in memory device 200 provides improvements and benefits for memory device 200. For example, during a read operation, to read information from a selected memory cell (e.g., memory cell 210), charge from charge storage structure 202 of the selected memory cell may leak to transistor T2 of the selected memory cell. Using the materials listed above for the channel region (e.g., material 520) of transistor T2 can reduce or prevent such a leakage. This improves the accuracy of information read from the selected memory cell and improves the retention of information stored in the memory cells of the memory device (e.g., memory device 200) described herein.[00103] The materials listed above are examples of material 520. However, other materials (e.g., a relatively high band-gap material) different from the above-listed materials can be used.[00104] As shown in FIG. 5A, FIG. 6A, and FIG. 6B, material 520 and charge storage structure 202 of memory cell 210 can be electrically coupled (e.g., directly coupled) to each other, such that material 520 can contact charge storage structure 202 of memory cell 210 without an intermediate material (e.g., without a conductive material) between charge storage structure 202 of memory cell 210 and material 520. In an alternative structure (not shown), material 520 can be electrically coupled to charge storage structure 202 of memory cell 210, such that material 520 is not directly coupled to (not contacting) charge storage structure 202 of memory cell 210, but material 520 is coupled to (e.g., indirectly contacting) charge storage structure 202 of memory cell 210 through an intermediate material (e.g., a conductive material) between charge storage structure 202 of memory cell 210 and material 520.
[00105] As shown in FIG. 5A, FIG. 6A, FIG. 6C, and FIG. 6D, memory cell 210 can include a material 510, which can include a structure (e.g., a piece (e.g., a layer)) of semiconductor material. Example materials for material 510 can include silicon, polysilicon (e.g., undoped or doped polysilicon), germanium, silicon-germanium, or other semiconductor materials and semiconducting oxide materials (oxide semiconductors, e.g., SnO or other oxide semiconductors).[00106] As described above with reference to FIG. 2, transistor T1 of memory cell 210 includes a channel region (e.g., read channel region). In FIG. 5A, FIG. 6A, FIG. 6C, and FIG. 6D, the channel region of transistor T1 of memory cell 210 can include (e.g., can be formed from) material 510. Material 510 can be electrically coupled to (e.g., directly coupled to (contact)) data line 271. As described above with reference to FIG. 2, memory cell 210 can include a read path. In FIG. 5A and FIG. 6A through FIG. 6D, material 510 (e.g., the read channel region of transistor T1 of memory cell 210) can be part of the read path of memory cell 210 that can carry a current (e.g., read current) during a read operation of reading information from memory cell 210. For example, during a read operation, to read information from memory cell 210, material 510 can conduct a current (e.g., read current (e.g., holes)) between data line 271 and ground connection 297 (through part of semiconductor material 596). The direction of the read current can be from data line 271 to ground connection 297 (through material 510 and part of semiconductor material 596). In the example where transistor T1 is a PFET and transistor T2 is an NFET, the material that forms material 510 can have a different conductivity type from material 520. For example, material 510 can include p-type semiconductor material (e.g., p-type silicon) regions, and material 520 can include n-type semiconductor material (e.g., n-type gallium phosphide (GaP)) regions.[00107] As shown in FIG. 5A and FIG. 6A, memory cell 210 can include dielectric materials 515A and 515B. Dielectric materials 515A and 515B can be gate oxide regions that electrically separate each of charge storage structure 202 and material 520 from material 510 (e.g., the channel region of transistor T1).
Dielectric materials 515A and 515B can also electrically separate charge storage structure 202 from semiconductor material 596.[00108] Example materials for dielectric materials 515A and 515B include silicon dioxide, hafnium oxide (e.g., HfO2), aluminum oxide (e.g., Al2O3), or other dielectric materials. In an example structure of memory device 200, dielectric materials 515A and 515B include a high-k dielectric material (e.g., a dielectric material having a dielectric constant greater than the dielectric constant of silicon dioxide). Using such a high-k dielectric material (instead of silicon dioxide) can improve the performance (e.g., reduce current leakage, increase drive capability of transistor T1, or both) of memory device 200.[00109] As shown in FIG. 5A and FIG. 6A, the memory cells (e.g., memory cells 210, 211, 218, and 219) of memory device 200 can share (e.g., can electrically couple to) semiconductor material 596. For example, as shown in FIG. 6A, the read channel regions of the memory cells (e.g., material 510 of each of memory cells 210, 211, 218, and 219) of memory device 200 can contact (e.g., can be electrically coupled to) semiconductor material 596.[00110] As shown in FIG. 5A and FIG. 6A, memory device 200 can include a conductive region 597 (e.g., a common conductive plate) under the memory cells (e.g., memory cells 210, 211, 218, and 219 in FIG. 6A) of memory device 200. Conductive region 597 can include at least one of the materials (e.g., doped polysilicon) of semiconductor material 596 and the material (e.g., metal or doped polysilicon) of ground connection 297. For example, conductive region 597 can include the material of semiconductor material 596, the material of ground connection 297, or the combination of the materials of semiconductor material 596 and ground connection 297. Thus, as shown in FIG. 6A, the memory cells (e.g., memory cells 210, 211, 218, and 219) of memory device 200 can share conductive region 597 (which can include any combination of semiconductor material 596 and ground connection 297).[00111] As shown in FIG. 5A and FIG. 6A, access line 241 can be adjacent part of material 510 and part of material 520 and can span across (e.g., overlap in
the X-direction) part of material 510 and part of material 520. As described above, material 510 can form part of a read channel region of transistor T1, and material 520 can form part of a write channel region of transistor T2. Thus, as shown in FIG. 5A and FIG. 6A, access line 241 can span across (e.g., overlap) part of (e.g., on a side (e.g., front side) in the Y-direction) both read and write channels of transistors T1 and T2, respectively. As shown in FIG. 6A, access line 241 can also span across (e.g., overlap in the X-direction) part of material 510 (e.g., a portion of the read channel region of transistor T1) and part of material 520 (e.g., a portion of the write channel region of transistor T2) of other memory cells (e.g., memory cells 211, 218, and 219) of memory device 200. The spanning (e.g., overlapping) of access line 241 across material 510 and material 520 allows access line 241 (a single access line) to control (e.g., to turn on or turn off) both transistors T1 and T2 of memory cells 210, 211, 218, and 219.[00112] As shown in FIG. 6A, memory device 200 can include dielectric material (e.g., silicon dioxide) 526 that can form a structure (e.g., a dielectric) to electrically separate (e.g., isolate) parts of two adjacent (in the X-direction) memory cells of memory device 200. For example, dielectric material 526 between memory cells 210 and 211 can electrically separate material 520 (e.g., the write channel region of transistor T2) of memory cell 210 from material 520 (e.g., the write channel region of transistor T2) of memory cell 211, and electrically separate charge storage structure 202 of memory cell 210 from charge storage structure 202 of memory cell 211.[00113] As shown in FIG. 6A, memory device 200 can include dielectric portions 555. Material (e.g., read channel region) 510 of two adjacent memory cells (e.g., memory cells 211 and 218) can be electrically separated from each other by one of dielectric portions 555. Some portions (e.g., materials) of the memory cells of memory device 200 can be formed adjacent (e.g., formed on) a side wall (e.g., vertical portion with respect to the Z-direction) of a respective dielectric portion among dielectric portions 555. For example, as shown in FIG. 6A, material 510 (e.g., semiconductor material portion) of memory cell 210 can be formed
adjacent (e.g., formed on) a side wall (not labeled) of dielectric portion 555 (on the left of memory cell 210). In another example, material 510 (e.g., semiconductor material portion) of memory cell 211 can be formed adjacent (e.g., formed on) a side wall (not labeled) of dielectric portion 555 between memory cells 211 and 218.[00114] Dielectric materials 545 can be the same as (or alternatively, different from) the material (or materials) of dielectric materials 515A and 515B. Example materials for dielectric materials 545 can include silicon dioxide, hafnium oxide (e.g., HfO2), aluminum oxide (e.g., Al2O3), or other dielectric materials.[00115] The above description focuses on the structure of memory cell 210. Other memory cells (e.g., memory cells 211, 218, and 219 in FIG. 6A) of memory device 200 can include elements structured in ways similar or the same as the elements of memory cell 210, described above. For example, as shown in FIG. 6A, memory cell 211 can include charge storage structure 202, material (e.g., write channel region) 520, material 510 (e.g., read channel region), and dielectric materials 525A and 525B. The material (or materials) for dielectric materials 525A and 525B can be the same as the material (or materials) for dielectric materials 515A and 515B. Memory cells 218 and 219 can include elements structured in ways similar or the same as the elements of memory cells 210 and 211, respectively.[00116] FIG. 6C shows a top view (e.g., plan view) of a portion of memory device 200 of FIG. 2, FIG. 6A, and FIG. 6B. For simplicity, some elements of memory device 200 are omitted from FIG. 6C. FIG. 6C shows relative locations of data lines 271, 272, 273, and 274 (and associated signals BL1, BL2, BL3, and BL4), and access lines 241, 242, 243, and 244 (and associated signals WL1, WL2, WL3, and WL4). FIG. 6C also shows relative locations of trenches 290, 291, 292, 293, and 294 (also shown in FIG. 6B).[00117] The following description describes data line 271. Other data lines (e.g., data lines 272, 273, and 274) of memory device 200 can have similar structure and material as data line 271. As shown in FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B, and FIG. 6C, data line 271 (associated with signal BL1) can have a length in the Y-direction, a width in the X-direction, and a thickness in the Z-direction. Data line
271 can include a conductive material (or a combination of materials) that can be structured as a conductive line (e.g., conductive region) having a length in the Y-direction. Example materials for data line 271 include metal, conductively doped polysilicon, or other conductive materials. Other data lines 272, 273, and 274 (associated with signals BL2, BL3, and BL4, respectively) can have a length, a width, a thickness, and a material similar to or the same as data line 271.[00118] FIG. 6D shows a top view of memory device 200 including a common conductive structure (e.g., a common conductive plate) including semiconductor material 596 and ground connection 297 over substrate 599.[00119] FIG. 6E shows a top view of memory device 200E including separate conductive structures (e.g., separate conductive strips) unlike the common conductive structure (e.g., a common conductive plate) of FIG. 6D. As shown in FIG. 6E, semiconductor material 596 and ground connection 297 can be divided (e.g., patterned) into separate conductive structures having a length along the Y-direction, which is also the direction of (e.g., parallel to) the length of each of data lines 271, 272, 273, and 274. Memory cells coupled to the same data line can share a respective conductive structure (formed under the memory cells). In an alternative structure (not shown) of memory device 200E, semiconductor material 596 and ground connection 297 can be divided (e.g., patterned) into separate conductive structures having a length along the X-direction, which is also the direction of (e.g., parallel to) the length of each of access lines 241, 242, 243, and 244. Each of the conductive strips having the length in the Y-direction in the structure shown in FIG. 6E (or having a length in the X-direction (not shown) in an alternative structure) can be individually coupled to ground during an operation (e.g., read or write operation) of memory device 200E.[00120] The structure of memory device 200 allows it to have a relatively smaller size (e.g., smaller footprint) and improved (e.g., reduced) power consumption (as a result of using a single access line (e.g., word line) to control two transistors of a corresponding memory cell). Other improvements and benefits of memory device 200 are described below.
[00121] In the 2T memory cell structure of memory device 200, the threshold voltage (e.g., Vt2) of transistor T2 can be relatively high for proper operation of memory device 200. For example, the threshold voltage of transistor T2 can be relatively high, so that transistor T2 can be properly turned on (e.g., during a write operation) and properly turned off (e.g., during a read operation). Including conductive shield structures (e.g., conductive shield structures 261 and 262) in memory device 200 can allow transistor T2 to have a relatively more relaxed threshold voltage (e.g., a reduced Vt2).[00122] The conductive shield structures can also suppress or prevent potential leakage of current (e.g., leakage through transistor T2) in the memory cell. This can improve retention of information stored in the memory cell.[00123] Further, the conductive shield structures of memory device 200 can reduce capacitive coupling between adjacent access lines. This can mitigate disturbance between the charge storage structures of adjacent memory cells associated with different access lines.[00124] Moreover, the conductive shield structures may boost the capacitance of the charge storage structure (e.g., charge storage structure 202) of memory device 200. This can lead to improved operation (e.g., read operation) of memory device 200.[00125] FIG. 7 shows a memory device 700 including conductive shield structures 261 and 262 having respective heights (e.g., H2’) greater than the heights (e.g., H1) of access lines 241, 242, 243, and 244, according to some embodiments described herein. As shown in FIG. 7, each of heights H2’ and H1 is measured (e.g., in nanometers) in the Z-direction, and height H2’ is greater than height H1 (e.g., H2’ > H1), as also described above with reference to FIG. 6A. Memory device 700 can have improvements and benefits similar to those of memory device 200 described above.[00126] FIG. 8 shows a memory device 800 including conductive shield structures 261 and 262 and access lines 241, 242, 243, and 244 having the same thickness W3, according to some embodiments described herein. As shown in FIG.
8, memory device 800 can include trenches (not labeled, but they can be like trenches 290, 291, 292, 293, and 294 in FIG. 6B) having respective widths TW1 and TW2 (like widths TW1 and TW2 in FIG. 6B). Alternatively, the trenches of memory device 800 can have the same (equal) width. For example, in an alternative structure (not shown) of memory device 800, the trenches of memory device 800 can have the same width (e.g., width TW1 = TW2, not shown in FIG. 8). Memory device 800 can have improvements and benefits similar to those of memory device 200 described above.[00127] As described above with reference to FIG. 5A through FIG. 8, memory devices 200, 200E (FIG. 6E), 700, and 800 can have conductive shield structures 261 and 262, access lines 241, 242, 243, and 244, and trenches 290, 291, 292, 293, and 294 with corresponding thicknesses and widths (e.g., W1, W2, W3, H1, H2, H2’, TW1, TW1’, TW2, and TW2’) shown in FIG. 5A through FIG. 8. However, the memory device described herein can be structured (e.g., can be formed) to include any combination of the thicknesses and widths described above. For example, the thicknesses and widths of respective conductive shield structures 261 and 262, access lines 241, 242, 243, and 244, and trenches 290, 291, 292, 293, and 294 can be any combination of W1, W2, W3, H1, H2, H2’, TW1, TW1’, TW2, and TW2’.[00128] FIG. 9 through FIG. 22C show different views of elements during processes of forming a memory device 900, according to some embodiments described herein. Some or all of the processes used to form memory device 900 can be used to form memory devices 200, 200E, 700, and 800 described above with reference to FIG. 2 through FIG. 8.[00129] FIG. 9 shows memory device 900 after different levels (e.g., layers) of materials are formed in respective levels (e.g., layers) of memory device 900 in the Z-direction over a substrate 999. The different levels of materials include a dielectric material 930, a semiconductor material 996, and a conductive material 997. Dielectric material 930, semiconductor material 996, and conductive material 997 can be formed in a sequential fashion, one material after another, over substrate
999. For example, the processes used in FIG. 9 can include forming (e.g., depositing) conductive material 997 over substrate 999, forming (e.g., depositing) semiconductor material 996 over conductive material 997, and forming (e.g., depositing) dielectric material 930 over semiconductor material 996.[00130] Substrate 999 can be similar to or identical to substrate 599 of FIG. 5A. Conductive material 997 can include a material (or materials) similar to or identical to that of the material for ground connection 297 of memory device 200 (FIG. 5A through FIG. 8). For example, conductive material 997 can include metal, conductively doped polysilicon, or other conductive materials.[00131] Semiconductor material 996 can include a material (or materials) similar to or identical to that of the material for semiconductor material 596 of memory device 200 (FIG. 5A and FIG. 6A). For example, semiconductor material 996 can include silicon, polysilicon, or other semiconductor material, and can include a doped region (e.g., p-type doped region). As described below in subsequent processes of forming memory device 900, semiconductor material 996 can be structured to form part of a channel region (e.g., read channel region) for a respective memory cell of memory device 900.[00132] Dielectric material 930 of FIG. 9 can include a nitride material (e.g., silicon nitride (e.g., Si3N4)), an oxide material (e.g., SiO2), or other dielectric materials. As described below in subsequent processes of forming memory device 900, dielectric material 930 can be processed into dielectric portions to form part of cell isolation structures to electrically isolate one memory cell from another memory cell of memory device 900.[00133] FIG. 10 shows memory device 900 after trenches (e.g., openings) 1001 and 1002 are formed. Forming trenches 1001 and 1002 can include removing (e.g., by patterning) part of dielectric material 930 (FIG. 9) at the locations of trenches 1001 and 1002 and leaving portions (e.g., dielectric portions) 1031, 1032, and 1033 (which are remaining portions of dielectric material 930) as shown in FIG. 10.
[00134] Each of trenches 1001 and 1002 can have a length in the Y-direction, a width (shorter than the length) in the X-direction, and a bottom (not labeled) resting on (e.g., bounded by) a respective portion of semiconductor material 996. Each of trenches 1001 and 1002 can include opposing side walls (e.g., vertical side walls) formed by respective portions 1031, 1032, and 1033. For example, trench 1001 can include a side wall 1011 (formed by portion 1031) and a side wall 1012 (formed by portion 1032). Trench 1002 can include a side wall 1013 (formed by portion 1032) and a side wall 1014 (formed by portion 1033).[00135] FIG. 11 shows memory device 900 after a material 1110’ and a material 1110” are formed (e.g., deposited) in trenches 1001 and 1002, respectively. As shown in FIG. 11, material 1110’ can be formed on side walls 1011 and 1012 and on the bottom (e.g., on a portion of semiconductor material 996) of trench 1001. Material 1110” can be formed on side walls 1013 and 1014 and on the bottom (e.g., on another portion of semiconductor material 996) of trench 1002.[00136] Materials 1110’ and 1110” can be the same material. An example of material 1110’ and material 1110” includes a semiconductor material. Materials 1110’ and 1110” can have the same properties as the materials that form portions 510A, 510B, 511A, and 511B (e.g., read channel regions) of transistors T1 of respective memory cells of memory device 200 of FIG. 5A and FIG. 6A. As described below in subsequent processes (e.g., FIG. 19A) of forming memory device 900, materials 1110’ and 1110” can be structured to form channel regions (e.g., read channel regions) of transistors (e.g., transistors T1) of respective memory cells of memory device 900. Thus, each of materials 1110’ and 1110” can conduct a current (e.g., conduct holes) during an operation (e.g., a read operation) of memory device 900.[00137] The process of forming materials 1110’ and 1110” can include a doping process. Such a doping process can include introducing dopants into materials 1110’ and 1110” to allow a transistor (e.g., transistor T1) of a respective memory cell of memory device 900 to include a specific structure. For example, the doping process used in FIG. 11 can include introducing dopants (e.g., using a laser
anneal process) with different dopant concentrations for different parts of materials 1110’ and 1110”, such that the transistor that includes material 1110’ (or material 1110”) can have a PFET structure. In such a PFET structure, part of material 1110’ (or material 1110”) can form a channel region (e.g., read channel region) to conduct currents (e.g., holes) during an operation (e.g., read operation) of memory device 900.[00138] FIG. 12 shows memory device 900 after dielectric materials (e.g., oxide materials) 1215’ and 1215” are formed (e.g., deposited) on materials 1110’ and 1110”, respectively. Dielectric materials 1215’ and 1215” can be deposited, such that dielectric materials 1215’ and 1215” can be conformal to materials 1110’ and 1110”, respectively. Materials 1215’ and 1215” can have the same properties as the materials (e.g., oxide materials) that form dielectric materials 515A, 515B, 525A, and 525B of memory device 200 of FIG. 5A and FIG. 6A.[00139] FIG. 13 shows memory device 900 after materials (e.g., charge storage materials) 1302’, 1302”, 1302”’, and 1302”” are formed on respective side walls of materials 1215’ and 1215”. Materials 1302’, 1302”, 1302”’, and 1302”” are electrically separated from each other. As described below in subsequent processes (FIG. 19A) of forming memory device 900, each of materials 1302’, 1302”, 1302”’, and 1302”” can be structured to form a charge storage structure of a respective memory cell of memory device 900. Materials 1302’, 1302”, 1302”’, and 1302”” can include a material (e.g., polysilicon) similar or identical to the material of charge storage structure 202 of the memory cells (e.g., memory cell 210 or 211) of memory device 200 (FIG. 5A and FIG. 6A).[00140] FIG. 14 shows memory device 900 after dielectric materials 1426’ and 1426” are formed (e.g., filled) in open spaces in trenches 1001 and 1002, respectively. Dielectric materials 1426’ and 1426” can include an oxide material. As described below in subsequent processes of forming memory device 900, dielectric materials 1426’ and 1426” can form part of an isolation structure that can electrically isolate parts of (e.g., charge storage structures of) two adjacent (in the X-direction) memory cells of memory device 900.
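As a compact recap of the process sequence described for FIG. 9 through FIG. 14, the following sketch (in Python) lists the steps in order. The step names and the list representation are illustrative assumptions chosen for readability; they do not add process parameters beyond those stated above.

    # Ordered recap of the formation steps described for FIG. 9 - FIG. 14.
    process_flow = [
        ("FIG. 9",  "deposit conductive material 997, semiconductor material 996, "
                    "and dielectric material 930 over substrate 999"),
        ("FIG. 10", "pattern dielectric material 930 to form trenches 1001 and 1002"),
        ("FIG. 11", "deposit (and dope) materials 1110' and 1110'' in the trenches "
                    "to serve as read channel regions"),
        ("FIG. 12", "deposit conformal dielectric (gate oxide) materials 1215' and 1215''"),
        ("FIG. 13", "form charge storage materials 1302' through 1302'''' on the side walls"),
        ("FIG. 14", "fill the remaining openings with dielectric materials 1426' and 1426''"),
    ]

    for figure, step in process_flow:
        print(f"{figure}: {step}")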
[00141] FIG. 15 shows memory device 900 after dielectric materials 1526’ and 1526” are formed at locations 1501 and 1502, respectively. Forming dielectric materials 1526’ and 1526” can include removing (e.g., by using an etch process) part (e.g., top part) of each of dielectric materials 1426’ and 1426” (FIG. 14), such that the remaining parts of dielectric materials 1426’ and 1426” are dielectric materials 1526’ and 1526” (FIG. 15), respectively.[00142] FIG. 16 shows memory device 900 after materials 1602’, 1602”, 1602”’, and 1602”” are formed at locations 1611 and 1612. Forming materials 1602’, 1602”, 1602”’, and 1602”” can include removing (e.g., by using an etch process) part (e.g., top part) of each of materials 1302’, 1302”, 1302”’, and 1302”” (FIG. 13), such that the remaining parts of materials 1302’, 1302”, 1302”’, and 1302”” are materials 1602’, 1602”, 1602”’, and 1602”” (FIG. 16), respectively.[00143] In FIG. 14, FIG. 15, and FIG. 16, part (e.g., top part) of dielectric materials 1426’ and 1426” (FIG. 14) and part (e.g., top part) of materials 1302’, 1302”, 1302”’, and 1302”” (FIG. 14) were removed in separate processes (e.g., multiple steps) as described with reference to FIG. 15 and FIG. 16. However, a single process (e.g., single step) can be used to remove part of dielectric materials 1426’ and 1426” (FIG. 14) and part of materials 1302’, 1302”, 1302”’, and 1302”” (FIG. 14).[00144] FIG. 17 shows memory device 900 after materials 1720’, 1721’, 1720”, and 1721” are formed. Forming materials 1720’, 1721’, 1720”, and 1721” can include depositing an initial material (or materials) on dielectric materials 1526’ and 1526” and materials 1602’, 1602”, 1602”’, and 1602””. Then, the process used in FIG. 17 can include removing (e.g., by using an etch process) a portion of the initial material at locations 1701 and 1702. Materials 1720’, 1721’, 1720”, and 1721” are the remaining portions of the initial material. As shown in FIG. 17, materials 1720’, 1721’, 1720”, and 1721” are electrically separated from each other. However, materials 1720’, 1721’, 1720”, and 1721” are electrically coupled
to (e.g., directly coupled to) materials 1602’, 1602”, 1602”’, and 1602””, respectively.[00145] Materials 1720’, 1721’, 1720”, and 1721” can include materials similar or identical to material (e.g., write channel region) 520 of transistor T2 of memory device 200 (FIG. 5A and FIG. 6A). As described below in subsequent processes (FIG. 19) of forming memory device 900, each of materials 1720’, 1721’, 1720”, and 1721” can form a channel region (e.g., write channel region) of a transistor (e.g., transistor T2) of a respective memory cell of memory device 900. Thus, each of materials 1720’, 1721’, 1720”, and 1721” can conduct a current (e.g., conduct electrons) during an operation (e.g., a write operation) of memory device 900.[00146] FIG. 18 shows memory device 900 after dielectric materials 1826’ and 1826” are formed at (e.g., filled in) locations 1701 and 1702. Dielectric materials 1826’ and 1826” can be the same as dielectric materials 1426’ and 1426”. As described below in subsequent processes of forming memory device 900, dielectric materials 1826’ and 1826” can form part of an isolation structure that can electrically isolate parts of (e.g., write channel regions) two adjacent (in the X-direction) memory cells of memory device 900.[00147] FIG. 19A shows memory device 900 after trenches 1911, 1912, and 1913 are formed (in the X-direction) across the materials of memory device 900. Each of trenches 1911, 1912, and 1913 can have a length in the X-direction, a width (shorter than the length) in the Y-direction, and a bottom (not labeled) resting on (e.g., bounded by) a respective portion of semiconductor material 996. Alternatively, each of trenches 1911, 1912, and 1913 can have a bottom (not labeled) resting on (e.g., bounded by) a respective portion of conductive material 997 (instead of semiconductor material 996). Forming trenches 1911, 1912, and 1913 can include removing (e.g., by cutting (e.g., etching) in the Z-direction) part of the materials of memory device 900 at locations of trenches 1911, 1912, and 1913 and leaving portions (e.g., slices) of the structure of memory device 900 shown in FIG. 19A.[00148] After portions (at the locations of trenches 1911, 1912, and 1913) of memory device 900 are removed (e.g., cut), the remaining portions can form parts of memory cells of memory device 900. For example, memory device 900 can include memory cells 210’, 211’, 210”, and 211” in one row along the X-direction, and cells 212’, 213’, 212”, and 213” in another row along the X-direction. Memory cells 210’ and 211’ can correspond to memory cells 210 and 211, respectively, of memory device 200 (FIG. 2 and FIG. 7). Memory cells 212’ and 213’ in FIG. 19A can correspond to memory cells 212 and 213, respectively, of memory device 200 (FIG. 2).[00149] For simplicity, only some of similar elements (e.g., portions) of memory device 900 in FIG. 19A are labeled. For example, memory device 900 can include dielectric portions (e.g., cell isolation structures) 1931, 1932, 1933, 1934, 1935, and 1936, and dielectric materials 1926A and 1926B. Dielectric portions 1931 and 1932 can correspond to two respective dielectric portions 555 of memory device 200 of FIG. 6A.[00150] FIG. 19B shows an enlarged portion of memory device 900 of FIG. 19A. As shown in FIG.
19B, memory cell 210’ can include portions 1910A and 1910B (which can be part of the read channel region of memory cell 210’), dielectric materials 1915A and 1915B, material (e.g., write channel region) 1920, and charge storage structure 1902 (directly below material 1920). Memory cell 211’ can include portions 1911A and 1911B (which can be part of the read channel region of memory cell 211’), dielectric materials 1925A and 1925B, material (e.g., write channel region) 1921, and charge storage structure 1902 (directly below material 1921).[00151] As described above with reference to FIG. 9 through FIG. 19C, part of each of the memory cells of memory device 900 can be formed from a self-aligned process, which can include formation of trenches 1001 and 1002 (FIG. 10A) in the Y-direction and trenches 1911, 1912, and 1913 (FIG. 19A) in the X-direction. The self-aligned process can improve (e.g., increase) memory cell density, improve the process (e.g., provide a higher process margin), or both.
[00152] FIG. 20 shows memory device 900 after dielectrics 2045 (e.g., oxide regions) are formed. Dielectrics 2045 can be concurrently formed (e.g., formed from the same process step and the same material). The material (or materials) for dielectrics 2045 can be the same as (or alternatively, different from) the material (or materials) of dielectric materials 515A, 515B, 525A, and 525B (FIG. 6A). Example materials for dielectrics 2045 can include silicon dioxide, hafnium oxide (e.g., HfO2), aluminum oxide (e.g., Al2O3), or other dielectric materials.[00153] FIG. 21 shows memory device 900 after access lines 2141 and 2142 and conductive shield structure 2161 are formed. Access lines 2141 and 2142 and conductive shield structure 2161 can be concurrently formed (e.g., formed from the same process step and the same material). As shown in FIG. 21, each of dielectric materials 2045 can be between respective memory cells and either an access line (e.g., access line 2141 or 2142) or a conductive shield structure (e.g., conductive shield structure 2161). Each of access lines 2141 and 2142 and conductive shield structure 2161 can contact a respective dielectric material 2045.[00154] Access lines 2141 and 2142 can correspond to access lines 214 and 242, respectively, of memory device 200 (FIG. 2 through FIG. 6D). Conductive shield structure 2161 can correspond to conductive shield structure 261 of memory device 200 (FIG. 2 through FIG. 6D). The processes associated with FIG. 21 can form other access lines and conductive shield structures of memory device 900 similar to or the same as the access lines and conductive shield structures of memory device 200 described above with reference to FIG. 2 to FIG. 6D.[00155] In FIG. 21, each of access lines 2141 and 2142 and conductive shield structure 2161 can include metal, conductively doped polysilicon, or other conductive materials. As shown in FIG. 21, access lines 2141 and 2142 and conductive shield structure 2161 are electrically separated from memory cells 210’, 211’, 210”, 211”, 212’, 213’, 212”, and 213” by respective dielectric materials 2045.[00156] Access line 2141 can be structured as a conductive line (e.g., conductive region) that can be used to control the read and write transistors (e.g.,
transistors T1 and T2, respectively) of respective memory cells 210’, 211’, 210”, and 211”. Access line 2142 can be structured as a conductive line (e.g., conductive region) that can be used to control the read and write transistors (e.g., transistors T1 and T2, respectively) of respective memory cells 212’, 213’, 212”, and 213”.[00157] Conductive shield structure 2161 is neither an access line (e.g., word line) of memory cells 210’, 211’, 210”, and 211” nor an access line (e.g., word line) of memory cells 212’, 213’, 212”, and 213”. Conductive shield structure 2161 can correspond to (operate in ways similar to) conductive shield structure 261 of memory device 200 (FIG. 2 through FIG. 6D).[00158] FIG. 22A shows memory device 900 after a dielectric material 2235 is formed. Dielectric material 2235 can fill the structure of memory device 900 as shown in FIG. 22A. Portion 1910A and material 1920 (e.g., read channel region and write channel region, respectively) of memory cell 210’ are exposed. Portion 1911A and material 1921 (e.g., read channel region and write channel region, respectively) of memory cell 211’ are exposed.[00159] FIG. 22B shows memory device 900 after a conductive material 2220 is formed. Conductive material 2220 can be formed (e.g., deposited) over exposed portion 1910A, material 1920, portion 1911A, and material 1921 (shown in FIG. 22A) and over other elements of memory device 900.[00160] FIG. 22C shows memory device 900 after data lines 2271, 2272, 2273, and 2274 are formed. Data lines 2271, 2272, 2273, and 2274 can correspond to data lines 221, 222, 223, and 224, respectively, of memory device 200 (FIG. 6A and FIG. 6C).[00161] Data lines 2271, 2272, 2273, and 2274 can be concurrently formed. For example, a process (e.g., patterning process) can be performed to remove a portion of conductive material 2220 (FIG. 22B). In FIG. 22C, data lines 2271, 2272, 2273, and 2274 are the remaining portion of conductive material 2220.[00162] As shown in FIG. 22C, data lines 2271, 2272, 2273, and
2274 can have a length in the Y-direction, a width in the X-direction, and a thickness in the Z-direction.[00163] The description of forming memory device 900 with reference to FIG. 9 through FIG. 22C can include other processes to form a complete memory device. Such processes are omitted from the above description so as to not obscure the subject matter described herein.[00164] The process of forming memory device 900 as described above can have a relatively reduced number of masks (e.g., reduced number of critical masks) in comparison with some conventional processes. For example, by forming trenches 1001 and 1002 in the process associated with FIG. 10A, and forming trenches 1911, 1912, and 1913 in the process of FIG. 19A, the number of critical masks used to form the memory cells of memory device 900 can be reduced. The reduced number of masks can simplify the process, reduce cost, or both, of forming memory device 900. Further, the access lines (e.g., access lines 2141 and 2142) and the conductive shield structures (e.g., conductive shield structure 2161) of memory device 900 allow it to have improvements and benefits similar to those of memory device 200 (FIG. 2 through FIG. 6D).[00165] FIG. 23A, FIG. 23B, and FIG. 23C show different views of a structure of a memory device 2300 including multiple decks of memory cells, according to some embodiments described herein. FIG. 23A shows an exploded view (e.g., in the Z-direction) of memory device 2300. FIG. 23B shows a side view (e.g., cross-sectional view) in the X-direction and the Z-direction of memory device 2300. FIG. 23C shows a side view (e.g., cross-sectional view) in the Y-direction and the Z-direction of memory device 2300.[00166] As shown in FIG. 23A, memory device 2300 can include decks (decks of memory cells) 2305₀, 2305₁, 2305₂, and 2305₃ that are shown separately from each other in an exploded view to help ease of viewing the deck structure of memory device 2300. In reality, decks 2305₀, 2305₁, 2305₂, and 2305₃ can be attached to each other in an arrangement where one deck can be formed (e.g., stacked) over another deck over a substrate (e.g., a semiconductor (e.g., silicon)
substrate) 2399. For example, as shown in FIG. 23B and FIG. 23C, decks 2305₀, 2305₁, 2305₂, and 2305₃ can be formed in the Z-direction perpendicular to substrate 2399 (e.g., formed vertically in the Z-direction with respect to substrate 2399).[00167] As shown in FIG. 23A, FIG. 23B, and FIG. 23C, each of decks 2305₀, 2305₁, 2305₂, and 2305₃ can have memory cells arranged in the X-direction and the Y-direction (e.g., arranged in rows in the X-direction and in columns in the Y-direction). For example, deck 2305₀ can include memory cells 2310₀, 2311₀, 2312₀, and 2313₀ (e.g., arranged in a row), memory cells 2320₀, 2321₀, 2322₀, and 2323₀ (e.g., arranged in a row), and memory cells 2330₀, 2331₀, 2332₀, and 2333₀ (e.g., arranged in a row).[00168] Deck 2305₁ can include memory cells 2310₁, 2311₁, 2312₁, and 2313₁ (e.g., arranged in a row), memory cells 2320₁, 2321₁, 2322₁, and 2323₁ (e.g., arranged in a row), and memory cells 2330₁, 2331₁, 2332₁, and 2333₁ (e.g., arranged in a row).[00169] Deck 2305₂ can include memory cells 2310₂, 2311₂, 2312₂, and 2313₂ (e.g., arranged in a row), memory cells 2320₂, 2321₂, 2322₂, and 2323₂ (e.g., arranged in a row), and memory cells 2330₂, 2331₂, 2332₂, and 2333₂ (e.g., arranged in a row).[00170] Deck 2305₃ can include memory cells 2310₃, 2311₃, 2312₃, and 2313₃ (e.g., arranged in a row), memory cells 2320₃, 2321₃, 2322₃, and 2323₃ (e.g., arranged in a row), and memory cells 2330₃, 2331₃, 2332₃, and 2333₃ (e.g., arranged in a row).[00171] As shown in FIG. 23A, FIG. 23B, and FIG. 23C, decks 2305₀, 2305₁, 2305₂, and 2305₃ can be located (e.g., formed vertically in the Z-direction) on levels (e.g., portions) 2350, 2351, 2352, and 2353, respectively, of memory device 2300. The arrangement of decks 2305₀, 2305₁, 2305₂, and 2305₃ forms a 3-dimensional (3-D) structure of memory cells of memory device 2300 in that different levels of the memory cells of memory device 2300 can be located (e.g., formed) in different levels (e.g., different vertical portions) 2350, 2351, 2352, and 2353 of memory device 2300.[00172] Decks 2305₀, 2305₁, 2305₂, and 2305₃ can be formed one deck at a time. For example, decks 2305₀, 2305₁, 2305₂, and 2305₃ can be formed sequentially in the order of decks 2305₀, 2305₁, 2305₂, and 2305₃ (e.g., deck 2305₀ is formed first and deck 2305₃ is formed last). In this example, the memory cells of one deck (e.g., deck 2305₁) can be formed either after formation of the memory cells of another deck (e.g., deck 2305₀) or before formation of the memory cells of another deck (e.g., deck 2305₃). Alternatively, decks 2305₀, 2305₁, 2305₂, and 2305₃ can be formed concurrently (e.g., simultaneously), such that the memory cells of decks 2305₀, 2305₁, 2305₂, and 2305₃ can be concurrently formed. For example, the memory cells in levels 2350, 2351, 2352, and 2353 of memory device 2300 can be concurrently formed.[00173] The structures of decks 2305₀, 2305₁, 2305₂, and 2305₃ can include the structures of the memory devices described above with reference to FIG. 1 through FIG. 22C. For example, memory device 2300 can include data lines (e.g., bit lines) and access lines (e.g., word lines) to access the memory cells of decks 2305₀, 2305₁, 2305₂, and 2305₃. For simplicity, data lines and access lines of memory cells are omitted from FIG. 23A. However, the data lines and access lines of memory device 2300 can be similar to the data lines and access lines, respectively, of the memory devices described above with reference to FIG. 1 through FIG. 22C.[00174] FIG. 23A, FIG. 23B, and FIG.
23C show memory device 2300 including four decks (e.g., 2305₀, 2305₁, 2305₂, and 2305₃) as an example. However, the number of decks can be different from four. FIG. 23A shows each of decks 2305₀, 2305₁, 2305₂, and 2305₃ including one level (e.g., layer) of memory cells as an example. However, at least one of the decks (e.g., one or more of decks 2305₀, 2305₁, 2305₂, and 2305₃) can have two (or more) levels of memory cells. FIG. 23A shows an example where each of decks 2305₀, 2305₁, 2305₂, and 2305₃ includes four memory cells (e.g., in a row) in the X-direction and three memory cells (e.g., in a column) in the Y-direction. However, the number of memory cells in a row, in a column, or both, can vary. Since memory device 2300 can include the structures of memory devices 200, 200E, 700, 800, and 900, memory device 2300
can also have improvements and benefits similar to those of memory devices 200, 200E, 700, 800, and 900.[00175] The illustrations of apparatuses (e.g., memory devices 100, 200, 200E, 700, 800, 900, and 2300) and methods (e.g., methods of forming memory device 900) are intended to provide a general understanding of the structure of various embodiments and are not intended to provide a complete description of all the elements and features of apparatuses that might make use of the structures described herein. An apparatus herein refers to, for example, either a device (e.g., any of memory devices 100, 200, 200E, 700, 800, 900, and 2300) or a system (e.g., an electronic item that can include any of memory devices 100, 200, 200E, 700, 800, 900, and 2300).[00176] Any of the components described above with reference to FIG. 1 through FIG. 23C can be implemented in a number of ways, including simulation via software. Thus, apparatuses (e.g., memory devices 100, 200, 200E, 700, 800, 900, and 2300), or part of each of these memory devices described above, may all be characterized as “modules” (or “module”) herein. Such modules may include hardware circuitry, single- and/or multi-processor circuits, memory circuits, software program modules and objects and/or firmware, and combinations thereof, as desired and/or as appropriate for particular implementations of various embodiments. For example, such modules may be included in a system operation simulation package, such as a software electrical signal simulation package, a power usage and ranges simulation package, a capacitance-inductance simulation package, a power/heat dissipation simulation package, a signal transmission-reception simulation package, and/or a combination of software and hardware used to operate or simulate the operation of various potential embodiments.[00177] The memory devices (e.g., memory devices 100, 200, 200E, 700, 800, 900, and 2300) described herein may be included in apparatuses (e.g., electronic circuitry) such as high-speed computers, communication and signal processing circuitry, single- or multi-processor modules, single or multiple embedded processors, multicore processors, message information switches, and
application-specific modules including multilayer, multichip modules. Such apparatuses may further be included as subcomponents within a variety of other apparatuses (e.g., electronic systems), such as televisions, cellular telephones, personal computers (e.g., laptop computers, desktop computers, handheld computers, tablet computers, etc.), workstations, radios, video players, audio players (e.g., MP3 (Motion Picture Experts Group, Audio Layer 3) players), vehicles, medical devices (e.g., heart monitor, blood pressure monitor, etc.), set top boxes, and others.[00178] The embodiments described above with reference to FIG. 1 through FIG. 23C include apparatuses and methods of forming and operating the apparatuses. One of the apparatuses includes a first memory cell including a first transistor including a first channel region and a first charge storage structure and a second transistor including a second channel region formed over the first charge storage structure; a second memory cell adjacent the first memory cell, the second memory cell including a third transistor including a third channel region and a second charge storage structure, and a fourth transistor including a fourth channel region formed over the second charge storage structure; a first access line adjacent a side of the first memory cell; a second access line adjacent a side of the second memory cell; a first dielectric material adjacent the first channel region; a second dielectric material adjacent the third channel region; and a conductive structure between the first and second dielectric materials and adjacent the first and second dielectric materials. Other embodiments, including additional apparatuses and methods, are described. [00179] In the detailed description and the claims, the term “on” used with respect to two or more elements (e.g., materials), one “on” the other, means at least some contact between the elements (e.g., between the materials). The term “over” means the elements (e.g., materials) are in close proximity, but possibly with one or more additional intervening elements (e.g., materials) such that contact is possible but not required. Neither “on” nor “over” implies any directionality as used herein unless stated as such.
[00180] In the detailed description and the claims, a list of items joined by the term “at least one of” can mean any combination of the listed items. For example, if items A and B are listed, then the phrase “at least one of A and B” means A only; B only; or A and B. In another example, if items A, B, and C are listed, then the phrase “at least one of A, B, and C” means A only; B only; C only; A and B (excluding C); A and C (excluding B); B and C (excluding A); or all of A, B, and C. Item A can include a single element or multiple elements. Item B can include a single element or multiple elements. Item C can include a single element or multiple elements.[00181] In the detailed description and the claims, a list of items joined by the term “one of” can mean only one of the list items. For example, if items A and B are listed, then the phrase “one of A and B” means A only (excluding B), or B only (excluding A). In another example, if items A, B, and C are listed, then the phrase “one of A, B, and C” means A only; B only; or C only. Item A can include a single element or multiple elements. Item B can include a single element or multiple elements. Item C can include a single element or multiple elements.[00182] The above description and the drawings illustrate some embodiments of the inventive subject matter to enable those skilled in the art to practice the embodiments of the inventive subject matter. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Examples merely typify possible variations. Portions and features of some embodiments may be included in, or substituted for, those of others. Many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. |
The present disclosure includes apparatus (e.g., computing systems, memory systems, controllers, etc.) and methods for providing data integrity. One or more methods can include, for example: receiving a number of sectors of data to be written to a number of memory devices; appending first metadata corresponding to the number of sectors and including first integrity data to the number of sectors, wherein the first metadata has a particular format; generating second integrity data to be provided in second metadata, the second integrity data corresponding to at least one of the number of sectors (wherein the second metadata has a second format); and generating third integrity data to be provided in the second metadata, the third integrity data including error data corresponding to the second integrity data and the at least one of the number of sectors. |
What is Claimed is: 1. An apparatus, comprising a controller configured to be coupled to a number of memory devices, the controller being configured to: receive a number of sectors of data to be written to the number of memory devices; append first metadata corresponding to the number of sectors and including first integrity data to the number of sectors, wherein the first metadata has a first format; generate second integrity data to be provided in second metadata having a second format, the second integrity data corresponding to at least one of the number of sectors; and generate third integrity data to be provided in the second metadata, the third integrity data including error data corresponding to the second integrity data and the at least one of the number of sectors. 2. The apparatus of claim 1, wherein the controller is configured to perform a check of the first integrity data prior to generating the second integrity data. 3. The apparatus of claim 1, wherein: the first integrity data of the first metadata corresponding to the number of sectors to be written to the memory devices includes error data on a per sector basis; and the second integrity data of the second metadata corresponding to the at least one of the number of sectors includes error data on a multi-sector basis. 4. The apparatus of claim 1, wherein the controller is configured to write the number of sectors and corresponding second metadata having the second format to the number of memory devices, and wherein the controller is further configured to: receive the number of sectors and corresponding second metadata from the number of memory devices in association with a read operation; perform a check of the error data of the third integrity data; subsequently perform a check of at least one of error data and address data of the second integrity data; and adjust the metadata format associated with the number of sectors from the second format back to the first format prior to providing the number of sectors to a host. 5. The apparatus of claim 4, wherein the controller being configured to adjust the metadata format associated with the number of sectors from the second format back to the first format prior to providing the number of sectors to the host comprises the controller being configured to: generate error data on a per sector basis for the number of sectors; and perform a check of the error data of each sector prior to providing the number of sectors to the host. 6. The apparatus of any one of claims 1 to 5, wherein the controller being configured to append the first metadata comprises the controller being configured to generate error data of the first integrity data on a per sector basis and insert the error data into the first metadata prior to generating the second integrity data in association with a write operation. 7.
The apparatus of any one of claims 1 to 5, wherein the number of sectors of data comprises a first number of sectors of data and wherein the controller is further configured to, in association with a partial page write operation: receive a second number of sectors corresponding to a page of data from the number of memory devices, the second number of sectors being associated with metadata corresponding thereto and having the second format; adjust a metadata format associated with the sectors of the page from the number of memory devices from the second format to the first format; and form a merged page by replacing at least one of the second number of sectors corresponding to the page of data with at least one of the first number of sectors. 8. The apparatus of claim 7, wherein the controller is configured to perform a check of error data of the metadata corresponding to the second number of sectors prior to replacing the at least one of the second number of sectors corresponding to the page of data with the at least one of the first number of sectors. 9. The apparatus of any one of claims 1 to 5, wherein the number of sectors comprises a first number of sectors and wherein the controller is configured to write the first number of sectors and corresponding second metadata having the second format to the number of memory devices as a page of data, and wherein the controller is further configured to, in association with a partial page read operation: receive a second number of sectors corresponding to a page of data from the number of memory devices, the second number of sectors being associated with metadata corresponding thereto and having the second format; perform a check of error data of the metadata corresponding to a selected one or more of the second number of sectors associated with the partial page read operation; adjust the metadata format associated with the selected one or more of the second number of sectors to the first format such that the metadata corresponding to the selected sectors includes integrity data on a per sector basis; and perform a check of the integrity data corresponding to the selected sectors prior to providing the selected sectors of the partial page to a host. 10. The apparatus of claim 9, wherein the controller is further configured to, in association with the partial page read operation, perform a check of integrity data of the metadata corresponding to the selected sectors received from the number of memory devices prior to performing the check of the integrity data corresponding to the selected sectors. 11. The apparatus of any one of claims 1 to 5, wherein the controller comprises a solid state drive memory controller and the number of memory devices comprises a number of solid state memory devices. 12. A method for providing data integrity, comprising: receiving a number of sectors of data to be written to a number of memory devices, the number of sectors having first metadata including first integrity data associated therewith, wherein the first metadata has a particular metadata format; generating second integrity data to be provided in second metadata having an adjusted metadata format, the second integrity data corresponding to at least one of the number of sectors; and generating third integrity data to be provided in the second metadata, the third integrity data including error data corresponding to the second integrity data and the at least one of the number of sectors. 13.
The method of claim 12, wherein the particular metadata format includes the first integrity data on a per sector basis and the adjusted metadata format includes the second integrity data on a multi-sector basis. 14. The method of claim 12, further comprising replacing the first metadata with the second metadata. 15. The method of claim 12, wherein the second metadata includes the first integrity data. 16. The method of claim 12, wherein the second metadata does not include the first integrity data. 17. The method of any one of claims 12 to 16, including writing the number of sectors and corresponding second metadata to the memory device. 18. The method of any one of claims 12 to 16, wherein the method includes, in association with a read operation associated with one or more of the number of sectors written to the memory device, performing a check of the third integrity data provided in the second metadata corresponding to the one or more sectors. 19. The method of claim 18, wherein the method includes, in association with the read operation associated with one or more of the number of sectors written to the memory device: subsequent to the check of the third integrity data, performing a check of the second integrity data including error data corresponding to the one or more of the number of sectors; and adjusting the adjusted metadata format associated with the one or more of the number of sectors back to the particular metadata format. 20. The method of claim 19, wherein adjusting the adjusted metadata format associated with the one or more of the number of sectors back to the particular metadata format includes replacing the second metadata with third metadata, the third metadata including fourth integrity data corresponding to the one or more of the number of sectors on a per sector basis. 21. The method of claim 20, wherein the method includes comparing the fourth integrity data corresponding to the one or more of the number of sectors with the first integrity data corresponding to the one or more of the number of sectors prior to providing the one or more of the number of sectors to a host. 22. The method of any one of claims 12 to 16, wherein the method includes: prior to generating the second integrity data, reading a page from the memory device, the page including multiple sectors and corresponding metadata having the adjusted metadata format; performing a check of integrity data associated with the page; adjusting the adjusted metadata format associated with the multiple sectors to the particular metadata format; and forming a write page by replacing one or more of the multiple sectors associated with the read page with one or more of the received number of sectors prior to writing the write page to the memory device. 23. The method of any one of claims 12 to 16, including appending the first integrity data of the first metadata to the number of sectors using a controller coupled to the number of memory devices. 24.
A method for providing data integrity, comprising: receiving a number of sectors of data to be written to a memory device as at least part of a page of data, wherein each of the number of sectors has first integrity data associated therewith; forming a number of groups of data associated with the page, with each of the number of groups including at least one of the number of sectors and metadata corresponding thereto, wherein: the metadata associated with at least one of the number of groups includes second integrity data corresponding to the at least one of the number of sectors of the at least one of the number of groups as well as at least one sector of the number of sectors of at least one different group of the number of groups; and the metadata associated with each of the number of groups includes third integrity data corresponding to the at least one of the number of sectors of the respective at least one of the number of groups as well as the second integrity data corresponding to the respective group itself. 25. The method of claim 24, wherein the method includes: writing the number of groups to the memory device; and performing a check of the third integrity data of the metadata associated with a particular one of the number of groups responsive to a read operation associated with the at least one sector of the particular one of the number of groups. 26. The method of claim 25, wherein the method includes, subsequent to performing the check of the third integrity data of the metadata associated with the particular one of the number of groups: performing a check of the second integrity data of the metadata associated with the particular one of the number of groups; generating, on a per sector basis, fourth integrity data corresponding to the at least one sector of the particular one of the number of groups; and replacing the metadata associated with the particular one of the number of groups with adjusted metadata corresponding to the at least one sector of the particular one of the number of groups, the adjusted metadata including the fourth integrity data. 27. The method of claim 26, wherein the method includes performing a check of the fourth integrity data of the adjusted metadata corresponding to the at least one sector of the particular one of the number of groups. 28. The method of claim 27, wherein performing the check of the fourth integrity data includes comparing the fourth integrity data of the adjusted metadata corresponding to the at least one sector of the particular one of the number of groups with the first integrity data corresponding to the at least one sector. 29. The method of claim 28, wherein the method includes removing the adjusted metadata corresponding to the at least one sector prior to providing the at least one sector to a host. 30. The method of any one of claims 24 to 29, wherein the first integrity data includes address integrity data and error data corresponding to each of the respective sectors, and wherein the method further comprises performing a check of the address integrity data and the error data for each of the number of sectors. |
APPARATUS AND METHODS FOR PROVIDING DATA INTEGRITY Technical Field [0001] The present disclosure relates generally to semiconductor memory devices, methods, and systems, and more particularly, to apparatus and methods for providing data integrity. Background [0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and includes random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable ROM (EPROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), among others. [0003] Memory devices can be combined together to form a storage volume of a memory system such as a solid state drive (SSD). A solid state drive can include non-volatile memory, e.g., NAND flash memory and NOR flash memory, and/or can include volatile memory, e.g., DRAM and SRAM, among various other types of non-volatile and volatile memory. Flash memory devices, including floating gate flash devices and charge trap flash (CTF) devices using semiconductor-oxide-nitride-oxide-semiconductor and metal-oxide-nitride-oxide-semiconductor capacitor structures that store data in charge traps in the nitride layer, may be utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption. [0004] An SSD can be used to replace hard disk drives as the main storage device for a computer, as the solid state drive can have advantages over hard drives in terms of performance, size, weight, ruggedness, operating temperature range, and power consumption. For example, SSDs can have superior performance when compared to magnetic disk drives due to their lack of moving parts, which may avoid seek time, latency, and other electro-mechanical delays associated with magnetic disk drives. SSD manufacturers can use non-volatile flash memory to create flash SSDs that may not use an internal battery supply, thus allowing the drive to be more versatile and compact. [0005] An SSD can include a number of memory devices, e.g., a number of memory chips. A memory device can include a number of dies and/or logical units (LUNs). Each die can include a number of memory arrays and peripheral circuitry thereon, and the memory arrays can include a number of blocks of memory cells organized into a number of physical pages. [0006] An SSD can receive commands from a host in association with memory operations such as read and write operations to transfer data (e.g., user data and associated integrity data such as error data and address data, etc.) between the memory devices and the host. It can be beneficial to provide end-to-end integrity of the data transferred between the memory devices and the host in association with such operations in order to provide confidence that the data has not been corrupted during the transfer, for instance.
Brief Description of the Drawings [0007] Figure 1 is a functional block diagram of a computing system including at least one memory system in accordance with one or more embodiments of the present disclosure. [0008] Figure 2 is a functional block diagram of a memory controller associated with providing integrity of transferred data in accordance with one or more embodiments of the present disclosure. [0009] Figure 3 illustrates a metadata format associated with providing integrity of transferred data in accordance with one or more embodiments of the present disclosure. [0010] Figure 4 illustrates a metadata format associated with providing integrity of transferred data in accordance with one or more embodiments of the present disclosure. [0011] Figure 5 illustrates a metadata format associated with providing integrity of transferred data in accordance with one or more embodiments of the present disclosure. [0012] Figure 6 is a functional block diagram of a memory controller in accordance with one or more embodiments of the present disclosure. Detailed Description [0013] The present disclosure includes apparatus (e.g., computing systems, memory systems, controllers, etc.) and methods for providing data integrity. One or more method embodiments can include, for example: receiving a number of sectors of data to be written to a number of memory devices (e.g., a single memory device); appending first metadata corresponding to the number of sectors and including first integrity data to the number of sectors, wherein the first metadata has a particular format; generating second integrity data to be provided in second metadata, the second integrity data corresponding to at least one of the number of sectors (wherein the second metadata has a second format); and generating third integrity data to be provided in the second metadata, the third integrity data including error data corresponding to the second integrity data and the at least one of the number of sectors. [0014] One or more embodiments of the present disclosure provide a flexible architecture for providing end-to-end data integrity within a memory system, for example. For instance, one or more embodiments can perform a metadata format conversion, which can provide the ability to adapt to different page sizes and/or available metadata sizes associated with different types of memory devices, among other benefits. One or more embodiments include a controller that can perform error recovery operations while maintaining data integrity in accordance with embodiments described herein, which can provide benefits such as reducing the amount of device manufacturing testing that is done (e.g., testing performed pre-shipping prior to being provided to a consumer in the field), among other benefits. [0015] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure.
As used herein, the designator "N," particularly with respect to reference numerals in the drawings, indicates that a number of the particular feature so designated can be included with one or more embodiments of the present disclosure. As used herein, "a number of" something can refer to one or more of such things (e.g., a number of memory devices can refer to one or more memory devices). [0016] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 108 may reference element "08" in Figure 1, and a similar element may be referenced as 208 in Figure 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, as will be appreciated, the proportion and the relative scale of the elements provided in the figures are intended to illustrate the embodiments of the present invention, and should not be taken in a limiting sense. [0017] Figure 1 is a functional block diagram of a computing system 100 including at least one memory system 104 in accordance with one or more embodiments of the present disclosure. The memory system 104 can be a solid state drive (SSD), for instance, and can include a physical host interface 106, a memory system controller 108 (e.g., a processor and/or other control circuitry), and one or more memory devices 110-1, . . ., 110-N (e.g., solid state memory devices such as NAND flash devices), which provide a storage volume for the memory system 104. [0018] As illustrated in Figure 1, the physical host interface 106 is coupled to the controller 108 and can be used to communicate data between the memory system 104 and a host 102. The interface 106 can be in the form of a standardized interface. For example, when the memory system 104 is used for data storage in a computing system 100, the physical host interface 106 can be a serial advanced technology attachment (SATA), peripheral component interconnect express (PCIe), or a universal serial bus (USB), among other connectors and interfaces. In general, however, physical host interface 106 can provide an interface for passing control, address, data, and other signals between the memory system 104 and a host 102 having compatible receptors for the physical host interface 106. [0019] Host 102 can be a host system such as a personal laptop computer, a desktop computer, a digital camera, a mobile telephone, or a memory card reader, among various other types of hosts. Host 102 can include a system motherboard and/or backplane and can include a memory access device (e.g., a number of processors). [0020] The controller 108 can communicate with the memory devices 110-1, . . ., 110-N to read, write, and erase data, among other operations. The controller 108 can be, for example, circuitry and/or firmware that may include a number of components (e.g., one or more integrated circuits). For example, the controller 108 can include control circuitry for controlling access across the memory devices 110-1, . . ., 110-N and circuitry for providing a translation layer between host 102 and the memory system 104. Thus, the memory controller 108 could selectively couple an I/O connection (not shown in Figure 1) of a memory device 110-1, . .
., 110-N to receive the appropriate signal at the appropriate I/O connection at the appropriate time. Similarly, the communication protocol between the host 102 and the memory system 104 may be different than what is used to access a memory device 110-1, . . ., 110-N. Controller 108 could then translate the commands received from the host 102 into the appropriate commands to achieve the desired access to the number of memory devices 110-1, . . ., 110-N. [0021] The memory devices 110-1, . . ., 110-N can include one or more arrays of memory cells (e.g., non-volatile memory cells). The arrays can be flash arrays with a NAND architecture, for example. However, embodiments are not limited to a particular type of memory array or array architecture. [0022] The memory devices 110-1, . . ., 110-N can include a number of memory cells that can be grouped, for instance, into a number of blocks including a number of physical pages. A number of blocks can be included in a plane of memory cells and an array can include a number of planes. As one example, a memory device may include 4320 bytes (B) of data per page, 128 pages per block, 2048 blocks per plane, and 16 planes per device. [0023] In operation, data can be written to and/or read from a memory device of a memory system (e.g., memory devices 110-1, . . ., 110-N of system 104) as a page of data. As such, a page of data can be referred to as a data transfer size of the memory system. Data can be transferred to/from a host (e.g., host 102) on a sector basis. As such, a sector of data can be referred to as a data transfer size of the host. [0024] Although a page of data can include a number of bytes of user data (e.g., a data payload including a number of sectors of data) as well as metadata corresponding thereto, a size of a page of data often can refer only to the number of bytes used to store the user data. As an example, a page of data having a page size of 4KB may include 4KB used to store user data (e.g., 8 sectors assuming a sector size of 512B) as well as a number of bytes (e.g., 32B, 54B, 224B, etc.) used to store metadata corresponding to the user data. The metadata can include integrity data such as error data (e.g., error detecting and/or correcting code data) and/or address data (e.g., logical address data), among other metadata corresponding to the user data. [0025] Different types of memory devices (e.g., devices 110-1, . . ., 110-N) can provide different page sizes and/or may have different amounts of metadata bytes available in association with a stored page. Also, different memory device types can have different bit error rates, which can lead to different amounts of metadata necessary to ensure integrity of the page of data (e.g., a memory device with a higher bit error rate may require more bytes of error correction code data than a memory device with a lower bit error rate). As an example, a multi-level cell (MLC) NAND flash device may have a higher bit error rate than a single level cell (SLC) NAND flash device. As such, the MLC device may use more metadata bytes for error data than the SLC device. In some instances, the amount of metadata necessary to provide a desired integrity of a page of data may exceed the metadata bytes provided by a memory device. That is, the available amount of metadata bytes may be less than the amount desirable to provide adequate end-to-end data integrity of the sectors (e.g., user data) corresponding to the page.
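As a rough editorial illustration of the page geometry just described (not part of the original disclosure), the following Python sketch works out how many 512B host sectors fit in the example 4KB page and whether a device's available metadata bytes can hold a desired amount of integrity data; the constants and function names are assumptions chosen for this example, not terms used by the disclosure.

SECTOR_SIZE = 512        # example host sector size, in bytes
PAGE_USER_BYTES = 4096   # example user-data portion of a page (4KB)

def sectors_per_page(page_user_bytes=PAGE_USER_BYTES, sector_size=SECTOR_SIZE):
    # Number of host sectors in the user-data portion of one page.
    return page_user_bytes // sector_size

def metadata_fits(available_bytes, desired_bytes):
    # True when a device's spare area can hold the desired integrity data.
    # As noted above, a device with a higher bit error rate (e.g., MLC NAND)
    # may need more error data than its spare area provides.
    return desired_bytes <= available_bytes

assert sectors_per_page() == 8       # eight 512B sectors per 4KB page
assert metadata_fits(224, 54)        # 54B desired, 224B available
assert not metadata_fits(32, 54)     # 54B desired, only 32B available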
[0026] Figure 2 is a functional block diagram of a memory controller 208 associated with providing integrity of transferred data in accordance with one or more embodiments of the present disclosure. The controller 208 can be a component of a memory system such as memory system 104 illustrated in Figure 1. It will be appreciated by one of ordinary skill in the art that additional circuitry and components can be provided beyond those illustrated in Figure 2, and that the detail of Figure 2 has been reduced to facilitate ease of illustration. Embodiments are not limited to the example illustrated in Figure 2. For instance, one or more of the illustrated components of controller 208 can be integrated as a single component. Also, one or more of the illustrated components can be optional in a number of embodiments of the present disclosure. [0027] As shown in Figure 2, memory controller 208 includes control circuitry used to transfer data between a host interface 206 and a number of memory devices 210. The memory devices 210 can be memory devices such as memory devices 110-1, . . ., 110-N illustrated in Figure 1. [0028] In a number of embodiments, the control circuitry is configured to adjust (e.g., change) a metadata format between a first format and a second format to provide integrity of the data transferred between the host interface 206 and the number of memory devices 210. As an example, a first metadata format can includeintegrity data on a per sector basis and a second metadata format can include integrity data on a multi-sector basis. As described further below, providing integrity of the transferred data can include: performing a check of first integrity data (e.g., error data and/or address data) of metadata having a first format (e.g., metadata 341 shown in Figure 3) and corresponding to a number of sectors (e.g., a sector located in data field 348 shown in Figure 3) received from a host (e.g., in connection with a write request from the host); generating second integrity data (e.g., integrity data 454-0/454-1 shown in Figure 4 and/or 552/554 shown in Figure 5) of second metadata (e.g., metadata 447 shown in Figure 4 and/or metadata 547- 0/547-1 shown in Figure 5), the second integrity data corresponding to at least one of the number of sectors; and generating third integrity data (e.g., integrity data 458 shown in Figure 4 and/or 558-0/558-1 shown in Figure 5) of the second metadata. The third integrity data includes error data corresponding to the second integrity data as well as the at least one of the number of sectors. [0029] The controller 208 can write a number of sectors of data (e.g., user data) received from the host (e.g., responsive to a write request from the host) as a page. In various instances, the number of sectors received by the controller 208 may be fewer than the number of sectors corresponding to a page size of the number of memory devices 210. In such instances, the controller 208 can perform a partial page write operation, which can include, for example, forming a merged page by merging the sectors received from the host with sectors of a page read from the memory device 210 (e.g., by replacing sectors of the read page with the sectors received from the host), and then writing the merged page to the memory device 210. Similarly, the controller 208 can perform a partial page read operation (e.g., in response to a host read request for a number of sectors that is less than the number of sectors corresponding to the page size of the memory device 210). 
A partial page read operation can include reading a page of data from the memory device 210 and providing only those data sectors requested to the host. As described further below, embodiments of the present disclosure can provide integrity of transferred data in association with such partial page write and/or read operations.[0030] As illustrated in Figure 2, the controller 208 can include a data integrity (DI) insertion component 220. The component 220 can append metadata having a particular metadata format to sectors of data received from a host. The particular metadata format can be referred to as a host format. An example host metadata format is illustrated in Figure 3, which illustrates a sector of user data located in data field 348 (which may be referred to as sector field 348) and having metadata 341 appended thereto. In the example shown in Figure 3, the size of the sector field 348 is 512B (e.g., the sector size is 512B) and metadata 341 is an 8B data integrity field (DIF) including subfields 342, 344, and 346 providing integrity data associated with the sector corresponding to sector field 348. Embodiments are not limited to this example. For instance, the size of sector field 348 can be larger or smaller than 512B and the size of metadata field 341 can be larger or smaller than 8B. As one example, a host sector size may be 4096B such that size of data field 348 is 4096 bytes and the size of metadata field 341 can be 128B (e.g., each host sector can have 128B of metadata associated therewith). In the example illustrated, field 346 includes error data corresponding to the sector of user data corresponding to sector field 348. In this example, the error data of field 346 is a 2B cyclic redundancy check (CRC) corresponding to the sector corresponding to sector field 348. Field 344 is a 2B application tag that can include further integrity data, and field 342 is a 4B reference field that can include address data associated with the sector corresponding to sector field 348. [0031] In a number of embodiments, metadata 341 is appended to each sector of user data received from the host (e.g., the integrity data of metadata 341 is on a per sector basis). As an example, component 220 can calculate error data 346 for each sector on a per sector basis. Each of a number of sectors and corresponding metadata 341 (e.g., each of a number of data groups 340) can be transferred to an integrity component 224 of controller 208 (e.g., via a data buffer 222 in association with a write operation). In a number of embodiments, the metadata 341 associated with each data group 340 can originate from the host interface 206 and component 220 can be used to insert additional integrity data such as that illustrated in Figure 4 (e.g., 452, 454-0, 454-1, etc.). In a number of embodiments, the data associatedwith fields 342, 344, and 346 of metadata 341 can be appended to the sectors of data (e.g., using component 220) received from a host or the metadata 341 can be appended to the sectors of data prior to being received by controller 208. [0032] In the example illustrated in Figure 2, the controller 208 includes a merge component 228 to which the data groups 340 can be transferred, prior to being transferred to integrity component 224 (e.g., if the number of sectors is less than the number of sectors corresponding to a full page such that the write operation is a partial page write operation). 
In the example illustrated in Figure 2, a page size associated with memory devices 210 is 4KB (e.g., eight 512B sectors).

[0033] The merge component 228 can, in association with a partial page write operation, receive a second number of sectors (e.g., eight sectors) corresponding to a page of data from the number of memory devices 210. The second number of sectors can include metadata corresponding thereto in accordance with a second format (e.g., a format associated with the memory device 210 and which can be referred to herein as a memory metadata format or an adjusted format). Example adjusted formats are described further below in connection with Figures 4 and 5. In a number of embodiments, the merge component 228 can form a merged page by replacing at least one of the second number of sectors corresponding to the page of data received from the memory devices 210 with at least one of the first number of sectors received from the host (e.g., sectors corresponding to sector field 348). The merge component 228 can also change a metadata format associated with the sectors of the merged page from the adjusted format to the host format prior to providing the sectors of the merged page to the integrity component 224. In a number of embodiments, the merge component 228 is configured to perform a check of error data of the metadata corresponding to the second number of sectors (e.g., a CRC check) prior to replacing the at least one of the second number of sectors corresponding to the page of data with the at least one of the first number of sectors corresponding to sector field 348.

[0034] The integrity component 224 is configured to perform a check of integrity data of the first metadata 341 corresponding to the number of sectors to be written to device 210 as a page. For instance, the integrity component 224 can, for each of the sectors, calculate error data (e.g., a CRC) and compare it to the error data 346 of metadata 341, which can provide an indication of the integrity of the data as transferred from component 220 to component 224. The integrity component 224 can include a CRC engine and/or an error correction code (ECC) engine, among other circuitry configured to perform integrity checks of received data.

[0035] The integrity component 224 can also generate second metadata (e.g., metadata 447 shown in Figure 4 and/or metadata 547-0/547-1 shown in Figure 5) having a second metadata format (e.g., a memory metadata format such as those illustrated in Figures 4 and 5 and associated with the memory devices 210). That is, component 224 changes the metadata format associated with the first number of sectors (e.g., the number of sectors corresponding to sector field 348) to an adjusted metadata format. The second metadata format can be a format such as those illustrated in Figures 4 and 5 and can depend on the particular characteristics of memory device 210 (e.g., device type, error detection and/or correction characteristics, page size, and/or amount of metadata bytes available), among other characteristics. The second metadata format can include second integrity data (e.g., integrity data 454-0/454-1 shown in Figure 4 and/or 552/554 shown in Figure 5) corresponding to at least one of the first number of sectors. The second integrity data can include error data and/or address data in addition to, or as a replacement for, the first integrity data of the first metadata (e.g., 341).
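Before continuing with the second metadata format, the merge step of [0033] can be sketched as follows; the eight-sector page size follows the 4KB example above, while the function shape and the by-index keying of host sectors are assumptions:

    def merge_partial_page(device_page: list, host_sectors: dict) -> list:
        # Partial page write merge per [0033]: replace sectors of the page
        # read from the device with host-supplied sectors, keyed by sector
        # index. device_page holds eight 512B sectors already converted to
        # the host format; error data of the read page is assumed to have
        # been checked (e.g., via a CRC check) before replacement.
        assert len(device_page) == 8
        merged = list(device_page)
        for index, sector in host_sectors.items():
            merged[index] = sector
        return merged

The merged page would then flow to integrity component 224 in the host format, as described above.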
In a number of embodiments, the second metadata can also include third integrity data, which can include error data corresponding to the second integrity data as well as the at least one of the first number of sectors. For instance, the third integrity data can be an error correcting code such as a BCH error correcting code (ECC) that can cover (e.g., protect) one or more sectors of user data as well as the second integrity data corresponding thereto. The integrity component 224 can, in association with changing the metadata format associated with the user data to be written to the memory devices 210, generate the second integrity data of the second metadata.

[0036] In a number of embodiments, the controller 208 includes a component 226 configured to generate the third integrity data of the second metadata. In the example illustrated in Figure 2, the component 226 is configured to generate the third integrity data in the form of a BCHECC code corresponding to the second integrity data of the second metadata, as well as the at least one of the number of sectors of user data. The component 226 can include an ECC engine or other component suitable for generating error data.

[0037] The controller 208 is configured to write the number of sectors and corresponding second metadata to the memory device 210 (e.g., as a page of data). As illustrated in Figure 2, the controller 208 can include a component 230 configured to, in association with a read operation associated with one or more of the number of sectors written to the memory device, perform a check of the third integrity data associated with the second metadata corresponding to the one or more sectors (e.g., a check of the BCHECC corresponding to the one or more sectors and to the second integrity data corresponding thereto to detect the existence of erroneous data). The component 230 can also correct a number of errors (if any) associated with the one or more sectors and/or second integrity data corresponding thereto. The checked (and possibly corrected) sectors and corresponding second integrity data having the adjusted metadata format can then be provided to a second integrity component 234 of the controller 208. If the read operation is a full page read operation, the data received by component 230 can be transferred directly to the integrity component 234 via a buffer 232. If the read operation is a partial page read operation, the data received by component 230 can be transferred to the merge component 228 prior to being transferred to the integrity component 234 via buffer 232. In this example, a partial page read operation would be a read operation in which the number of 512B sectors requested by the host is less than eight sectors since the page size is 4KB (e.g., each full 4KB page includes eight 512B sectors). The component 230 can include an ECC engine or other component suitable for detecting and/or correcting errors in data.

[0038] The integrity component 234 can perform a check of the second integrity data of the second metadata corresponding to received user data sectors read from the memory devices 210. As an example, the second integrity data can include error data in the form of a CRC corresponding to one or more sectors.
The integrity component 234 can generate a CRC for the one or more sectors that can be compared to the CRC of the second integrity data (e.g., second integrity data which was previously generated by integrity component 224 and inserted in the second metadata prior to the one or more sectors being written to memory devices 210). As described further below, the second integrity information can include error data (e.g., a CRC) on a multi-sector basis.

[0039] The integrity component 234 can also adjust (e.g., convert) the metadata format associated with the sectors read from the memory devices 210 from the adjusted metadata format back to the host format (e.g., the format shown in Figure 3). Converting the adjusted metadata format back to the host format can include replacing the second metadata corresponding to the number of sectors read from the memory devices 210 with third metadata. The third metadata can include fourth integrity data, such as error data (e.g., a CRC) corresponding to the number of sectors, which the integrity component can generate on a per sector basis. The third metadata can have a format such as that associated with metadata 341 such that the fourth integrity data can be calculated and included in a data field such as field 346 illustrated in Figure 3.

[0040] In a number of embodiments, the second integrity component 234 can provide the number of sectors having the host format to a component 236, which can perform a check of the fourth integrity data (e.g., the CRC generated by component 234) corresponding to the number of sectors prior to providing the requested number of sectors to the host. In this manner, the integrity of the data received from the host (e.g., as a number of sectors corresponding to sector field 348) and written to the memory devices 210 can be confirmed in association with a read request associated with providing the number of sectors back to the host. In a number of embodiments, the component 236 can remove (e.g., strip) the metadata corresponding to the sectors received thereto prior to providing the sectors associated with the read request back to the host via the host interface 206. However, embodiments are not so limited. For instance, in a number of embodiments, one or more portions of the metadata (e.g., integrity data) corresponding to the host format (e.g., the format shown in Figure 3) and received by component 236 can be sent to the host (e.g., via host interface 206) without being removed. The unremoved integrity data may be rechecked by the host interface 206 or by the host itself (e.g., at the application level).

[0041] The controller 208 illustrated in Figure 2 can adjust a metadata format associated with a number of sectors between different formats while maintaining the ability to provide data integrity associated with the number of sectors as well as the metadata corresponding thereto. As described further below, the adjusted metadata format can depend on the type of memory device (e.g., memory device 210) to which the sectors will be written and/or from which they will be read.

[0042] In a number of embodiments, data stored in the memory devices 210 (e.g., in accordance with an adjusted metadata format such as that illustrated in Figures 4 and 5) can be transferred to/from the memory devices 210 in association with operations (e.g., read and/or write operations) that do not originate from a host.
For instance, a memory management component (e.g., 613 shown in Figure 6) associated with controller 208 can write data to and/or read data from the memory devices 210 in association with a data reclamation process. In such instances, components such as integrity component 224 and/or merge component 228 may check the integrity data associated with the adjusted metadata format (e.g., instead of first converting the metadata from the adjusted metadata format back to the host format and then checking the integrity data associated with the host format).

[0043] Figure 4 illustrates a metadata format associated with providing integrity of transferred data in accordance with one or more embodiments of the present disclosure. The metadata format illustrated in Figure 4 is a metadata format associated with a particular memory device (e.g., memory device 210 shown in Figure 2). As one example, the metadata format illustrated in Figure 4 can be associated with an SLC NAND memory device.

[0044] The metadata format illustrated in Figure 4 is an adjusted metadata format as compared to the host metadata format illustrated in Figure 3, for example. Figure 4 illustrates a data group 460 that can be formed, for instance, by integrity component 224 of controller 208 described above in connection with Figure 2. The data group 460 can be one of multiple data groups that each include at least one of a number of sectors received from a host in connection with a write operation, for example, along with metadata corresponding thereto. The data group 460 can be referred to herein as a codeword 460.

[0045] In the example illustrated in Figure 4, the codeword 460 includes a payload portion 445, which includes two sector data fields 448-0 and 448-1. In this example, the codeword 460 includes two 512B sectors (e.g., USER SECTOR0 corresponding to field 448-0 and USER SECTOR1 corresponding to field 448-1), as well as metadata 447. Although not illustrated in Figure 4, the reader will appreciate that a 4KB page of data can include four codewords 460 (e.g., with a first additional codeword 460 including a third and fourth user sector, a second additional codeword 460 including a fifth and sixth user sector, and a third additional codeword 460 including a seventh and eighth user sector, along with respective corresponding metadata 447). In this example, the metadata 447 includes 56B of data. As such, each codeword 460 includes a 1KB payload (e.g., two 512B sectors) and 56B of available metadata. Therefore, writing a page of data to the memory devices can include writing 4320B of data (e.g., 4KB of user data corresponding to a 4KB page size, as well as 224B of metadata).

[0046] The metadata 447 associated with codeword 460 includes a number of data fields. The arrows illustrated in Figure 4 are used to indicate which portions of data are covered by particular integrity data fields. In this example, field 450-0 is an integrity data field that includes the 8B DIF data associated with the user sector of field 448-0 (e.g., SECTOR0 DIF). That is, field 450-0 can include the metadata (e.g., 341), which was appended to SECTOR0 (e.g., via insertion component 220 shown in Figure 2) prior to SECTOR0 being provided to the integrity component 224. Similarly, field 450-1 is an integrity data field that includes the 8B DIF data associated with the user sector of field 448-1 (e.g., SECTOR1 DIF).
That is, field 450-1 can include the metadata (e.g., 341), which was appended to SECTOR1 (e.g., via insertion component 220 shown in Figure 2) prior to SECTOR1 being provided to the integrity component 224.

[0047] Data field 452 of metadata 447 is a 4B data integrity field that includes address data associated with a page of data to which codeword 460 corresponds. The address data (shown as FLASH LBA) can be a logical block address (LBA) associated with the logical page of data to which the user data sectors of fields 448-0 and 448-1 correspond. The field 452 of different codewords 460 corresponding to the same page can be checked to determine whether the integrity of the address data is maintained across the number of codewords corresponding to the page.

[0048] Data field 454-0 of metadata 447 is a 2B error data field. In this example, the error data corresponding to field 454-0 (shown as FLASH CRC SECTOR0) is a CRC covering the data corresponding to fields 448-0, 450-0, and 452 (e.g., a CRC covering the 512B of USER SECTOR0, the 8B SECTOR0 DIF, and the 4B FLASH LBA of the page to which SECTOR0 corresponds). Similarly, data field 454-1 of metadata 447 is a 2B error data field. In this example, the error data corresponding to field 454-1 (shown as FLASH CRC SECTOR1) is a CRC covering the data corresponding to fields 448-1, 450-1, and 452 (e.g., a CRC covering the 512B of USER SECTOR1, the 8B SECTOR1 DIF, and the 4B FLASH LBA of the page to which SECTOR1 corresponds). The data fields 452, 454-0, and 454-1 of the adjusted metadata format illustrated in Figure 4 can include data referred to as second integrity data as described above in connection with Figure 2.

[0049] Data field 456 of metadata 447 is a 4B reserved data field, in this example. The field 456 may be used for various purposes in association with data transfer within a memory system (e.g., memory system 104 shown in Figure 1). For instance, the field 456 can include additional error data and/or other integrity data associated with payload 445. However, the field 456 may be used for purposes other than data integrity.

[0050] Data field 458 of metadata 447 is a 28B error data field. In this example, the error data corresponding to field 458 (shown as BCHECC16) is a 16 bit error correcting code (ECC) covering the 1024B of user data corresponding to payload 445 (e.g., USER SECTOR0 and USER SECTOR1) as well as the 56B of metadata 447 associated therewith. As such, the error data corresponding to field 458 supports 16 bit correction per codeword 460 (e.g., per 1080B). Embodiments are not limited to this example. The data field 458 of the adjusted metadata format illustrated in Figure 4 can be referred to as third integrity data as described above in connection with Figure 2.

[0051] The adjusted metadata format illustrated in Figure 4 can be implemented via a controller and can be used to provide data integrity in association with operations such as write and read requests from a host as described above in connection with Figure 2. For instance, the integrity component 224 can convert a metadata format associated with sectors received from a host from a first format (e.g., a host format such as that shown in Figure 3) to a second format (e.g., an adjusted format such as that shown in Figure 4).
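A sketch of assembling one Figure-4-style codeword from the fields described above, using the hypothetical crc16() helper from the earlier sketch; the byte order, the packing sequence, and the bch_encode() hook (which would return the 28B BCHECC16 field) are assumptions:

    import struct

    def build_slc_codeword(sec0, sec1, dif0, dif1, flash_lba, bch_encode):
        # Codeword 460: 1024B payload (448-0, 448-1) plus 56B metadata 447.
        assert len(sec0) == len(sec1) == 512 and len(dif0) == len(dif1) == 8
        lba = struct.pack(">I", flash_lba)                  # field 452, FLASH LBA
        crc0 = struct.pack(">H", crc16(sec0 + dif0 + lba))  # field 454-0: covers 448-0, 450-0, 452
        crc1 = struct.pack(">H", crc16(sec1 + dif1 + lba))  # field 454-1: covers 448-1, 450-1, 452
        body = sec0 + sec1 + dif0 + dif1 + lba + crc0 + crc1 + bytes(4)  # bytes(4): reserved field 456
        ecc = bch_encode(body)                              # field 458: 28B BCHECC16 over payload and metadata
        codeword = body + ecc
        assert len(codeword) == 1080                        # two 512B sectors plus 56B of metadata
        return codeword

Four such codewords would make up one 4KB page (4320B written in total), per [0045].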
Codewords (e.g., 460) written to the memory devices 210 can include an error data field (e.g., 458) that includes error data generated by component 226 that covers the other metadata (e.g., the data of metadata 447 corresponding to data fields other than field 458) and payload (e.g., 445) of the codewords. The error data covering the other metadata and payload can be checked via component 230 when the codewords are read from the memory devices 210 (e.g., in response to a read request from a host). Whether the read request is a partial page read request or a full page read request, the controller 208 is configured to convert the metadata format associated with the codewords read from the device from the adjusted metadata format back to the host format prior to delivering the requested sectors to the host, while maintaining data integrity of the sectors of data.

[0052] Figure 5 illustrates a metadata format associated with providing integrity of transferred data in accordance with one or more embodiments of the present disclosure. The metadata format illustrated in Figure 5 is a metadata format associated with a particular memory device (e.g., memory device 210 shown in Figure 2). As one example, the metadata format illustrated in Figure 5 can be associated with an MLC NAND memory device.

[0053] The metadata format illustrated in Figure 5 is an adjusted metadata format as compared to the host metadata format illustrated in Figure 3, for example. Figure 5 illustrates a first data group 560-0 and a second data group 560-1 that can be formed, for instance, by integrity component 224 and component 226 of controller 208 described above in connection with Figure 2. The data groups 560-0 and 560-1 can be referred to as codewords 560-0 and 560-1.

[0054] In the example illustrated in Figure 5, the codewords 560-0 and 560-1 include respective payload portions 545-0 and 545-1, as well as respective metadata 547-0 and 547-1. In this example, each of the codewords 560-0 and 560-1 includes two sector data fields (e.g., codeword 560-0 includes sector fields 548-0 and 548-1 and codeword 560-1 includes sector fields 548-2 and 548-3). The codeword 560-0 includes two 512B sectors (e.g., USER SECTOR0 corresponding to field 548-0 and USER SECTOR1 corresponding to field 548-1). The codeword 560-1 includes two 512B sectors (e.g., USER SECTOR2 corresponding to field 548-2 and USER SECTOR3 corresponding to field 548-3). Although not illustrated in Figure 5, the reader will appreciate that a 4KB page of data can include four codewords (e.g., an additional codeword 560-0 including a fifth and sixth user sector and an additional codeword 560-1 including a seventh and eighth user sector, along with corresponding metadata 547-0 and 547-1, respectively). In this example, the metadata 547-0/547-1 includes 56B of data. As such, each codeword includes a 1024B payload (e.g., two 512B sectors) and 56B of available metadata. Therefore, writing a page of data to the memory devices can include writing 4320B of data (e.g., 4KB of user data corresponding to a 4KB page size, as well as 224B of metadata).

[0055] The metadata 547-0 associated with codeword 560-0 and the metadata 547-1 associated with codeword 560-1 each include a number of data fields.
In the example illustrated in Figure 5, the adjusted metadata format is such that the data fields associated with metadata 547-0 of codeword 560-0 are different than the data fields associated with metadata 547-1 of codeword 560-1 (e.g., the 56B of available metadata are used differently). Similar to Figure 4, the arrows illustrated in Figure 5 are used to indicate which portions of data are covered by particular integrity data fields.

[0056] Data field 552 of metadata 547-0 is a 4B data integrity field that includes address data associated with a page of data to which codewords 560-0 and 560-1 correspond. The address data (shown as FLASH LBA) can be a logical block address (LBA) associated with the logical page of data to which the user data sectors of fields 548-0, 548-1, 548-2, and 548-3 correspond. The field 552 of different codewords 560-0 corresponding to the same page can be checked to determine whether the integrity of the address data is maintained across the number of codewords corresponding to the page.

[0057] Data field 554 of metadata 547-1 is a 4B error data field. In this example, the error data corresponding to field 554 (shown as FLASH CRC) is a CRC (e.g., a 32 bit CRC) covering the data corresponding to fields 548-0, 548-1, 548-2, 548-3, and 552 (e.g., a CRC covering the four 512B sectors USER SECTOR0, USER SECTOR1, USER SECTOR2, and USER SECTOR3, as well as the 4B FLASH LBA of the page to which the user sectors correspond). The data fields 552 and 554 of the adjusted metadata format illustrated in Figure 5 can include data referred to as second integrity data as described above in connection with Figure 2. In this example, the metadata 547-1 associated with codeword 560-1 includes second integrity data (e.g., error data in the form of a CRC corresponding to data integrity field 554) corresponding to sectors associated with codeword 560-1 (e.g., USER SECTOR2 and USER SECTOR3) as well as to sectors associated with codeword 560-0 (e.g., USER SECTOR0 and USER SECTOR1). The metadata 547-0 associated with codeword 560-0 also includes second integrity data (e.g., address data corresponding to data integrity field 552) corresponding to sectors associated with different codewords. That is, the second integrity data 552 of metadata 547-0 corresponds to sectors within the codeword 560-0 as well as to sectors within a different codeword (e.g., 560-1). As such, the adjusted metadata format associated with codewords 560-0/560-1 includes second integrity data (e.g., integrity data corresponding to fields 552 and 554) corresponding to multiple sectors (e.g., sectors 0 through 3 in this example).

[0058] Data fields 558-0 of metadata 547-0 and 558-1 of metadata 547-1 are each a 52B error data field. In this example, the error data corresponding to fields 558-0 and 558-1 (shown as BCHECC29) is a 29 bit error correcting code (ECC) covering the 1024B of user data corresponding to payloads 545-0 (e.g., USER SECTOR0 and USER SECTOR1) and 545-1 (e.g., USER SECTOR2 and USER SECTOR3), respectively, as well as the respective 56B of metadata 547-0 and 547-1 associated therewith. As such, the error data corresponding to fields 558-0 and 558-1 supports 29 bit correction per codeword (e.g., per 1080B). Embodiments are not limited to this example. The data fields 558-0 and 558-1 of the adjusted metadata format illustrated in Figure 5 can be referred to as third integrity data as described above in connection with Figure 2.
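A sketch of the multi-sector second integrity data just described, spanning the two codewords of Figure 5; zlib.crc32 stands in for the unspecified 32 bit CRC, the remaining metadata bytes (including the 52B BCHECC29 fields 558-0/558-1) are omitted, and the function shape is an assumption:

    import struct
    import zlib

    def build_mlc_second_integrity(sectors, flash_lba):
        # Second integrity data per [0056]/[0057]: metadata 547-0 carries
        # the 4B FLASH LBA (field 552) and metadata 547-1 carries a 4B CRC
        # (field 554) covering all four sectors plus that LBA.
        assert len(sectors) == 4 and all(len(s) == 512 for s in sectors)
        lba = struct.pack(">I", flash_lba)
        crc = zlib.crc32(b"".join(sectors) + lba) & 0xFFFFFFFF
        field_552 = lba                     # placed in metadata 547-0
        field_554 = struct.pack(">I", crc)  # placed in metadata 547-1
        return field_552, field_554

Because field 554 covers sectors of both codewords, a single check of it verifies all four sectors of the codeword pair at once.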
[0059] The codewords 560-0 and 560-1 can be written to a memory device (e.g., memory device 210) along with additional codewords corresponding to the page of data. The third integrity data of metadata 547-0 and 547-1 (e.g., the ECC codes of integrity data fields 558-0 and 558-1) can be checked responsive to a read operation, for instance. The check of the third integrity data can be performed by a component such as component 230 shown in Figure 2. In a number of embodiments, subsequent to performing the check of the third integrity data, a check of second integrity data of the metadata 547-0 and 547-1 (e.g., the CRC of data integrity field 554) can be performed (e.g., by integrity component 234). In a number of embodiments, the integrity component 234 can generate, on a per sector basis, fourth integrity data corresponding to at least one sector of a particular one of the number of codewords (e.g., USER SECTOR0 of codeword 560-0). As described above, the component 234 can replace the metadata associated with the particular one of the number of codewords (e.g., metadata 547-0 associated with codeword 560-0) with adjusted metadata (e.g., metadata having a host format such as metadata 341 shown in Figure 3) corresponding to the at least one sector (e.g., USER SECTOR0) and including the fourth integrity data (e.g., error data such as a CRC corresponding to USER SECTOR0).

[0060] In a number of embodiments, a component such as component 236 can perform a check of the fourth integrity data of the adjusted (e.g., modified) metadata corresponding to the at least one sector of the particular one of the number of codewords. The check of the fourth integrity data can include comparing the fourth integrity data of the modified metadata corresponding to the at least one sector of the particular one of the number of codewords with first integrity data corresponding to the at least one sector. For instance, the component 236 can compare the fourth integrity data (e.g., the CRC generated by integrity component 234) with a CRC generated previously (e.g., by insertion component 220 and stored in a buffer of the controller, for example). In a number of embodiments, the adjusted metadata corresponding to the at least one sector can be removed (e.g., via component 236) prior to forwarding the at least one sector to the host. However, as indicated above, in a number of embodiments, one or more portions of the modified metadata received by the component 236 (e.g., from component 234) may be provided to the host interface 206 without being removed.

[0061] The third integrity data of metadata 547-0 and 547-1 (e.g., the ECC codes of integrity data fields 558-0 and 558-1) can also be checked responsive to a partial page write operation, for instance. In such embodiments, the check of the third integrity data can be performed by a component such as component 230 shown in Figure 2. In a number of embodiments, subsequent to performing the check of the third integrity data, a check of second integrity data of the metadata 547-0 and 547-1 (e.g., the CRC of data integrity field 554) can be performed (e.g., by merge component 228). Subsequent to checking the second integrity data of metadata 547-0 and 547-1, the merge component 228 is configured to convert the metadata from the adjusted format shown in Figure 5 to a host format (e.g., a format including integrity data on a per sector basis as shown in Figure 3).
Merge component 228 can replace one or more of the sectors of the page read from the memory devices 210 with one or more host sectors received in association with the partial page write operation, and can transfer the data and associated metadata to the integrity component 224, which is configured to receive data in accordance with the host format as described above. In a number of partial page write operations, the merge component 228 may not perform a check of the second integrity data of each of the codewords corresponding to the page read from memory devices 210 (e.g., codewords 560-0 and 560-1, as well as other codewords corresponding to the read page). For instance, if the partial page write includes two sectors read from the memory devices (e.g., USER SECTOR0 and USER SECTOR1) being merged with six host sectors, then the merge component 228 can check the integrity data of integrity data field 554 (e.g., the CRC corresponding to fields 548-0, 548-1, 548-2, 548-3, and 552). However, since the other codewords corresponding to the read page do not include sectors to be merged with host sectors, the merge component 228 can be configured to not perform a check of second integrity data associated with those codewords. Similarly, in association with this partial page write example, the merge component 228 can be configured to forgo conversion of the metadata associated with those codewords of the read page that do not include sectors to be merged with host sectors from the adjusted format to the host format. Forgoing conversion of the metadata of those codewords not including sectors associated with the partial page write operation and/or forgoing checking of second integrity data associated therewith can save processing resources, among other benefits.

[0062] Figure 6 is a functional block diagram of a memory controller 608 in accordance with one or more embodiments of the present disclosure. The controller 608 can be a controller such as controller 108 described in connection with Figure 1 or a controller such as controller 208 described in connection with Figure 2.

[0063] In the example illustrated in Figure 6, the controller 608 includes a memory management component 613 and a memory control component 611. The memory management component 613 includes components (e.g., circuitry and/or firmware) associated with various memory management functions such as wear leveling (e.g., garbage collection and/or reclamation), error detection and/or correction, and/or block retirement, among various other memory management functions associated with memory devices 610-1, . . ., 610-N. The memory management component 613 can parse and/or format host commands (e.g., commands received from a host via host interface 606) into device commands (e.g., commands associated with operating the memory devices 610-1, . . ., 610-N). The memory management component 613 can also generate device commands (e.g., to accomplish various memory management functions). The memory management component 613 is configured to provide device commands to the memory control component 611.

[0064] The memory control component 611 is configured to control memory operations associated with writing data to the number of memory devices 610-1, . . ., 610-N, reading data from the memory devices 610-1, . . ., 610-N, and erasing data (e.g., blocks) within the memory devices 610-1, . . ., 610-N.
The memory operations can be operations (e.g., reads and/or writes) based on host commands (e.g., host commands received by controller 608 via host interface 606) and/or can be based on internally generated device commands initiated by control component 611 and/or memory management component 613 (e.g., in association with wear leveling, error detection and/or correction, etc.).

[0065] The memory devices 610-1, . . ., 610-N coupled to control component 611 can be nonvolatile memory devices such as devices 110-1, . . ., 110-N described in Figure 1. In the example illustrated in Figure 6, the memory devices 610-1, . . ., 610-N are NAND flash memory devices. As described above, the memory cells within memory devices 610-1, . . ., 610-N can be organized into a number of blocks having a number of physical pages associated therewith.

[0066] The memory control component 611 includes an error correction component 619 that can include an ECC engine or other circuitry configured to detect and/or correct errors associated with data being written to and/or read from the memory devices 610-1, . . ., 610-N. In a number of embodiments, the error correction component 619 can employ a BCHECC (e.g., such as that described above in connection with Figures 2, 4, and 5) to detect bit errors in data provided thereto. The detected bit errors may or may not be correctable via error correction component 619 (e.g., depending on the number of erroneous bits detected and the type of ECC used, etc.). If the number of erroneous bits is correctable via component 619, then the error is a correctable bit error, which the component 619 can proceed to correct in association with completion of the particular operation. If the number of erroneous bits is not correctable via component 619 (e.g., the number of erroneous bits exceeds a threshold number of correctable bits associated with the component 619), then the error is an uncorrectable bit error. The memory control component 611 can be configured to report correctable bit errors as well as uncorrectable bit errors to the memory management component 613.

[0067] The memory management component 613 includes a number of management tables 615. The tables 615 can maintain various information associated with the memory devices 610-1, . . ., 610-N. For instance, the tables 615 can include information regarding block age and/or block erase count for blocks within the devices 610-1, . . ., 610-N. The tables 615 can include information regarding error history associated with blocks and/or pages associated with the memory devices 610-1, . . ., 610-N. For instance, the tables 615 can maintain a number of error counts (e.g., a write operation error count, a read bit error count, a read operation error count, and/or an erase error count, among others) associated with the devices 610-1, . . ., 610-N. A write operation error refers to a write operation (e.g., host initiated or device initiated) which fails to perform. A read operation error refers to a read operation (e.g., host initiated or device initiated) which fails to perform. A read bit error can refer to a read operation that results in detection of a number of error bits associated with the data (e.g., page) being read. As described above, the number of detected errors may or may not be correctable via an error correction component (e.g., 619).
If the number of detected errors is above a threshold number of errors correctable via the error correction component (e.g., 619), the bit error is referred to as an uncorrectable bit error. The tables 615 can maintain a count of correctable and/or uncorrectable read bit errors experienced by blocks associated with the memory devices 610-1, . . ., 610-N. The tables 615 can also include LBA tables, among others.

[0068] The memory management component 613 of controller 608 includes a redundancy management component 617, which can be a RAID (redundant array of independent disks, where the term "disks" is simply a carryover from prior implementations using hard disk drives) unit 617. The RAID unit 617 can be used to provide data reliability through redundancy in association with operating the memory devices as a RAID. The RAID unit 617 can include, for example, RAID exclusive or (XOR) circuitry, among other circuitry associated with various RAID levels.

[0069] In a number of embodiments, the controller 608 is configured to actively detect and recover from error occurrences (e.g., bit errors and/or operational errors) associated with various operations, such as read and write operations, while maintaining integrity of data transferred between the host interface 606 and memory devices 610-1, . . ., 610-N. The controller 608 is configured to remove failing memory resources (e.g., pages, blocks, and/or devices 610-1, . . ., 610-N) from use to prevent future uncorrectable errors.

[0070] For instance, the memory management component 613 can initiate erase commands to the memory control component 611 (e.g., to prepare a block within the memory devices 610-1, . . ., 610-N for writing). The control component 611 can detect the occurrence of an erase error in association with the erase operation and can report the error occurrence to the memory management component 613. In the case of an erase error, data recovery is not necessary since any valid data within the block would have been moved to a different block prior to performance of the erase operation. The block associated with the erase error can be retired when the number of erase errors reaches a threshold number. In a number of embodiments, the threshold number of erase errors is one; however, embodiments are not so limited.

[0071] In various embodiments, the controller 608 can perform various memory management functions such as wear leveling functions (e.g., garbage collection and/or reclamation), error detection/correction functions (e.g., associated with ECC), page and/or block retirement functions, and/or RAID functions, among various other memory management functions associated with memory devices 610-1, . . ., 610-N, while the controller is performing various device operations and/or host operations (e.g., read and/or write operations, etc.). As such, the various memory management functions can be performed without a perceived impact to a user. The ability of the controller 608 to perform error recovery operations while maintaining data integrity in accordance with embodiments described herein can reduce the amount of device manufacturing testing that is done (e.g., testing performed pre-shipping prior to being provided to a consumer in the field). For instance, embodiments of the present disclosure can reduce or prevent the occurrence of device manufacturing testing associated with determining location of bad blocks or bad memory devices (e.g., 610-1, . . ., 610-N),
since such testing can be performed in the field via hardware components such as those components illustrated in Figure 6 (e.g., 611, 613, 615, 617, and 619). That is, the memory devices 610-1, . . ., 610-N can be untested with respect to bad block locations associated with the memory devices 610-1, . . ., 610-N. Reducing and/or eliminating pre-shipping testing of memory devices (e.g., 610-1, . . ., 610-N) by shifting such testing to hardware components of the controller 608 can reduce costs associated with pre-shipping testing of the devices while producing little, if any, perceived impact to the user.

[0072] As an example, in connection with a host read operation, data read from the memory devices 610-1, . . ., 610-N can be checked using error correction component 619 to determine (e.g., detect) bit errors in the data. If bit errors are detected and the number of bit errors is below the threshold number of bit errors correctable by the component 619 (e.g., the bit error is a correctable bit error), the data will be corrected and provided to the host. The occurrence of the correctable bit error is reported to the memory management component 613, which can maintain a bit error count (e.g., a count of correctable bit errors) associated with the block from which the data was read (e.g., in a table 615), and can retire the block if the bit error count exceeds a particular bit error count threshold. If the number of detected bit errors is above the threshold number of bit errors correctable by the component 619 (e.g., the bit error is an uncorrectable bit error), the requested read data can be automatically recovered using the RAID unit 617 and can be provided to the host. In a number of embodiments, the memory management component 613 automatically initiates the recovery of the read data responsive to a received indication of the occurrence of the uncorrectable bit error (e.g., via a report from error correction component 619). The memory management component 613 can maintain a bit error count (e.g., a count of uncorrectable bit errors) associated with the block from which the data was read (e.g., in a table 615), and can retire the block if the bit error count exceeds a particular bit error count threshold. Prior to retiring a block (e.g., in response to the bit error count exceeding the bit error count threshold) the data associated with the block can be recovered using RAID unit 617 and the data can be moved to a new block (e.g., an available good block).

[0073] Bit errors associated with a device read operation (e.g., a read operation initiated by the control component 611 in association with a memory management process such as reclamation) can be handled similarly. For instance, if a correctable bit error is detected by component 619, the data can be corrected and moved to a new block within the memory devices 610-1, . . ., 610-N (as opposed to being provided to the host in association with a host read operation). The occurrence of the correctable bit error is reported to the memory management component 613, which can maintain the bit error count associated with the block from which the data was read, and can retire the block if the bit error count exceeds a particular bit error count threshold.
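The per-block bookkeeping running through [0072] and [0073] might look like the following sketch (the prose continues with the uncorrectable device-read case after it); the threshold value and the raid_recover()/move_data() hooks are assumptions rather than details taken from the disclosure:

    def handle_read_bit_error(tables, retired, block, correctable,
                              raid_recover, move_data, threshold=8):
        # tables maps a block to its bit error count (one entry of the
        # management tables 615); retired is the set of retired blocks.
        tables[block] = tables.get(block, 0) + 1
        if not correctable:
            # Uncorrectable bit error: automatically recover the data via
            # the RAID unit before it is provided or moved.
            move_data(raid_recover(block))
        if tables[block] > threshold and block not in retired:
            # Recover the block's remaining data, then retire the block.
            move_data(raid_recover(block))
            retired.add(block)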
If the number of detected bit errors is above the threshold number of bit errors correctable by the component 619 (e.g., the bit error is an uncorrectable bit error), the requested read data can be immediately recovered using the RAID unit 617 and can be moved to a new block. The occurrence of the uncorrectable bit error is reported to the memory management component 613, which can maintain the bit error count associated with the block from which the data was read, and can retire the block if the bit error count exceeds a particular bit error count threshold. Prior to retiring a block (e.g., in response to the bit error count exceeding the bit error count threshold) the data associated with the block can be recovered using RAID unit 617 and the data can be moved to a new block (e.g., an available good block).

[0074] Operational read errors (e.g., operational host read errors and operational device read errors) can be handled in a similar manner to that described above in connection with bit errors corresponding to host and device read operations. For example, if a read request fails, an indication of the failure can be provided to memory management component 613, which can automatically initiate recovery of the requested read data (e.g., via RAID unit 617).

[0075] Operational write errors (e.g., operational host write errors and operational device write errors) can be handled in a similar manner to that described above in connection with bit errors corresponding to host and device read operations and/or with operational read errors. For example, if a write request fails, an indication of the failure can be provided to memory management component 613, which can automatically initiate recovery of the requested write data (e.g., via RAID unit 617). In a number of embodiments, the memory management component 613 automatically initiates the recovery of the write data responsive to a received indication of the occurrence of the write error (e.g., via a report from control component 611). The memory management component 613 can maintain a write error count associated with the block to which the data was to be written (e.g., in a table 615), and can retire the block if the write error count exceeds a particular write error count threshold. In connection with automatic recovery of the write data (e.g., via RAID unit 617), the recovered write data can be moved to a new block (e.g., an available good block).

Conclusion

[0076] The present disclosure includes apparatus (e.g., computing systems, memory systems, controllers, etc.) and methods for providing data integrity. One or more method embodiments can include, for example: receiving a number of sectors of data to be written to a number of memory devices (e.g., a single memory device); appending first metadata corresponding to the number of sectors and including first integrity data to the number of sectors, the first metadata having a particular format; generating second integrity data to be provided in second metadata, the second integrity data corresponding to at least one of the number of sectors (wherein the second metadata has a second format); and generating third integrity data to be provided in the second metadata, the third integrity data including error data corresponding to the second integrity data and the at least one of the number of sectors.
[0077] It will be understood that when an element is referred to as being "on," "connected to" or "coupled with" another element, it can be directly on, connected, or coupled with the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly on," "directly connected to" or "directly coupled with" another element, there are no intervening elements or layers present.

[0078] As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. As used herein, the term "or," unless otherwise noted, means logically inclusive or. That is, "A or B" can include (only A), (only B), or (both A and B). In other words, "A or B" can mean "A and/or B" or "one or more of A and B."

[0079] It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. Thus, a first element could be termed a second element without departing from the teachings of the present disclosure.

[0080] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

[0081] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
Memory eviction that recognizes that not all evictions have an equal cost to system performance. A management device keeps a weight and/or a count associated with each portion of memory. Each memory portion is associated with a source agent that generates requests to the memory portion. The management device adjusts the weight by a cost factor indicating a latency impact that could occur if the evicted memory portion is again requested after being evicted. The latency impact is a latency impact for the associated source agent to replace the memory portion. In response to detecting an eviction trigger for the memory device, the management device can identify a memory portion having a most extreme weight, such as a highest or lowest value weight. The management device replaces the identified memory portion with a memory portion that triggered the eviction. |
CLAIMS
What is claimed is:
1. A method for managing eviction from a memory device, comprising:
initializing a count for one of multiple memory portions in a memory device, including associating the count with a source agent that accesses the one memory portion;
adjusting the count based on access to the one memory portion by the associated source agent;
adjusting the count based on a dynamic cost factor for the associated source agent, where the dynamic cost factor represents a latency impact to performance of the source agent to replace the memory portion; and
comparing the count to counts for others of the multiple portions to determine which memory portion to evict in response to an eviction trigger for the memory device.
2. The method of claim 1, wherein the memory device comprises a main memory resource for a host system.
3. The method of claim 2, wherein the comparing comprises comparing with a memory controller device.
4. The method of claim 2, wherein initializing the count comprises initializing the count in response to receiving a request from a lower-level memory requesting data.
5. The method of claim 1, wherein comparing the count further comprises identifying for eviction one of the multiple memory portions having a lowest cost.
6. The method of claim 5, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending for the associated source agent.
7. The method of claim 1, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor.
8. A memory management device, comprising:
a queue to store requests for access to a memory device managed by the memory management device;
an eviction table to store a weight associated with each of multiple memory portions of the memory device, each of the multiple memory portions having an associated source agent that generates requests for data stored in the memory portion, wherein each weight is factored based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and
an eviction processor configured to initialize a count for one of the memory portions; adjust the count based on access to the one memory portion by the associated source agent; adjust the count based on a dynamic cost factor for the associated source agent; and compare the count to counts for others of the multiple memory portions to determine which memory portion to evict in response to an eviction trigger for the memory device.
9. The memory management device of claim 8, wherein the memory device comprises a DRAM (dynamic random access memory) resource for a host system, wherein the DRAM is a highest level memory of a multilevel memory (MLM) system, wherein the eviction processor is to detect the eviction trigger in response to a page fault occurring in response to servicing a request from a cache of the MLM.
10. The memory management device of claim 8, wherein the eviction processor is to identify the memory portion having a lowest cost to evict.
11. The memory management device of claim 10, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending in the queue for the associated source agent.
12.
An electronic device with a memory subsystem, comprising:
an SDRAM (synchronous dynamic random access memory) including a memory array to store multiple memory portions, each of the multiple memory portions having an associated source agent that generates requests for data stored in the SDRAM, wherein each weight is computed based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and
a memory controller to control access to the SDRAM, the memory controller including
a queue to store requests for access to the SDRAM;
an eviction table to store a weight associated with each of multiple memory portions; and
an eviction processor configured to initialize a count for one of the memory portions; adjust the count based on access to the one memory portion by the associated source agent; adjust the count based on a dynamic cost factor for the associated source agent; and compare the count to counts for others of the multiple memory portions to determine which memory portion to evict in response to an eviction trigger for the memory device; and
a touchscreen display coupled to generate a display based on data accessed from the SDRAM.
13. An article of manufacture comprising a computer readable storage medium having content stored thereon, which when accessed causes a computing device to perform operations for managing eviction from a memory device in accordance with any of claims 1 to 7.
14. An apparatus for managing eviction from a memory device comprising means for performing operations for managing eviction from a memory device in accordance with any of claims 1 to 7.
15. A method for managing eviction from a memory device, comprising:
detecting an eviction trigger in a memory device, where the eviction trigger indicates one of multiple portions of memory should be removed from the memory device, each memory portion having an associated weight and an associated source agent that generates requests for data stored in the memory portion;
identifying a memory portion having a most extreme weight, wherein each weight is computed based on access history for the memory portion and adjusted by a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and
replacing the memory portion identified as having the most extreme weight, with a memory portion that triggered the eviction.
16. The method of claim 15, wherein the memory device comprises a main memory resource for a host system.
17. The method of claim 16, wherein detecting the eviction trigger comprises detecting the eviction trigger with a memory controller device.
18. The method of claim 16, wherein detecting the eviction trigger comprises receiving a request from a lower-level memory requesting data that causes a miss in the memory device.
19. The method of claim 15, wherein identifying the memory portion having the most extreme weight comprises identifying the memory portion having a lowest cost to evict.
20. The method of claim 15, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending for the associated source agent.
21. The method of claim 15, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor.
22.
A memory management device, comprising:
a queue to store requests for access to a memory device managed by the memory management device;
an eviction table to store a weight associated with each of multiple memory portions of the memory device, each of the multiple memory portions having an associated source agent that generates requests for data stored in the memory portion, wherein each weight is factored based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and
an eviction processor configured to detect an eviction trigger indicating one of the multiple memory portions should be removed from the memory device; identify a memory portion having a most extreme weight in the eviction table; and replace the memory portion identified as having the most extreme weight with a memory portion that triggered the eviction.
23. An electronic device with a memory subsystem, comprising:
an SDRAM (synchronous dynamic random access memory) including a memory array to store multiple memory portions, each of the multiple memory portions having an associated source agent that generates requests for data stored in the SDRAM, wherein each weight is computed based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and
a memory controller to control access to the SDRAM, the memory controller including
a queue to store requests for access to the SDRAM;
an eviction table to store a weight associated with each of multiple memory portions; and
an eviction processor configured to detect an eviction trigger indicating one of the multiple memory portions should be removed from the SDRAM; identify a memory portion having a most extreme weight in the eviction table; and replace the memory portion identified as having the most extreme weight with a memory portion that triggered the eviction; and
a touchscreen display coupled to generate a display based on data accessed from the SDRAM.
24. An article of manufacture comprising a computer readable storage medium having content stored thereon, which when accessed causes a computing device to perform operations for managing eviction from a memory device in accordance with any of claims 15 to 21.
25. An apparatus for managing eviction from a memory device comprising means for performing operations for managing eviction from a memory device in accordance with any of claims 15 to 21. |
COST-AWARE PAGE SWAP AND REPLACEMENT IN A MEMORY

FIELD

[0001] Embodiments of the invention are generally related to memory management, and more particularly to cost-aware page swap and replacement in a memory.

COPYRIGHT NOTICE/PERMISSION

[0002] Portions of the disclosure of this patent document may contain material that is subject to copyright protection. The copyright owner has no objection to the reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The copyright notice applies to all data as described below, and in the accompanying drawings hereto, as well as to any software described below: Copyright © 2014, Intel Corporation, All Rights Reserved.

BACKGROUND

[0003] When a memory device stores data near capacity or at capacity, it needs to replace data to be able to store new data in response to additional data access requests from running applications. Some running applications are more sensitive to latency, while others are more sensitive to bandwidth constraints. A memory manager traditionally determines what portion of memory to replace or swap in an attempt to reduce the number of faults or misses. However, reducing the total number of faults or misses may not be best for performance, because some faults are more costly than others from the point of view of the running application workload.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The following description includes discussion of figures having illustrations given by way of example of implementations of embodiments of the invention. The drawings should be understood by way of example, and not by way of limitation. As used herein, references to one or more "embodiments" are to be understood as describing a particular feature, structure, and/or characteristic included in at least one implementation of the invention. Thus, phrases such as "in one embodiment" or "in an alternate embodiment" appearing herein describe various embodiments and implementations of the invention, and do not necessarily all refer to the same embodiment.
However, they are also not necessarily mutually exclusive.

[0005] Figure 1A is a block diagram of an embodiment of a system that implements memory eviction with a cost-based factor.

[0006] Figure 1B is a block diagram of an embodiment of a system that implements memory eviction at a memory controller with a cost-based factor.

[0007] Figure 2 is a block diagram of an embodiment of a system that implements memory eviction with a cost-based factor in a multilevel memory system.

[0008] Figure 3 is a block diagram of an embodiment of a system that implements memory eviction based on a count having an LRU factor and a cost-based factor.

[0009] Figure 4 is a flow diagram of an embodiment of a process for managing eviction from a memory device.

[0010] Figure 5 is a flow diagram of an embodiment of a process for selecting an eviction candidate.

[0011] Figure 6 is a flow diagram of an embodiment of a process for managing an eviction count.

[0012] Figure 7 is a block diagram of an embodiment of a computing system in which cost-based eviction management can be implemented.

[0013] Figure 8 is a block diagram of an embodiment of a mobile device in which cost-based eviction management can be implemented.

[0014] Descriptions of certain details and implementations follow, including a description of the figures, which may depict some or all of the embodiments described below, as well as discussing other potential embodiments or implementations of the inventive concepts presented herein.

DETAILED DESCRIPTION

[0015] As described herein, memory eviction accounts for the different costs of eviction on system performance. Instead of merely keeping a weight or a value based on recency and/or use of a particular portion of memory, the memory eviction can be configured to evict memory portions that have a lower cost impact on system performance. In one embodiment, a management device keeps a weight and/or a count associated with each memory portion, which includes a cost factor. Each memory portion is associated with an application or a source agent that generates requests to the memory portion. The cost factor indicates a latency impact on the source agent that could occur if an evicted memory portion is again requested after being evicted, or a latency impact to replace the evicted memory portion. In response to detecting an eviction trigger for the memory device, the management device can identify a memory portion having a most extreme weight, such as a highest or lowest weight. The system can be configured to make a lowest weight or a highest weight correspond to a highest cost of eviction. In one embodiment, the management device keeps memory portions that have a higher cost of eviction, and replaces the memory portion having a lowest cost of eviction. Thus, the system can be configured to evict the memory portions that will have the least effect on system performance. In one embodiment, using the cost-based approach described can improve latency in a system that has latency-sensitive workloads.

[0016] It will be understood that different memory architectures can be used. Single level memories (SLMs) have a single level of memory resources. A memory level refers to devices that have the same or substantially similar access times. A multilevel memory (MLM) includes multiple levels of memory resources. Each level of the memory resources has a different access time, with faster memories closer to the processor or processor core, and slower memories further from the core.
Typically, in addition to being faster, the closer memories tend to be smaller, and the slower memories tend to have more storage space. In one embodiment, the highest level of memory in a system is referred to as main memory, while the other layers can be referred to as caches. The highest level of memory obtains data from a storage resource.

[0017] The cost-based approach described herein can be applied to an SLM or an MLM. While architectures and implementations may differ, in one embodiment, eviction in an SLM can be referred to as occurring in connection with page replacement, and eviction in an MLM can be referred to as occurring in connection with page swap. As will be understood by those skilled in the art, page replacement and page swap refer to evicting or removing data from a memory resource to make room for data from a higher level or from storage. In one embodiment, all memory resources in an SLM or an MLM are volatile memory devices. In one embodiment, one or more levels of memory include nonvolatile memory. Storage is nonvolatile memory.

[0018] In one embodiment, memory management associates a weight with every page or memory portion to implement cost-aware page or portion replacement. It will be understood that implementing weights is one non-limiting example. Traditionally, weights associated with memory pages are derived solely from recency information (e.g., LRU (least recently used) information only). As described herein, memory management can associate a weight or other count with every page based on recency information (e.g., LRU information) and modify or adjust the weight or count based on cost information. Ideally, pages or portions that are more recently accessed and that are associated with high cost would not be selected for replacement or swap. Instead, the memory management would select an eviction candidate from a page that is not recent and is also associated with low cost.

[0019] In one embodiment, the memory management generates a cost measurement that can be expressed as:

Weight = Recency + a(Cost)

[0020] The weight is the result to store or the count to use to determine candidacy for eviction. In one embodiment, the memory management computes Recency for a page or portion in accordance with a known LRU algorithm. In one embodiment, the memory management computes Cost for a page or portion in accordance with an amount of parallelism for the source agent associated with the page or portion. For example, in one embodiment, the cost is inversely proportional to the number of requests made over a period of time, or to the number of requests currently pending in a request queue. The factor a can be used to increase or reduce the weight of the cost-based factor relative to the recency factor. It will be seen that when a = 0, the weight of a page or portion is decided solely based on recency information.

[0021] In one embodiment, a is a dynamically adjustable factor. The value of a should be trained to give the proper weight for the cost. In one embodiment, training is performed offline based on a list of applications running on a defined architecture to find the proper value of a for specific pending queue counts, on average across all applications. In one embodiment, the value of a can be modified based on a performance or condition of the system that performs the cache management.
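To make the weight computation concrete, the following is a minimal Python sketch of the equation above. It is an illustration, not an implementation from this disclosure; the function name and the choice of 1/N as the cost term (with N as the source agent's pending request count, per the parallelism discussion) are assumptions for the example.

```python
def eviction_weight(recency, pending_requests, a):
    """Weight = Recency + a(Cost), with Cost inversely proportional to the
    source agent's parallelism (a sketch; names are illustrative)."""
    # An agent with few pending requests hides latency poorly, so its
    # pages carry a higher replacement cost.
    cost = 1.0 / max(pending_requests, 1)
    # With a = 0, the weight reduces to pure recency information.
    return recency + a * cost

# A recently used page of a low-parallelism agent scores high (keep it);
# a stale page of a high-parallelism agent scores low (evict it first).
print(eviction_weight(recency=90, pending_requests=1, a=50))    # 140.0
print(eviction_weight(recency=10, pending_requests=100, a=50))  # 10.5
```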
[0022] Reference to memory devices can apply to different memory types. Memory devices generally refer to volatile memory technologies. Volatile memory is memory whose state (and therefore the data stored on it) is indeterminate if power is interrupted to the device. Nonvolatile memory refers to memory whose state is determinate even if power is interrupted to the device. Dynamic volatile memory requires refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (dynamic random access memory), or some variant such as synchronous DRAM (SDRAM). A memory subsystem as described herein may be compatible with a number of memory technologies, such as DDR3 (double data rate version 3, original release by JEDEC (Joint Electronic Device Engineering Council) on June 27, 2007, currently on release 21), DDR4 (DDR version 4, initial specification published in September 2012 by JEDEC), LPDDR3 (low power DDR version 3, JESD209-3B, Aug 2013 by JEDEC), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (currently in discussion by JEDEC), WIO3 (Wide I/O 3, currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications.

[0023] In addition to, or alternatively to, volatile memory, in one embodiment, reference to memory devices can refer to a nonvolatile memory device whose state is determinate even if power is interrupted to the device. In one embodiment, the nonvolatile memory device is a block addressable memory device, such as NAND or NOR technologies. Thus, a memory device can also include future generation nonvolatile devices, such as a three dimensional crosspoint memory device, or other byte addressable nonvolatile memory devices. In one embodiment, the memory device can be or include multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) that incorporates memristor technology, or spin transfer torque (STT)-MRAM, or a combination of any of the above, or other memory.

[0024] Figure 1A is a block diagram of an embodiment of a system that implements memory eviction with a cost-based factor. System 102 represents elements of a memory subsystem. The memory subsystem includes at least memory management 120 and memory device 130. Memory device 130 includes multiple portions of memory 132. In one embodiment, each portion 132 is a page (e.g., 4k bytes in certain computing systems). In one embodiment, each portion 132 is a different size than a page. The page size can be different for different implementations of system 102. The page can refer to a basic unit of data referenced at a time within memory 130.

[0025] Host 110 represents a hardware and software platform for which memory 130 stores data and/or code. Host 110 includes processor 112 to execute operations within system 102. In one embodiment, processor 112 is a single-core processor. In one embodiment, processor 112 is a multicore processor. In one embodiment, processor 112 represents a primary computing resource in system 102 that executes a primary operating system.
In one embodiment, processor 112 represents a graphics processor or peripheral processor. Operations by processor 112 generate requests for data stored in memory 130.

[0026] Agents 114 represent programs executed by processor 112, and are source agents for access requests to memory 130. In one embodiment, agents 114 are separate applications, such as end-user applications. In one embodiment, agents 114 include system applications. In one embodiment, agents 114 represent threads or processes or other units of execution within host 110. Memory management 120 manages access by host 110 to memory 130. In one embodiment, memory management 120 is part of host 110. In one embodiment, memory management 120 can be considered part of memory 130. Memory management 120 is configured to implement eviction of portions 132 based at least in part on a cost factor associated with each portion. In one embodiment, memory management represents a module executed by a host operating system on processor 112.

[0027] As illustrated, memory management 120 includes processor 126. Processor 126 represents hardware processing resources that enable memory management 120 to compute a count or weight for memory portions 132. In one embodiment, processor 126 is or is part of processor 112. In one embodiment, processor 126 executes an eviction algorithm. Processor 126 represents computing hardware that enables memory management 120 to compute information that is used to determine which memory portion 132 to evict in response to an eviction trigger. Thus, in one embodiment, processor 126 can be referred to as an eviction processor, referring to computing the counts or weights used to select an eviction candidate.

[0028] Memory management 120 bases eviction or swap from memory 130 at least in part on a cost to an associated agent 114 for the specific eviction candidate. Thus, memory management 120 will preferably evict or swap out a low cost page. In a latency-constrained system, high cost is associated with a memory portion (e.g., a page) that would cause a more significant performance hit for a miss of that memory portion. Thus, if the memory portion was evicted and a subsequent request required the memory portion to be accessed again, it would have a more significant impact on performance if it caused more delay than another memory portion.

[0029] In one embodiment, the cost is inversely related to how much parallelism in requests is supported by the application. Certain memory requests require access to and operation on certain data prior to being able to request additional data, which increases how serial the requests are. Some memory requests can be performed in parallel with other requests, or they are not dependent on operation with respect to the memory portion prior to accessing another portion. Thus, parallel requests can have a lower cost relative to latency, and serial requests have a higher latency cost.

[0030] Consider a stream of cache misses passed down a memory hierarchy. Memory management 120 can send parallel cache misses P1, P2, P3, and P4 down the memory hierarchy. The memory management can also send serial cache misses S1, S2, and S3. Parallel cache misses can be sent down the memory hierarchy in parallel and hence share the cost of the cache miss (i.e., hide the memory latency well). In contrast, the serial misses will be sent down the memory hierarchy serially and cannot share the latency. Thus, the serial misses are more sensitive to memory latency, making cache blocks accessed by these misses more costly than those accessed by parallel misses.
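A rough numeric illustration of this latency sharing follows; the 100-cycle miss latency is an assumed figure for the example, not a value from the disclosure:

```python
MISS_LATENCY = 100  # assumed miss latency in clock cycles (illustrative)

# Parallel misses P1..P4 overlap in flight, so one round of latency is
# shared across all four: the effective wait per miss shrinks.
parallel = 4
print(MISS_LATENCY / parallel)  # 25.0 effective cycles per parallel miss

# Serial misses S1..S3 are dependent: each must complete before the next
# can issue, so every miss pays the full latency.
serial = 3
print(serial * MISS_LATENCY)    # 300 total cycles, 100 per serial miss
```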
[0031] From the level of memory 130, if a page fault (for SLM) or a page miss (for MLM) occurs, the page fault/miss can share the cost of the page fault or page swap if there are many requests from the same source agent 114 pending. An agent 114 with a low number of requests would be more sensitive to the latency. Thus, agents 114 with higher memory level parallelism (MLP) can hide latency by issuing many requests to main memory 130. Portions or pages 132 associated with agents 114 that are higher-MLP applications should be less costly to replace than those associated with an agent 114 that is an application that does not show a high level of MLP (such as pointer chasing applications). When MLP is low, the agent sends fewer parallel requests to memory 130, which makes the program more sensitive to latency.

[0032] Similar to what is described above, memory management 120 can implement cost-aware replacement by computing a cost or a weight associated with each portion 132. System 102 illustrates memory management 120 with queue 122. Queue 122 represents pending memory access requests from agents 114 to memory 130. The depth of queue 122 is different for different implementations. The depth of queue 122 can affect what scaling factor a (or equivalent for different weight calculations) should be used to add a cost-based contribution to the weight. Herein, the expression "eviction count" can be used to refer to a value or a weight computed for a memory portion that includes a cost portion. In one embodiment, memory management 120 implements the equation described above, where a weight is computed as a sum of recency information and a scaled version of the cost. As described previously, in one embodiment, the cost factor is scaled in accordance with trained information for the architecture of system 102. It will be understood that the example does not represent all ways memory management 120 can implement cost-aware eviction/replacement. The trained information is information gathered during offline training of the system, where the system is tested under different loads, configurations, and/or operations to identify anticipated performance/behavior. Thus, the cost factor can be made to scale in accordance with observed performance for a specific architecture or other condition.

[0033] Recency information can include an indication of how recently a certain memory portion 132 was accessed by an associated agent 114. Techniques for keeping recency information are understood in the art, such as techniques used in LRU (least recently used) or MRU (most recently used) implementations, or similar techniques. In one embodiment, recency information can be considered a type of access history information. For example, access history can include an indication of when a memory portion was last accessed. In one embodiment, access history can include an indication of how frequently the memory portion has been accessed. In one embodiment, access history can include information that both indicates when the memory portion was last used and how often the memory portion has been used (e.g., how "hot" a memory portion is). Other forms of access history are known.

[0034] In one embodiment, memory management 120 can dynamically adjust the scaling factor a based on an implementation of system 102.
For example, memory management 120 may perform different forms of prefetching. In one embodiment, in response to different levels of aggressiveness in the prefetching, memory management 120 can adjust the scaling factor a used to compute cost to determine eviction candidates. For example, aggressive prefetching may provide a false appearance of MLP at the memory level.

[0035] In one embodiment, memory management 120 includes prefetch data in queue 122, which includes requests for data not yet requested by an application, but which is expected to be needed in the near future subsequent to the requested data. In one embodiment, memory management 120 ignores prefetch requests when computing a weight or count to use to determine eviction candidates. Thus, memory management 120 can treat prefetch requests as requests for purposes of computing a cost, or can ignore the prefetch requests for purposes of computing a cost. It may be preferable to have memory management 120 take prefetch requests into account when computing a weight if system 102 includes a well-trained prefetcher.

[0036] It will be understood that certain agents 114 may be CPU (central processing unit) bound applications with a low count of memory references. In one embodiment, such agents will be perceived to have low MLP, which could result in a high cost. However, by including a recency factor in the count or weight, it will also be understood that such CPU bound applications can have a low recency component, which can offset the impact of the high cost. In one embodiment, the weight or count is a count that includes a value indicating how recently a memory portion 132 was accessed.

[0037] In one embodiment, table 124 represents information maintained by memory management 120 to manage eviction. In different implementations, table 124 can be referred to as an eviction table, a weight table, an eviction candidate table, or others. In one embodiment, table 124 includes a count or a weight for each memory portion 132 cached in memory 130. In one embodiment, reference could be made to memory management 120 "storing" certain pages or memory portions 132 of data. It will be understood that memory management 120 is not necessarily part of the memory where the actual data is stored. However, such a statement expresses the fact that memory management 120 can include table 124 and/or another mechanism to track the data elements stored in memory 130. Additionally, when items are removed from monitoring by memory management 120, the data is overwritten in memory 130, or at least is made available to be overwritten.

[0038] In one embodiment, memory management 120 computes a cost factor or a cost component of the weight by incrementing a cost counter by 1/N, where N is the number of parallel requests currently queued for the source agent 114 associated with the portion. In one embodiment, the memory management increments the cost by 1/N for every clock cycle of a clock associated with memory 130. Thus, for example, consider two agents 114, labeled for this example as Agent0 and Agent1. Assume that Agent0 has a single request pending in queue 122. Assume further that Agent1 has 100 requests pending in queue 122. If the agents must wait 100 clock cycles for a return of data from a cache miss, both Agent0 and Agent1 will see 100 cycles. However, Agent1 has 100 requests pending, and so the latency can be seen as effectively approximately 1 cycle per request, while Agent0 sees an effective latency of approximately 100 cycles per request.
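The per-cycle 1/N increment can be sketched as follows. This is a minimal illustration of the counter just described, with assumed class and field names; a real controller would implement it in hardware rather than as per-portion Python loops.

```python
from collections import defaultdict

class CostCounters:
    """Per-portion cost counters incremented by 1/N each memory clock
    cycle, where N is the number of requests currently queued for the
    portion's source agent (names are illustrative)."""

    def __init__(self):
        self.cost = defaultdict(float)   # portion -> accumulated cost
        self.owner = {}                  # portion -> source agent
        self.pending = defaultdict(int)  # agent -> queued request count

    def tick(self):
        """Called once per clock cycle of the memory clock."""
        for portion, agent in self.owner.items():
            n = max(self.pending[agent], 1)
            self.cost[portion] += 1.0 / n  # low parallelism -> high cost

# Agent0 has 1 pending request; Agent1 has 100. After 100 cycles the
# portion owned by Agent0 accumulates cost 100, versus about 1 for
# Agent1's portion, mirroring the example above.
cc = CostCounters()
cc.owner = {"page_a": "agent0", "page_b": "agent1"}
cc.pending.update({"agent0": 1, "agent1": 100})
for _ in range(100):
    cc.tick()
print(round(cc.cost["page_a"], 2), round(cc.cost["page_b"], 2))  # 100.0 1.0
```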
It will be understood that different calculations can be used. Whatever the calculation, in one embodiment, memory management 120 computes a cost factor that indicates the ability of a source agent 114 to hide latency incurred while waiting for service of a memory access request in operation of system 102.

[0039] Figure 1B is a block diagram of an embodiment of a system that implements memory eviction at a memory controller with a cost-based factor. System 104 represents components of a memory subsystem, and can be one example of a system in accordance with system 102 of Figure 1A. Like reference numbers between systems 104 and 102 can be understood to identify similar components, and the descriptions above can apply equally well to these components.

[0040] In one embodiment, system 104 includes memory controller 140, which is a circuit or chip that controls access to memory 130. In one embodiment, memory 130 is a DRAM device. In one embodiment, memory 130 represents multiple DRAM devices, such as all devices associated with memory controller 140. In one embodiment, system 104 includes multiple memory controllers, each associated with one or more memory devices. Memory controller 140 is or includes memory management 120.

[0041] In one embodiment, memory controller 140 is a standalone component of system 104. In one embodiment, memory controller 140 is part of processor 112. In one embodiment, memory controller 140 includes a controller or processor circuit integrated onto a host processor or host system on a chip (SoC). The SoC can include one or more processors as well as other components, such as memory controller 140 and possibly one or more memory devices. In one embodiment, system 104 is an MLM system, with cache 116 representing a small, volatile memory resource close to processor 112. In one embodiment, cache 116 is located on-chip with processor 112. In one embodiment, cache 116 is part of an SoC with processor 112. For cache misses in cache 116, host 110 sends a request to memory controller 140 for access to memory 130.

[0042] Figure 2 is a block diagram of an embodiment of a system that implements memory eviction with a cost-based factor in a multilevel memory system. System 200 represents a multilevel memory system architecture for components of a memory subsystem. In one embodiment, system 200 is one example of a memory subsystem in accordance with system 102 of Figure 1A, or system 104 of Figure 1B. System 200 includes host 210, multilevel memory 220, and storage 240. Host 210 represents a hardware and software platform for which the memory devices of MLM 220 store data and/or code. Host 210 includes processor 212 to execute operations within system 200. Operations by processor 212 generate requests for data stored in MLM 220. Agents 214 represent programs or source agents executed by processor 212, and their execution generates requests for data from MLM 220. Storage 240 is a nonvolatile storage resource from which data is loaded into MLM 220 for execution by host 210. For example, storage 240 can include a hard disk drive (HDD), solid state drive (SSD), tape drive, nonvolatile memory device such as Flash, NAND, PCM (phase change memory), or others.

[0043] Each of the N levels of memory 230 includes memory portions 232 and management 234. Each memory portion 232 is a segment of data that is addressable within the memory level 230. In one embodiment, each level 230 includes a different number of memory portions 232.
In one embodiment, level 230[0] is integrated onto processor 212 or integrated onto an SoC of processor 212. In one embodiment, level 230[N-1] is main system memory (such as multiple channels of SDRAM), which directly requests data from storage 240 if a request at level 230[N-1] results in a miss.

[0044] In one embodiment, each memory level 230 includes separate management 234. In one embodiment, management 234 at one or more memory levels 230 implements cost-based eviction determinations. In one embodiment, each management 234 includes a table or other storage to maintain a count or weight for each memory portion 232 stored at that memory level 230. In one embodiment, any one or more management 234 (such as management 234[N-1] of a highest level memory or main memory 230[N-1]) accounts for access history to the memory portions 232 stored at that level of memory as well as cost information as indicated by a parallelism indicator.

[0045] Figure 3 is a block diagram of an embodiment of a system that implements memory eviction based on a count having an LRU factor and a cost-based factor. System 300 illustrates components of a memory subsystem, including memory management 310 and memory 320. System 300 can be one example of a memory subsystem in accordance with any embodiment described herein. System 300 can be an example of system 102 of Figure 1A, system 104 of Figure 1B, or system 200 of Figure 2. In one embodiment, memory 320 represents a main memory device for a computing system. In one embodiment, memory 320 stores multiple pages 322. Each page includes a block of data, which can include many bytes of data. Each of N pages 322 can be said to be addressable within memory 320.

[0046] In one embodiment, memory management 310 is or includes logic to manage the eviction of pages 322 from memory 320. In one embodiment, memory management 310 is executed as management code on a processor configured to execute the memory management. In one embodiment, memory management 310 is executed by a host processor or primary processor in the computing device of which system 300 is a part. Algorithm 312 represents the logical operations performed by memory management 310 to implement eviction management. The eviction management can be in accordance with any embodiment described herein of maintaining counts or weights, determining an eviction candidate, and associated operations.

[0047] In one embodiment, algorithm 312 is configured to execute a weight calculation in accordance with the equation provided above. In one embodiment, memory management 310 includes multiple counts 330 to manage eviction candidates. Counts 330 can be the weights referred to above, or some other count used to determine which page 322 should be evicted in response to a trigger to perform an eviction. In one embodiment, memory management 310 includes a count 330 for each page 322 in memory 320. In one embodiment, count 330 includes two factors or components: LRU factor 332 and cost factor 334.

[0048] LRU factor 332 refers to an LRU calculation or other calculation that takes into account the recent access history of each page 322. Cost factor 334 refers to a count or computed value or other value used to indicate the relative cost of replacing an associated page. In one embodiment, algorithm 312 includes a scaling factor that enables memory management 310 to change the weight or contribution of cost factor 334 to count 330. In one embodiment, memory management 310 keeps a counter (not specifically shown) for computing LRU factor 332. For example, in one embodiment, each time an associated page 322 is accessed, memory management 310 can update LRU factor 332 with the value of the counter. Thus, a higher number can represent more recent use. In one embodiment, memory management 310 increments count 330 by an amount that accounts for a level of parallelism of a source agent associated with the page the count is for. For example, cost factor 334 can include an increment each clock cycle of one divided by the number of pending memory access requests. Thus, a higher number can represent a higher cost to replace. Both examples, for LRU factor 332 and cost factor 334, are described in which higher values indicate a preference to keep a particular memory page 322. Thus, memory management 310 can be configured to evict the page with the lowest count 330. Additionally, it will be understood by those skilled in the art that each factor or component described could alternatively be oriented to the negative, or to subtract or add a reciprocal, or perform other operation(s) that would make a low number indicate a preference to be kept, causing the page with the highest count 330 to be evicted.
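A count structured this way might be sketched as follows. The class shape and the "higher means keep" orientation follow the description of count 330 above, while the names and example values are illustrative assumptions:

```python
import itertools

_access_clock = itertools.count(1)  # monotonic counter backing LRU factor 332

class PageCount:
    """Sketch of count 330: an LRU factor plus a scaled cost factor,
    oriented so that a higher total means 'prefer to keep'."""

    def __init__(self, a=1.0):
        self.lru_factor = 0.0   # overwritten with the clock value on access
        self.cost_factor = 0.0  # accumulates 1/N per clock cycle
        self.a = a              # scaling of the cost contribution

    def on_access(self):
        # A higher clock value represents more recent use.
        self.lru_factor = next(_access_clock)

    def on_cycle(self, pending_requests):
        # Fewer pending requests -> bigger increment -> costlier to replace.
        self.cost_factor += 1.0 / max(pending_requests, 1)

    def total(self):
        return self.lru_factor + self.a * self.cost_factor

# With both components oriented this way, the management evicts the page
# with the lowest total count.
counts = {"page_x": PageCount(), "page_y": PageCount()}
counts["page_x"].on_access()                         # recently used: keep
for _ in range(10):
    counts["page_y"].on_cycle(pending_requests=100)  # stale, cheap to replace
victim = min(counts, key=lambda p: counts[p].total())
print(victim)  # page_y
```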
[0049] Figure 4 is a flow diagram of an embodiment of a process for managing eviction from a memory device. Process 400 can be one example of a process for eviction management implemented in accordance with any embodiment of memory management herein. Process 400 illustrates one embodiment of how to measure the cost of a particular memory portion to enable cost-aware eviction and replacement.

[0050] In one embodiment, a memory controller receives a request for data and adds the request to a pending queue of the memory controller, 402. The memory controller can determine if the request is a cache hit, or if the request is for data that is already stored in memory, 404. If the request is a hit, 406 YES branch, in one embodiment, the memory controller can update the access history information for the memory portion, 408, and service the request and return the data, 410.

[0051] If the request is a miss, 406 NO branch, in one embodiment the memory controller can evict a memory portion from memory to make room for the requested portion to be loaded into memory. Thus, the requested memory portion can trigger eviction or replacement of a memory portion. In addition, the memory controller will access the requested data and can associate a count with the newly accessed memory portion for use in later determining an eviction candidate for a subsequent eviction request. For the requested memory portion, in one embodiment, the memory controller initializes a new cost count to zero, 412. Initializing a cost count to zero can include associating a cost count with the requested memory portion and resetting the value of the memory or table entry used for the cost count. In one embodiment, the memory controller can initialize the count to a non-zero value.

[0052] The memory controller accesses the memory portion from a higher level memory or from storage and stores it in the memory, 414. In one embodiment, the memory controller associates a cost count or a cost counter with the memory portion, 416. The memory controller can also associate the memory portion with the source agent that generated the request that caused the memory portion to be loaded.
In one embodiment, the memory controller increments the cost count or cost counter for each clock cycle that the memory portion is stored in the memory, 418.

[0053] For determining an eviction candidate, in one embodiment, the memory controller compares the counts of memory portions stored in the memory, 420. The counts or weights can include an access history factor and a cost-based factor in accordance with any embodiment described herein. In one embodiment, the memory controller identifies the memory portion with a lowest count as a replacement candidate, 422. It will be understood that the memory controller can alternatively be configured to identify a memory portion with the other extreme count (i.e., a highest count, or whatever extreme value corresponds to a lowest cost of eviction) as a candidate for eviction and replacement/swap. The memory controller can then evict the identified memory portion, 424. In one embodiment, the eviction of a memory portion from memory can occur prior to accessing a new portion to service or satisfy the request that caused the eviction trigger.

[0054] Figure 5 is a flow diagram of an embodiment of a process for selecting an eviction candidate. Process 500 can be one example of a process by memory management to select a candidate for replacement or swap in accordance with any embodiment described herein. An agent executing on a host executes an operation that results in a memory access, 502. The host generates a memory access request, which is received by the memory controller or memory management, 504. The memory management determines if the request results in a cache hit, 506. If the request results in a hit, 508 YES branch, the memory management can service the request and return the data to the agent, which will keep on executing, 502.

[0055] In one embodiment, if the request results in a miss or fault, 508 NO branch, the memory management triggers an eviction of data from the memory to free space to load the requested data, 510. In one embodiment, the memory management computes eviction counts for cached pages in response to the eviction trigger. Computing the eviction count can include computing a total weight for a page based on an access history or LRU count for the page adjusted by a cost factor for the associated agent, 512. In one embodiment, the memory management keeps a history count factor for each page, and cost factor information for each agent. The cost factor can then be accessed and added to a count for each page when determining which page to evict. In one embodiment, the memory management can first select among a predetermined number of candidates based on access history or LRU information alone, and then determine which of those candidates to evict based on cost. Thus, the eviction and replacement can be accomplished in multiple layers. The memory management can identify the most extreme eviction count (i.e., lowest or highest, depending on the system configuration), 514, and evict the page with the extreme count or weight, 516.
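End to end, the hit/miss handling of processes 400 and 500 can be sketched as below. The data structures (a set of resident pages, a dict of counts) and the lowest-count victim policy are illustrative stand-ins for the table and counters described above, not a definitive implementation:

```python
def handle_request(resident, counts, page, capacity):
    """Sketch of Figures 4 and 5: service hits; on a miss at capacity,
    evict the page with the most extreme (here lowest) eviction count,
    then load and initialize a count for the requested page."""
    if page in resident:
        counts[page] += 1          # stand-in for an access-history update
        return "hit"
    if len(resident) >= capacity:  # miss at capacity: eviction trigger
        victim = min(resident, key=lambda p: counts[p])
        resident.remove(victim)    # evict the lowest-count (lowest-cost) page
        del counts[victim]
    resident.add(page)             # load the requested page
    counts[page] = 0               # initialize its eviction count
    return "miss"

resident, counts = set(), {}
for p in ["a", "b", "a", "c", "d"]:  # capacity 3 forces one eviction at "d"
    print(p, handle_request(resident, counts, p, capacity=3))
```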
[0056] Figure 6 is a flow diagram of an embodiment of a process for managing an eviction count. Process 600 can be one example of a process to manage a count used by memory management to determine eviction or page replacement/page swap, in accordance with any embodiment described herein. In conjunction with processing a request for data, memory management adds a page to memory, 602. In one embodiment, the memory management associates the page with an agent executing on the host, 604. The associated agent is the agent whose data request caused the page to be loaded into memory. Associating the agent with the page can include storing information in a table, tagging the page, or using other metadata.

[0057] The memory management initializes a count for the page, where the count can include an access history count field and a cost count field, 606. The fields can be two different table entries for the page, for example. In one embodiment, the cost count field is associated with the agent (and thus shared with all pending pages for that agent), and added to the count when computed. The memory management can monitor the page and maintain a count for the page and other cached pages, 608.

[0058] If there is an access count event to update the access count field, 610 YES branch, the memory management can increment or otherwise update (e.g., overwrite) the access count field information, 612. An access event can include an access to the associated page. When there is no access count event, 610 NO branch, the memory management can continue to monitor for such events.

[0059] If there is a cost count event to update the cost count field, 614 YES branch, the memory management can increment or otherwise update (e.g., overwrite) the cost count field information, 616. A cost count event can include a timer or clock cycling or reaching a scheduled value where counts are updated. When there is no cost count event, 614 NO branch, the memory management can continue to monitor for such events.

[0060] In one embodiment, the memory management updates eviction counts for cached pages, including access count information and cost count information, 618. The memory management uses the eviction count information to determine which cached page to evict in response to an eviction trigger, 620. In one embodiment, the computation mechanisms for updating or incrementing count information and the computation mechanisms for determining eviction candidates are separate computation mechanisms.
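The two update paths of process 600 can be sketched as a small event handler; the event names and field layout here are assumptions for illustration only:

```python
def update_count(page_state, event, access_clock=None, pending=None):
    """Sketch of process 600: an access event (610) overwrites the access
    history field; a cost event (614), such as a clock cycle, increments
    the cost field by 1/N for the associated agent's N pending requests."""
    if event == "access":
        page_state["access"] = access_clock
    elif event == "cost_tick":
        page_state["cost"] += 1.0 / max(pending, 1)
    # The eviction count combines both fields when a candidate is needed.
    return page_state["access"] + page_state["cost"]

state = {"access": 0, "cost": 0.0}
update_count(state, "access", access_clock=42)
print(update_count(state, "cost_tick", pending=4))  # 42.25
```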
[0061] Figure 7 is a block diagram of an embodiment of a computing system in which cost-based eviction management can be implemented. System 700 represents a computing device in accordance with any embodiment described herein, and can be a laptop computer, a desktop computer, a server, a gaming or entertainment control system, a scanner, copier, printer, routing or switching device, or other electronic device. System 700 includes processor 720, which provides processing, operation management, and execution of instructions for system 700. Processor 720 can include any type of microprocessor, central processing unit (CPU), processing core, or other processing hardware to provide processing for system 700. Processor 720 controls the overall operation of system 700, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

[0062] Memory subsystem 730 represents the main memory of system 700, and provides temporary storage for code to be executed by processor 720, or data values to be used in executing a routine. Memory subsystem 730 can include one or more memory devices such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM), or other memory devices, or a combination of such devices. Memory subsystem 730 stores and hosts, among other things, operating system (OS) 736 to provide a software platform for execution of instructions in system 700. Additionally, other instructions 738 are stored and executed from memory subsystem 730 to provide the logic and the processing of system 700. OS 736 and instructions 738 are executed by processor 720. Memory subsystem 730 includes memory device 732, where it stores data, instructions, programs, or other items. In one embodiment, memory subsystem 730 includes memory controller 734, which generates and issues commands to memory device 732. It will be understood that memory controller 734 could be a physical part of processor 720.

[0063] Processor 720 and memory subsystem 730 are coupled to bus/bus system 710. Bus 710 is an abstraction that represents any one or more separate physical buses, communication lines/interfaces, and/or point-to-point connections, connected by appropriate bridges, adapters, and/or controllers. Therefore, bus 710 can include, for example, one or more of a system bus, a Peripheral Component Interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (commonly referred to as "Firewire"). The buses of bus 710 can also correspond to interfaces in network interface 750.

[0064] System 700 also includes one or more input/output (I/O) interface(s) 740, network interface 750, one or more internal mass storage device(s) 760, and peripheral interface 770 coupled to bus 710. I/O interface 740 can include one or more interface components through which a user interacts with system 700 (e.g., video, audio, and/or alphanumeric interfacing). Network interface 750 provides system 700 the ability to communicate with remote devices (e.g., servers, other computing devices) over one or more networks. Network interface 750 can include an Ethernet adapter, wireless interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces.

[0065] Storage 760 can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, or optical based disks, or a combination. Storage 760 holds code or instructions and data 762 in a persistent state (i.e., the value is retained despite interruption of power to system 700). Storage 760 can be generically considered to be a "memory," although memory 730 is the executing or operating memory to provide instructions to processor 720. Whereas storage 760 is nonvolatile, memory 730 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 700).

[0066] Peripheral interface 770 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 700. A dependent connection is one where system 700 provides the software and/or hardware platform on which operation executes, and with which a user interacts.

[0067] In one embodiment, memory subsystem 730 includes cost-based manager 780, which can be memory management in accordance with any embodiment described herein. In one embodiment, cost-based manager 780 is part of memory controller 734. Manager 780 keeps and computes a count or weight for each page or other memory portion stored in memory 732.
The weight or count includes cost information for each page, where the cost indicates a performance impact for replacing the page in memory. The cost information can include or can be combined with access history information for the page. Based on the count or weight including the cost-based information, manager 780 can select a candidate for eviction from memory 732.

[0068] Figure 8 is a block diagram of an embodiment of a mobile device in which cost-based eviction management can be implemented. Device 800 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wireless-enabled e-reader, wearable computing device, or other mobile device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in device 800.

[0069] Device 800 includes processor 810, which performs the primary processing operations of device 800. Processor 810 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by processor 810 include the execution of an operating platform or operating system on which applications and/or device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, and/or operations related to connecting device 800 to another device. The processing operations can also include operations related to audio I/O and/or display I/O.

[0070] In one embodiment, device 800 includes audio subsystem 820, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker and/or headphone output, as well as microphone input. Devices for such functions can be integrated into device 800, or connected to device 800. In one embodiment, a user interacts with device 800 by providing audio commands that are received and processed by processor 810.

[0071] Display subsystem 830 represents hardware (e.g., display devices) and software (e.g., drivers) components that provide a visual and/or tactile display for a user to interact with the computing device. Display subsystem 830 includes display interface 832, which includes the particular screen or hardware device used to provide a display to a user. In one embodiment, display interface 832 includes logic separate from processor 810 to perform at least some processing related to the display. In one embodiment, display subsystem 830 includes a touchscreen device that provides both output and input to a user. In one embodiment, display subsystem 830 includes a high definition (HD) display that provides an output to a user. High definition can refer to a display having a pixel density of approximately 100 PPI (pixels per inch) or greater, and can include formats such as full HD (e.g., 1080p), retina displays, 4K (ultra high definition or UHD), or others.

[0072] I/O controller 840 represents hardware devices and software components related to interaction with a user. I/O controller 840 can operate to manage hardware that is part of audio subsystem 820 and/or display subsystem 830. Additionally, I/O controller 840 illustrates a connection point for additional devices that connect to device 800, through which a user might interact with the system.
For example, devices that can be attached to device 800 might include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, or other I/O devices for use with specific applications such as card readers or other devices.

[0073] As mentioned above, I/O controller 840 can interact with audio subsystem 820 and/or display subsystem 830. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of device 800. Additionally, audio output can be provided instead of or in addition to display output. In another example, if display subsystem 830 includes a touchscreen, the display device also acts as an input device, which can be at least partially managed by I/O controller 840. There can also be additional buttons or switches on device 800 to provide I/O functions managed by I/O controller 840.

[0074] In one embodiment, I/O controller 840 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning system (GPS), or other hardware that can be included in device 800. The input can be part of direct user interaction, as well as providing environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features). In one embodiment, device 800 includes power management 850 that manages battery power usage, charging of the battery, and features related to power saving operation.

[0075] Memory subsystem 860 includes memory device(s) 862 for storing information in device 800. Memory subsystem 860 can include nonvolatile (state does not change if power to the memory device is interrupted) and/or volatile (state is indeterminate if power to the memory device is interrupted) memory devices. Memory 860 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of system 800. In one embodiment, memory subsystem 860 includes memory controller 864 (which could also be considered part of the control of system 800, and could potentially be considered part of processor 810). Memory controller 864 includes a scheduler to generate and issue commands to memory device 862.

[0076] Connectivity 870 includes hardware devices (e.g., wireless and/or wired connectors and communication hardware) and software components (e.g., drivers, protocol stacks) to enable device 800 to communicate with external devices. The external devices could be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices.

[0077] Connectivity 870 can include multiple different types of connectivity. To generalize, device 800 is illustrated with cellular connectivity 872 and wireless connectivity 874. Cellular connectivity 872 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (long term evolution - also referred to as "4G"), or other cellular service standards.
Wireless connectivity 874 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth), local area networks (such as WiFi), and/or wide area networks (such as WiMax), or other wireless communication. Wireless communication refers to the transfer of data through the use of modulated electromagnetic radiation through a non-solid medium. Wired communication occurs through a solid communication medium.

[0078] Peripheral connections 880 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It will be understood that device 800 could both be a peripheral device ("to" 882) to other computing devices, as well as have peripheral devices ("from" 884) connected to it. Device 800 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading and/or uploading, changing, synchronizing) content on device 800. Additionally, a docking connector can allow device 800 to connect to certain peripherals that allow device 800 to control content output, for example, to audiovisual or other systems.

[0079] In addition to a proprietary docking connector or other proprietary connection hardware, device 800 can make peripheral connections 880 via common or standards-based connectors. Common types can include a Universal Serial Bus (USB) connector (which can include any of a number of different hardware interfaces), DisplayPort including MiniDisplayPort (MDP), High Definition Multimedia Interface (HDMI), Firewire, or other types.

[0080] In one embodiment, memory subsystem 860 includes cost-based manager 866, which can be memory management in accordance with any embodiment described herein. In one embodiment, cost-based manager 866 is part of memory controller 864. Manager 866 keeps and computes a count or weight for each page or other memory portion stored in memory 862. The weight or count includes cost information for each page, where the cost indicates a performance impact for replacing the page in memory. The cost information can include or can be combined with access history information for the page. Based on the count or weight including the cost-based information, manager 866 can select a candidate for eviction from memory 862.

[0081] In one aspect, a method for managing eviction from a memory device includes: initializing a count for one of multiple memory portions in a memory device, including associating the count with a source agent that accesses the one memory portion; adjusting the count based on access to the one memory portion by the associated source agent; adjusting the count based on a dynamic cost factor for the associated source agent, where the dynamic cost factor represents a latency impact to performance of the source agent to replace the memory portion; and comparing the count to counts for others of the multiple portions to determine which memory portion to evict in response to an eviction trigger for the memory device.

[0082] In one embodiment, wherein the memory device comprises a main memory resource for a host system. In one embodiment, wherein the comparing comprises comparing with a memory controller device. In one embodiment, wherein initializing the count comprises initializing the count in response to receiving a request from a lower-level memory requesting data.
In one embodiment, wherein comparing the count further comprises identifying for eviction one of the multiple memory portions having a lowest cost. In one embodiment, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending for the associated source agent. In one embodiment, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor.

[0083] In one aspect, a memory management device includes: a queue to store requests for access to a memory device managed by the memory management device; an eviction table to store a weight associated with each of multiple memory portions of the memory device, each of the multiple memory portions having an associated source agent that generates requests for data stored in the memory portion, wherein each weight is factored based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and an eviction processor configured to initialize a count for one of the memory portions; adjust the count based on access to the one memory portion by the associated source agent; adjust the count based on a dynamic cost factor for the associated source agent; and compare the count to counts for others of the multiple memory portions to determine which memory portion to evict in response to an eviction trigger for the memory device.

[0084] In one embodiment, wherein the memory device comprises a DRAM (dynamic random access memory) resource for a host system. In one embodiment, wherein the eviction processor comprises a processor of a memory controller device. In one embodiment, wherein the DRAM is a highest level memory of a multilevel memory (MLM) system, wherein the eviction processor is to detect the eviction trigger in response to a page fault occurring in response to servicing a request from a cache of the MLM. In one embodiment, wherein the eviction processor is to identify the memory portion having a lowest cost to evict. In one embodiment, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending in the queue for the associated source agent.
In one embodiment, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor.

[0085] In one aspect, an electronic device with a memory subsystem includes: an SDRAM (synchronous dynamic random access memory) including a memory array to store multiple memory portions, each of the multiple memory portions having an associated source agent that generates requests for data stored in the SDRAM, wherein each weight is computed based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and a memory controller to control access to the SDRAM, the memory controller including a queue to store requests for access to the SDRAM; an eviction table to store a weight associated with each of multiple memory portions; and an eviction processor configured to initialize a count for one of the memory portions; adjust the count based on access to the one memory portion by the associated source agent; adjust the count based on a dynamic cost factor for the associated source agent; and compare the count to counts for others of the multiple memory portions to determine which memory portion to evict in response to an eviction trigger for the memory device; and a touchscreen display coupled to generate a display based on data accessed from the SDRAM.

[0086] In one embodiment, wherein the memory controller comprises a memory controller circuit integrated onto a host processor system on a chip (SoC). In one embodiment, wherein the SDRAM is a highest level memory of a multilevel memory (MLM) system, wherein the eviction processor is to detect the eviction trigger in response to a page fault occurring in response to servicing a request from a cache of the MLM. In one embodiment, wherein the eviction processor is to identify for eviction the memory portion having a lowest count. In one embodiment, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending in the queue for the associated source agent. In one embodiment, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor.

[0087] In one aspect, a method for managing eviction from a memory device includes: detecting an eviction trigger in a memory device, where the eviction trigger indicates one of multiple portions of memory should be removed from the memory device, each memory portion having an associated weight and an associated source agent that generates requests for data stored in the memory portion; identifying a memory portion having a most extreme weight, wherein each weight is computed based on access history for the memory portion and adjusted by a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and replacing the memory portion identified as having the most extreme weight with a memory portion that triggered the eviction.

[0088] In one embodiment, wherein the memory device comprises a main memory resource for a host system. In one embodiment, wherein detecting the eviction trigger comprises detecting the eviction trigger with a memory controller device. In one embodiment, wherein detecting the eviction trigger comprises receiving a request from a lower-level memory requesting data that causes a miss in the memory device.
In one embodiment, wherein identifying the memory portion having the most extreme weight comprises identifying the memory portion having a lowest cost to evict. In one embodiment, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending for the associated source agent. In one embodiment, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor. [0089] In one aspect, a memory management device includes: a queue to store requests for access to a memory device managed by the memory management device; an eviction table to store a weight associated with each of multiple memory portions of the memory device, each of the multiple memory portions having an associated source agent that generates requests for data stored in the memory portion, wherein each weight is factored based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and an eviction processor configured to detect an eviction trigger indicating one of the multiple memory portions should be removed from the memory device; identify a memory portion having a most extreme weight in the eviction table; and replace the memory portion identified as having the most extreme weight with a memory portion that triggered the eviction. [0090] In one embodiment, wherein the memory device comprises a DRAM (dynamic random access memory) resource for a host system. In one embodiment, wherein the eviction processor comprises a processor of a memory controller device. In one embodiment, wherein the DRAM is a highest level memory of a multilevel memory (MLM) system, wherein the eviction processor is to detect the eviction trigger in response to a page fault occurring in response to servicing a request from a cache of the MLM. In one embodiment, wherein the eviction processor is to identify the memory portion having a lowest cost to evict. In one embodiment, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending in the queue for the associated source agent.
In one embodiment, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor. [0091] In one aspect, an electronic device with a memory subsystem includes: an SDRAM (synchronous dynamic random access memory) including a memory array to store multiple memory portions, each of the multiple memory portions having an associated source agent that generates requests for data stored in the SDRAM, wherein each weight is computed based on access history for the memory portion as well as a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and a memory controller to control access to the SDRAM, the memory controller including a queue to store requests for access to the SDRAM; an eviction table to store a weight associated with each of multiple memory portions; and an eviction processor configured to detect an eviction trigger indicating one of the multiple memory portions should be removed from the SDRAM; identify a memory portion having a most extreme weight in the eviction table; and replace the memory portion identified as having the most extreme weight with a memory portion that triggered the eviction; and a touchscreen display coupled to generate a display based on data accessed from the SDRAM. [0092] In one embodiment, wherein the memory controller comprises a memory controller circuit integrated onto a host processor system on a chip (SoC). In one embodiment, wherein the cost factor includes a replacement cost factor 1/N added to a least recently used (LRU) factor, where N is a number of parallel requests currently pending in the queue for the associated source agent. In one embodiment, wherein the cost factor is dynamically adjustable by a scaling factor to provide more or less weight to the cost factor. In one embodiment, wherein the SDRAM is a highest level memory of a multilevel memory (MLM) system, wherein the eviction processor is to detect the eviction trigger in response to a page fault occurring in response to servicing a request from a cache of the MLM. In one embodiment, wherein the eviction processor is to identify the memory portion having a lowest cost to evict. [0093] In one aspect, an article of manufacture comprising a computer readable storage medium having content stored thereon, which when accessed causes a computing device to perform operations for managing eviction from a memory device, including: initializing a count for one of multiple memory portions in a memory device, including associating the count with a source agent that accesses the one memory portion; adjusting the count based on access to the one memory portion by the associated source agent; adjusting the count based on a dynamic cost factor for the associated source agent, where the dynamic cost factor represents a latency impact to performance of the source agent to replace the memory portion; and comparing the count to counts for others of the multiple portions to determine which memory portion to evict in response to an eviction trigger for the memory device.
Any embodiment described with respect to the method for managing eviction from a memory device can also apply to the article of manufacture. [0094] In one aspect, an apparatus for managing eviction from a memory device includes: means for initializing a count for one of multiple memory portions in a memory device, including associating the count with a source agent that accesses the one memory portion; means for adjusting the count based on access to the one memory portion by the associated source agent; means for adjusting the count based on a dynamic cost factor for the associated source agent, where the dynamic cost factor represents a latency impact to performance of the source agent to replace the memory portion; and means for comparing the count to counts for others of the multiple portions to determine which memory portion to evict in response to an eviction trigger for the memory device. Any embodiment described with respect to the method for managing eviction from a memory device can also apply to the apparatus. [0095] In one aspect, an article of manufacture comprising a computer readable storage medium having content stored thereon, which when accessed causes a computing device to perform operations for managing eviction from a memory device, comprising: detecting an eviction trigger in a memory device, where the eviction trigger indicates one of multiple portions of memory should be removed from the memory device, each memory portion having an associated weight and an associated source agent that generates requests for data stored in the memory portion; identifying a memory portion having a most extreme weight, wherein each weight is computed based on access history for the memory portion and adjusted by a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and replacing the memory portion identified as having the most extreme weight, with a memory portion that triggered the eviction. Any embodiment described with respect to the method for managing eviction from a memory device can also apply to the article of manufacture. [0096] In one aspect, an apparatus for managing eviction from a memory device includes: means for detecting an eviction trigger in a memory device, where the eviction trigger indicates one of multiple portions of memory should be removed from the memory device, each memory portion having an associated weight and an associated source agent that generates requests for data stored in the memory portion; means for identifying a memory portion having a most extreme weight, wherein each weight is computed based on access history for the memory portion and adjusted by a cost factor that indicates a latency impact on the associated source agent to replace the memory portion; and means for replacing the memory portion identified as having the most extreme weight, with a memory portion that triggered the eviction. Any embodiment described with respect to the method for managing eviction from a memory device can also apply to the apparatus. [0097] Flow diagrams as illustrated herein provide examples of sequences of various process actions. The flow diagrams can indicate operations to be executed by a software or firmware routine, as well as physical operations. In one embodiment, a flow diagram can illustrate the state of a finite state machine (FSM), which can be implemented in hardware and/or software.
Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated embodiments should be understood only as an example, and the process can be performed in a different order, and some actions can be performed in parallel. Additionally, one or more actions can be omitted in various embodiments; thus, not all actions are required in every embodiment. Other process flows are possible.[0098] To the extent various operations or functions are described herein, they can be described or defined as software code, instructions, configuration, and/or data. The content can be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content of the embodiments described herein can be provided via an article of manufacture with the content stored thereon, or via a method of operating a communication interface to send data via the communication interface. A machine readable storage medium can cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, etc., medium to communicate to another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.[0099] Various components described herein can be a means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application specific hardware, application specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, etc.[00100] Besides what is described herein, various modifications can be made to the disclosed embodiments and implementations of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive sense. The scope of the invention should be measured solely by reference to the claims that follow. |
A method (100) for assembling die packages includes attaching contacts (101) on a first side of a plurality of first die to substrate pads on a top surface of a composite carrier. The composite carrier includes a package substrate including at least one embedded metal layer having its bottom surface secured to a semiconductor wafer. The composite carrier minimizes effects of the coefficient of thermal expansion (CTE) mismatch between the die and the package substrate during assembly, reducing warpage of the die. After the attaching, the semiconductor wafer is removed (103) from the package substrate. Electrically conductive connectors are attached (104) to the bottom surface of the package substrate, and the package substrate is sawed (105) to form a plurality of singulated die packages. |
CLAIMS What is claimed is: 1. A method for assembling die packages, comprising: attaching contacts on a first side of a plurality of first die to substrate pads on a top surface of a composite carrier, wherein said composite carrier comprises a package substrate including at least one embedded metal layer having its bottom surface secured to a semiconductor wafer; after said attaching, removing said semiconductor wafer from said package substrate; attaching a plurality of electrically conductive connectors to said bottom surface of said package substrate, and sawing said package substrate to form a plurality of singulated die packages. 2. The method of claim 1, wherein said plurality of first die comprises die that include through-substrate vias; and said contacts include contacts to said through-substrate vias. 3. The method of claim 2, further comprising: thinning a second side of said plurality of die to expose said through-substrate vias to provide exposed through-substrate via areas, and attaching a plurality of singulated second die to through-substrate via contacts coupled to said exposed areas to form a plurality of die stacks on said package substrate. 4. A method for assembling stacked die packages, comprising: attaching contacts on a first side of a plurality of first die to substrate pads on a top surface of a composite carrier, wherein said composite carrier comprises a package substrate including at least one embedded metal layer having its bottom surface secured to a semiconductor wafer; attaching a plurality of singulated second die to said first die to form a plurality of die stacks on said package substrate; removing said semiconductor wafer from said package substrate; attaching a plurality of electrically conductive connectors to said bottom surface of said package substrate, and sawing said package substrate to form a plurality of singulated stacked die packages. 5. The method of claim 4, wherein said plurality of first die comprises dies with through-substrate vias; and said contacts include contacts to said through-substrate vias. 6. The method of claim 5, further comprising: thinning a second side of said plurality of dies with said through-substrate vias to expose said through-substrate vias to provide exposed through-substrate via areas, and attaching a plurality of singulated second die to said through-substrate via contacts coupled to said exposed areas to form a plurality of die stacks on said package substrate. 7.
A method for assembling stacked die packages, comprising: attaching a topside of a plurality of singulated through-substrate via die having embedded through-substrate vias including topside pads coupled to said through-substrate vias to substrate pads on a top surface of a composite carrier, said composite carrier comprising an organic substrate including at least one embedded metal layer having its bottom surface secured to a silicon wafer; thinning a bottomside of said plurality of singulated through-substrate via die to provide exposed through-substrate via areas; forming bottomside through-substrate via contacts on said exposed through-substrate via areas; attaching a plurality of singulated second die to said bottomside through-substrate via contacts to form a plurality of die stacks on said organic substrate; removing said silicon wafer from said organic substrate; attaching a plurality of electrically conductive connectors to said bottom surface of said organic substrate, and sawing through said organic substrate to form a plurality of singulated stacked die packages. 8. The method of claim 7, wherein said plurality of through-substrate via die are disposed on a through-substrate via wafer. 9. An electronic assembly, comprising: a composite carrier comprising an organic package substrate including at least one embedded metal layer having its bottom surface secured to a semiconductor wafer, and a plurality of first die having a thickness of 20 to 100 μm having their topside contacts attached to topside substrate pads on a top surface of said package substrate. 10. The electronic assembly of claim 9, wherein said plurality of first die comprise die that include through-substrate vias that have said topside contacts coupled to said through-substrate vias attached to topside substrate pads of said package substrate. 11. The electronic assembly of claim 10, further comprising a plurality of singulated second die attached to bottomside contacts that are coupled to said through-substrate vias. 12. The electronic assembly of claim 10, wherein said plurality of first die are disposed on a wafer. |
TCE COMPENSATION FOR IC PACKAGE SUBSTRATES FOR REDUCED DIE WARPAGE ASSEMBLY [0001] Disclosed embodiments relate to integrated circuit ("IC") packaging, and more particularly to die assembly. BACKGROUND [0002] As known in the art, the term "die bonding" or "die attach" describes the operation of attaching semiconductor die either to a package substrate or to some other substrate such as tape carrier for tape automated bonding. The die is first picked from a separated wafer or waffle tray, aligned to a target pad on the carrier or substrate, and then permanently attached, usually by a solder or epoxy bond. [0003] Die attach during assembly of IC die is generally performed at a temperature of at least 150 °C, and can be performed at temperatures of 375 °C or more for eutectic die attach. Assembly of very thin die (< 100 μm thick, e.g., 20 to 80 μm) to some package substrates, such as organic substrates, is known to be difficult due to the warpage of the die caused by the large coefficient of thermal expansion ("CTE") mismatch between the die and the package substrate. For example, in the case of a silicon die, the CTE of the die may be about 3 ppm/°C, and the CTE of the organic substrate may be about 20 ppm/°C or higher. This problem can be further aggravated by thin package substrates (e.g., about 100-200 μm thick) that may lack rigidity over temperature. [0004] Even minimal die warpage can cause alignment and resulting die attach problems in the case of small area and/or dense die contacts. Misaligned joints reduce contact area, which increases contact resistance of the joints and can even cause open-circuited contacts. For example, contacts associated with through-substrate vias (abbreviated as "TSVs" and referred to as through-silicon vias in the case of a silicon substrate) can be very small in area. Similarly, if other contact structures such as pillars (e.g., copper pillars) or studs (e.g., gold studs) become small enough and/or dense enough, warpage can become a significant problem. Warpage is also especially problematic for die stacks when one of the die has contacts on both sides, for example, involving flip chip package substrate connections on one side of the die and small area TSV connections on the other side of the die. [0005] One known method for addressing the above described warpage problem is using low CTE package substrates that provide improved CTE matching relative to the die. For example, ceramic substrates and some specialized polymer substrates may provide improved CTE matching with the die. However, low CTE package substrates are generally significantly more expensive as compared to conventional epoxy-glass resin-based (e.g., BT resin) organic substrates. What is needed is new packaging methodology for minimizing warpage and resulting effects of the CTE mismatch between the die and package substrate during assembly to allow use of conventional polymer substrates. SUMMARY [0006] Disclosed embodiments describe new packaging methodology for minimizing effects of CTE mismatch between the die and the package substrate during assembly that notably allows use of low cost conventional polymer substrates while providing reduced warpage of the die. A composite carrier comprising a package substrate including at least one embedded metal layer that has its bottom surface secured to a semiconductor wafer controls the CTE mismatch between the die and the substrate.
The Inventor has recognized that the CTE of the composite carrier will be largely driven by the CTE of the semiconductor carrier wafer, which is selected to match the CTE of the die, so that despite the CTE mismatch between the die and the package substrate, the package substrate will have little impact on ΔCTE-driven warpage during assembly. In one embodiment, the die and the wafer carrier can both comprise silicon. [0007] The package substrate is generally a polymer substrate, such as an organic substrate. In a typical embodiment, the package substrate has a CTE that is at least 10 ppm/°C different (typically higher) from the CTE of the die. [0008] The composite carrier can be provided prior to the start of the assembly process. Die attach processing is performed on the package substrate while the semiconductor wafer is attached thereto and acts as a carrier wafer. The semiconductor wafer may be removed later in the assembly flow after all die attachment is complete, at which time the need for flat die surfaces is generally no longer important. Following removal of the carrier wafer, a plurality of electrically conductive connectors (e.g., a BGA) can then be attached to the bottom surface of the package substrate. Sawing through the package substrate forms a plurality of die packages. [0009] Disclosed embodiments include assembly of single die packages and stacked die packages that include two or more stacked die. The die can include TSV die. BRIEF DESCRIPTION OF THE DRAWINGS [0010] FIG. 1 shows an example method for assembling die packages in accordance with principles of the invention. [0011] FIG. 2 shows an example method for assembling stacked die packages. [0012] FIG. 3 shows an example method for assembling stacked die packages that include die with through-substrate vias (TSVs). [0013] FIGS. 4A-4G are cross-sectional views illustrating steps in the example method of FIG. 3. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0014] FIG. 1 shows an example embodiment of a method 100 for assembling die packages. Step 101 comprises attaching contacts that are on a first side of a plurality of first die to substrate pads on a top surface of a composite carrier. The first die may be attached face down (i.e., flip chip) or face up (i.e., circuit side up) (e.g., for later wire bonding, or for through-substrate via (TSV) die). The coefficient of thermal expansion (CTE) difference between the die (e.g., about 3 ppm/°C for a silicon die) and the package substrate is generally at least 10 ppm/°C. In a typical embodiment, step 101 comprises die attach and underfill of a plurality of singulated die to a polymer package substrate via reflow soldering of solder bumps, copper pillars, gold studs, or other suitable attachment method. The plurality of first die can be provided in wafer form, instead of singulated die form, so that the wafer is attached to the package substrate. [0015] The composite carrier comprises a package substrate including one or more embedded metal layers secured at a bottom surface to a semiconductor wafer. The package substrate can be a polymer substrate, such as an organic substrate. The package substrate can also be a ceramic substrate or other substrate. The package substrate can be a thin package substrate, such as an organic substrate that has a thickness of < 200 μm, such as about 100 to 200 μm.
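For a sense of scale, and purely as an illustrative estimate rather than a figure from the application, the representative CTE values cited in the background (about 3 ppm/°C for a silicon die, about 20 ppm/°C for an organic substrate) together with an assumed die-attach excursion from 250 °C down to 25 °C give an unconstrained thermal mismatch strain of roughly

```latex
\varepsilon_{\mathrm{mismatch}} = \Delta\mathrm{CTE}\cdot\Delta T
  \approx (20 - 3)\times 10^{-6}\,^{\circ}\mathrm{C}^{-1}\times(250 - 25)\,^{\circ}\mathrm{C}
  \approx 3.8\times 10^{-3}\ (\approx 0.38\,\%)
```

A strain of this order acting on a die thinned to 20 to 80 μm is what drives the warpage discussed above; the composite carrier mitigates it because the CTE of the carrier wafer, rather than that of the thin package substrate, dominates the stack during die attach.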
As noted above, the CTE of the composite carrier will be largely driven by the CTE of the semiconductor carrier wafer, which is selected to match the CTE of the die. Accordingly, despite the CTE mismatch between the die and the package substrate, the package substrate will have little impact on ΔCTE-driven warpage during assembly. [0016] Step 102 comprises an optional overmolding step that can comprise overmolding with an appropriate material (e.g., mold compound, adhesive). Step 103 comprises removing the semiconductor carrier wafer from the package substrate. Release methods can include thermal, solvent or laser aided methods. Step 104 comprises attaching a plurality of electrically conductive connectors (e.g., a ball grid array (BGA)) to the bottom surface of the package substrate. Step 105 comprises sawing through the package substrate to form a plurality of singulated die packages. [0017] FIG. 2 shows an example method 200 for assembling stacked die packages, according to a disclosed embodiment. Step 201 comprises attaching contacts on a first side of a plurality of first die to substrate pads on a top surface of a composite carrier. The composite carrier comprises a package substrate including at least one embedded metal layer having its bottom surface secured to a semiconductor wafer. In a typical embodiment, step 201 comprises die attach and underfill of a plurality of singulated first die to a polymer package substrate via reflow soldering of solder bumps, copper pillars, gold studs, or other suitable attachment method. As described above, the plurality of first die can be provided in wafer form so that the wafer is attached to the package substrate. [0018] In step 202 a plurality of singulated second die are attached to the first die to form a plurality of die stacks on the package substrate. In a typical embodiment, singulated second die are attached using soldering or copper bonding, and are then underfilled. [0019] Step 203 comprises an optional overmolding step that can comprise overmolding with an appropriate material (e.g., mold compound, adhesive). Step 204 comprises removing the semiconductor carrier wafer from the package substrate. As described above, release methods can include thermal, solvent or laser aided methods. Step 205 comprises attaching a plurality of electrically conductive connectors (e.g., BGA) to the bottom surface of the package substrate. Step 206 comprises sawing through the package substrate to form a plurality of singulated stacked die packages. [0020] FIG. 3 shows an example method 300 for assembling stacked die packages that include TSV die, according to a disclosed embodiment. Step 301 comprises attaching a topside of a plurality of first TSV die having embedded TSVs including topside pads coupled to substrate pads on a top surface of a composite carrier. In a typical embodiment, step 301 comprises die attach and underfill of singulated first TSV die to a polymer package substrate via reflow soldering of solder bumps, copper pillars, or other suitable attachment method. The plurality of first TSV die can be provided in wafer form, referred to herein as a TSV wafer. [0021] In step 302 the plurality of first TSV die are thinned to expose the TSVs to provide exposed bottomside TSV areas. Methods for thinning can include backgrind, chemical mechanical polishing (CMP), and/or chemical etch. Bottomside TSV contacts to the exposed TSV areas can then be formed.
Step 303 comprises attaching a plurality of singulated second die to the bottomside TSV contacts of the first TSV die to form a plurality of die stacks on the package substrate. In a typical embodiment, singulated second die are attached using soldering or copper bonding, and are then underfilled. [0022] Step 304 comprises an optional overmolding step that can comprise overmolding with an appropriate material (e.g., mold compound, adhesive). Step 305 comprises removing the semiconductor carrier wafer from the package substrate. As described above, release methods can include thermal, solvent or laser aided methods. Step 306 comprises attaching a plurality of electrically conductive connectors (e.g., BGA) to the bottom surface of the package substrate. Step 307 comprises sawing through the package substrate to form a plurality of singulated stacked die packages. [0023] FIGS. 4A-4G show successive cross sectional depictions that result from steps in the example method described relative to FIG. 3. FIG. 4A is a cross sectional depiction following die attach and underfill of singulated TSV die (shown as TSV Die 1) to a multi-layer substrate 201 that is adhered to a semiconductor wafer 202 (e.g., a silicon wafer), which together constitute composite carrier 205. TSV Die 1 are shown flip chip attached. Topside pads 206 of TSV Die 1 are shown coupled to substrate pads 207 on the package substrate 201. TSV Die 1 are generally at least 500 μm thick. [0024] FIG. 4B is a cross sectional depiction showing an electronic assembly 400 following thinning of the bottomside of the TSV Die 1 to form a thinned TSV die 410 by an appropriate method, such as backgrind, CMP, and/or substrate (e.g., silicon) etch to expose embedded TSVs 215. The thinned TSV die 410 are generally < 150 μm thick, typically 20 to 80 μm thick. TSV contact pads 211 (e.g., copper pads) are shown on the exposed portion of TSVs 215. At least a portion of the TSVs 215 are coupled to the topside pads 206. FIG. 4C is a cross sectional depiction showing an electronic assembly 450 following die attach and underfill of singulated second die (shown as Die 2) to thinned TSV die 410 via a suitable method such as soldering or copper bonding. FIG. 4D is a cross sectional depiction following overmolding with an appropriate material 425 such as mold compound or an adhesive. FIG. 4E is a cross sectional depiction following removal of the semiconductor wafer 202 from the bottom of the polymer package substrate 201. In one embodiment Die 2 is a memory die and TSV Die 1 is a processor die. Although not shown, additional die may be stacked on Die 2. [0025] FIG. 4F is a cross sectional depiction following attaching of BGA package solder balls 218 to the package substrate 201. FIG. 4G is a cross sectional depiction following sawing through the overmold 425 and package substrate 201 to singulate the stacked die packages. [0026] Although the composite carrier has been described above as comprising a package substrate on a semiconductor wafer, the package substrate can comprise entirely the semiconductor (e.g., silicon, to match the semiconductor die) to achieve the same controlled warpage during the assembly process. [0027] The active circuitry formed on the top semiconductor surface comprises circuit elements that generally include transistors, diodes, capacitors, and resistors, as well as signal lines and other electrical conductors that interconnect these various circuit elements.
[0028] Disclosed embodiments can be integrated into a variety of process flows to form a variety of devices and related products. The semiconductor substrates may include various elements therein and/or layers thereon. These can include barrier layers, other dielectric layers, device structures, active elements and passive elements including source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive vias, etc. Moreover, disclosed embodiments can be used in a variety of processes including bipolar, CMOS, BiCMOS and MEMS processes. [0029] Embodiments having different combinations of one or more of the features or steps described in the context of example embodiments having all or just some of such features or steps are intended to be covered hereby. Those skilled in the art will appreciate that many other embodiments and variations are also possible within the scope of the claimed invention. |
A method of providing a halo implant region in a substrate of a MOS device having a gate electrode thereon and defining source/drain regions, a MOS device fabricated according to the above method, and a system comprising the MOS device. The method comprises: defining undercut recesses in the substrate at the source/drain regions thereof, the undercut recesses extending beneath the gate electrode; creating a halo implant region beneath the gate electrode between the recesses; and providing raised source/drain structures in the undercut recesses after creating the halo implant region. |
WHAT IS CLAIMED IS: 1. A method of providing a halo implant region in a substrate of a MOS device having a gate electrode thereon and defining source/drain regions, the method comprising: defining undercut recesses in the substrate at the source/drain regions thereof, the undercut recesses extending beneath the gate electrode; creating a halo implant region beneath the gate electrode between the recesses; and providing raised source/drain structures in the undercut recesses after creating the halo implant region. 2. The method of claim 1, wherein defining undercut recesses comprises etching the substrate at the source/drain regions. 3. The method of claim 1, wherein the undercut recesses have a depth ranging from about 10 nm to about 50 nm. 4. The method of claim 1, wherein the undercut recesses have a depth ranging from about 60 nm to about 90 nm. 5. The method of claim 1, wherein an extent of undercut of the undercut recesses ranges from about 0 nm to about 40 nm. 6. The method of claim 5, wherein an extent of undercut of the undercut recesses ranges from about 20 nm to about 25 nm. 7. The method of claim 1, wherein creating the halo implant region comprises effecting tilt-angle implantation of dopants directed toward the recesses. 8. The method of claim 7, wherein effecting tilt-angle implantation comprises tilt-angle implanting at an angle ranging from about 20 degrees to about 50 degrees. 9. The method of claim 8, wherein effecting tilt-angle implantation comprises tilt-angle implanting at an angle ranging from about 30 degrees to about 40 degrees. 10. The method of claim 7, wherein tilt-angle implantation comprises tilt-angle implanting at an implantation energy level between about 5 keV and about 60 keV. 11. The method of claim 7, wherein effecting tilt-angle implantation comprises tilt-angle implanting n-type dopants selected from the group consisting of arsenic, phosphorus and antimony, or p-type dopants selected from the group consisting of boron and indium. 12. The method of claim 7, wherein effecting tilt-angle implantation comprises tilt-angle implanting dopants in concentrations ranging from 1×10^13 atoms/cm^3 to about 5×10^14 atoms/cm^3. 13. The method of claim 7, wherein effecting tilt-angle implantation comprises tilt-angle implanting dopants in concentrations ranging from about 2×10^13 atoms/cm^3 to about 5×10^13 atoms/cm^3. 14. The method of claim 7, wherein effecting tilt-angle implantation comprises tilt-angle implanting dopants identical to dopants used to create a well of the MOS device. 15. The method of claim 1, wherein providing raised source/drain structures comprises effecting epitaxial deposition of the raised source/drain structures. 16. The method of claim 15, wherein effecting epitaxial deposition comprises effecting a low temperature selective epitaxial deposition of selectively doped silicon to provide in-situ doped raised source/drain structures. 17.
A method of providing a MOS device, comprising: providing a partially fabricated transistor structure including a substrate and a gate electrode disposed on the substrate; defining undercut recesses in the substrate at the source/drain regions thereof, the undercut recesses extending beneath the gate electrode; creating a halo implant region beneath the gate electrode between the recesses; providing raised source/drain structures in the undercut recesses after creating the halo implant region; and utilizing CMOS flow to complete fabrication of the MOS device after providing raised source/drain structures. 18. The method of claim 17, wherein defining undercut recesses comprises etching the substrate at the source/drain regions. 19. The method of claim 17, wherein creating the halo implant region comprises effecting tilt-angle implantation of dopants directed toward the recesses. 20. The method of claim 17, wherein providing raised source/drain structures comprises effecting epitaxial deposition of the raised source/drain structures. 21. The method of claim 20, wherein effecting epitaxial deposition comprises effecting a low temperature selective epitaxial deposition of selectively doped silicon to provide in-situ doped raised source/drain structures. 22. A MOS device comprising: a semiconductor substrate; a gate electrode disposed on the semiconductor substrate, the semiconductor substrate further defining undercut recesses extending beneath the gate electrode at each side of the gate electrode; a halo implant region disposed beneath the gate electrode between the recesses; and raised source/drain structures disposed in the recesses at each side of the gate electrode. 23. The MOS device of claim 22, wherein the undercut recesses have a depth ranging from about 10 nm to about 50 nm. 24. The MOS device of claim 22, wherein an extent of undercut of the undercut recesses ranges from about 0 nm to about 40 nm. 25. The MOS device of claim 22, wherein a dopant concentration of the halo implant region ranges from about 1×10^18 atoms/cm^3 to about 1×10^19 atoms/cm^3. 26. The MOS device of claim 22, wherein dopants in the halo implant region are n-type dopants selected from the group consisting of arsenic, phosphorus and antimony, or p-type dopants selected from the group consisting of boron and indium. 27. The MOS device of claim 22, wherein dopants in the halo implant region are dopants of species identical to dopants used to create a well of the MOS device. 28. A system comprising: an electronic assembly including an integrated circuit having a MOS device, the MOS device comprising: a semiconductor substrate; a gate electrode disposed on the semiconductor substrate, the semiconductor substrate further defining undercut recesses extending beneath the gate electrode at each side of the gate electrode; a halo implant region disposed beneath the gate electrode between the recesses; and raised source/drain structures disposed in the recesses at each side of the gate electrode; and a graphics processor coupled to the electronic assembly. 29. The system of claim 28, wherein the raised source/drain structures are epitaxial structures. |
IMPROVING SHORT CHANNEL EFFECT OF MOS DEVICES BY RETROGRADE WELL ENGINEERING USING TILTED DOPANT IMPLANTATION INTO RECESSED SOURCE/DRAIN REGIONS FIELD [0001] Embodiments of the present invention relate to the manufacture of semiconductor devices, and, in particular, to methods of improving short channel effects in MOS devices, and to MOS devices made according to such methods. BACKGROUND [0002] Conventionally, the reduction of undesirable short channel effects in MOS devices has been accomplished by using halo implantation to increase the amount of doping in the MOS wells in order to sustain smaller gate length when the device is in operation. Halo implantation leads to a non-uniform doping of the well, that is, to higher doping around the edges of the MOS gate. Halo implantation will reinforce the well concentration, in this way displacing the source/well and drain/well junctions away from the edges of the gate, thus allowing readier control of the leakage current when the gate length is reduced. A disadvantage of prior art methods involving halo implantation is that they lead to a degradation in the mobility of carriers, and consequently of the drive current of the MOS device. BRIEF DESCRIPTION OF THE DRAWINGS [0003] Embodiments of the present invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which: [0004] Fig. 1a is a schematic cross-sectional side-elevational view of a transistor structure partially fabricated based on standard CMOS flow up to spacer formation; [0005] Fig. 1b is a schematic cross-sectional side-elevational view of the partially fabricated transistor structure of Fig. 1a, exhibiting undercut recesses according to an embodiment of the present invention; [0006] Fig. 1c is a schematic cross-sectional side-elevational view of the partially fabricated transistor structure of Fig. 1b, showing the structure as undergoing tilt-angle implantation according to an embodiment of the present invention; [0007] Fig. 1d is a schematic cross-sectional side-elevational view of the partially fabricated transistor structure of Fig. 1c, exhibiting a halo implant region underneath the gate electrode of the partially fabricated transistor structure according to an embodiment of the present invention; [0008] Fig. 2 is a flow diagram of a method of providing a retrograde well profile in a MOS device according to an embodiment of the present invention; [0009] Fig. 3a is a schematic cross-sectional side-elevational view of a portion of a partially fabricated transistor structure exhibiting an undercut recess prior to tilt-angle implantation according to an embodiment of the present invention; [0010] Fig. 3b is a schematic cross-sectional side-elevational view of the portion of the partially fabricated transistor structure of Fig. 3a, exhibiting a retrograde well profile according to an embodiment of the present invention; [0011] Fig. 4 is a graph plotting dopant concentration versus depth before and after tilt-angle implantation for the partially fabricated transistor structure of Figs. 3a and 3b; [0012] Fig. 5 is a graph plotting threshold voltage versus gate length for a given leakage target for a MOS device of the prior art and for a MOS device fabricated according to an embodiment of the present invention; and [0013] Fig.
6 is a schematic diagram depicting a system incorporating a MOS device fabricated according to embodiments of the present invention. DETAILED DESCRIPTION [0014] A method of providing a halo implant region in a MOS device, a MOS device exhibiting a halo implant region, and a system incorporating a MOS device exhibiting a halo implant region are disclosed herein. Embodiments of the present invention advantageously allow the fabrication of MOS devices, such as, for example, sub 100 nanometer MOS devices, which exhibit improved short channel effects as compared with MOS devices of the prior art. [0015] Various aspects of the illustrative embodiments will be described using terms commonly employed by those skilled in the art to convey the substance of their work to others skilled in the art. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced with only some of the described aspects. For purposes of explanation, specific numbers and configurations are set forth in order to provide a thorough understanding of the illustrative embodiments. However, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without the specific details provided herein. In other instances, well-known features are omitted or simplified in order not to obscure the illustrative embodiments. [0016] Various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding embodiments of the present invention; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. [0017] The phrase "embodiment" is used repeatedly. The phrase generally does not refer to the same embodiment; however, it may. The terms "comprising", "having" and "including" are synonymous, unless the context dictates otherwise. [0018] Figs. 1a-1d illustrate, by way of example, transistor structures in various stages of fabrication of a MOS device according to an embodiment of the present invention. [0019] A partially fabricated transistor structure 10 is shown in an initial stage of fabrication at Figure 1a, where transistor structure 10 includes a gate electrode 12 disposed on a surface of a semiconductor substrate 14 in which shallow isolation trenches 16, marked "ST" on the figures, have been created. By "partially fabricated transistor structure," what is meant in the context of the present description is a transistor structure in an intermediate stage of fabrication having at least a gate electrode (including gate electrode spacers), a substrate doped to define either an n-well or a p-well disposed beneath the gate electrode, and source/drain extensions. Referring back to Fig. 1a, Source/Drain or S/D regions 20 are provided on the substrate at each side of the gate electrode 12. The S/D regions correspond to regions where raised S/D structures are to be eventually deposited. Substrate 14 may be part of a test chip on a starting p-type Si substrate where the MOS device being fabricated is a PMOS device, or on a starting n-type Si substrate where the MOS device being fabricated is an NMOS device. Fig. 1a shows partially fabricated transistor structure 10 after standard CMOS flow through the definition of spacers 18. [0020] Referring next to Fig.
1b, partially fabricated transistor structure 10 is shown at an intermediate stage of fabrication according to embodiments of the present invention, in which S/D regions 20 previously shown in Fig. 1a have been selectively removed, as would be within the knowledge of a person skilled in the art. A key feature according to embodiments of the present invention is to extend the etched regions close enough to the edge of the gate, so that a lower energy can be used to implant the dopant beneath the channel in order to obtain a retrograde well. The selective removal of S/D regions 20 results in the formation of undercut recesses 22. In the instant description, an "undercut recess" refers to a recess that extends both in a direction orthogonal to a surface of the substrate (corresponding to a depth of the recess), and, in addition, in a direction parallel to a surface of the substrate and extending beneath the spacers (corresponding to an extent of undercut of the recess). According to embodiments of the present invention, the extent of selective removal of the S/D regions may correspond to a removal depth 22' ranging from about 10 nm to about 150 nm, and preferably from about 60 nm to about 90 nm, and an extent of undercut 22" ranging from about 0 nm to about 40 nm, and preferably from about 20 nm to about 25 nm. According to embodiments of the present invention, it is not necessary for the recesses to define the shapes shown in the exemplary Figs. 1b-1d. Embodiments of the present invention encompass within their scope the formation of recesses of any shape defining a depth and an extent of undercut as defined above. Preferably, according to embodiments of the present invention, selective removal of S/D regions 20 takes place by using any of the well known etching techniques, such as, for example, SF6, NF3, Cl2 or wet etches, or other types of etching techniques as would be within the knowledge of a person skilled in the art. The choice between different conventional etch techniques would be dictated by the desired shape of the recess, and would thus impact, but only to a small extent, MOS performance, to the extent that the shape of the recess would modulate how the dopants of the tilted implant would distribute into the silicon. [0021] As depicted in Fig. 1c, partially fabricated transistor structure 10 is shown at a subsequent, intermediate stage of fabrication according to embodiments of the present invention. In particular, Fig. 1c shows a tilt-angle implantation 24 of dopants of the same type as dopants of the well (i.e., n-type or p-type) toward recesses 22 to form a localized halo implant region 26 beneath the gate electrode between the recesses 22. By "beneath the gate electrode," what is meant in the context of embodiments of the present invention is a location that is at least partially beneath the gate electrode. By "between the recesses," what is meant in the context of embodiments of the present invention is a location that is at least partially between the recesses. Tilt-angle implantation 24 according to embodiments of the present invention may be performed using conventional tilt-angle implantation techniques. Furthermore, according to embodiments of the present invention, a range of tilt angles α (measured with respect to an axis orthogonal to the surface of the substrate, and depicted as a dotted line Z-Z in Fig.
1c) suitable for tilt-angle implantation may include angles between about 20 degrees and about 50 degrees and, preferably, angles between about 30 degrees and about 40 degrees. The dopants used for tilt-angle implantation according to embodiments of the present invention may include, by way of example, n-type dopants such as arsenic, phosphorus or antimony, or p-type dopants such as boron or indium, as a function of the type of MOS device being fabricated. According to embodiments of the present invention, species used in tilt-angle implantation 24 may be the same as the dopants used to implant the wells of the transistor device 10, or, in the alternative, they may include other species, such as, by way of example, Ge, F or C. Ge, F and C are in column IV of the periodic table, and, as such, do not lead to a doping of the wells. However, Ge, F and C are known to suppress the diffusion of other species, such as boron or phosphorus, into the silicon from the raised S/D regions. Thus, Ge, F and C, or similar species, may be used according to an embodiment of the present invention to prevent the species implants of the S/D regions from overrunning the device. That is, Ge, F, C or similar species prevent the dopants of the S/D regions from diffusing to the extent that they would push the S/well and D/well junctions far toward the middle portion of the device. According to one embodiment, halo implant region 26 may be formed by way of tilt-angle implantation with dopant doses ranging from about 1×10^13 atoms/cm^3 to about 5×10^14 atoms/cm^3, and preferably from about 2×10^13 atoms/cm^3 to about 5×10^13 atoms/cm^3. The halo implant region 26 is implanted at a predetermined depth of interest 26' below the gate electrode 12. The depth of interest 26', according to embodiments of the present invention, may be between about 10 nm and about 60 nm, and may further be achieved using an implantation energy ranging from about 5 keV to about 60 keV. For the tilt-angle implantation according to embodiments of the present invention, it must be kept in mind that if the tilt angle is large (for example, above about 40 degrees), and the spacing between adjacent components of the MOS device is small (for example, about 100 nm), there might be some shadowing from the gates. [0022] Referring next to Fig. 1d, according to embodiments of the present invention, raised S/D structures 30 are formed in the recesses 22 to a depth of 30' and to a height of 30", substantially filling the recesses 22. According to embodiments of the present invention, the thickness of the raised S/D structures, that is, the sum of depth 30' and height 30", may range from about 40 nm to about 300 nm. The raised S/D structures may be formed using any of the well known techniques in the art, such as, for example, selective epitaxial growth. Epitaxial deposition may, in accordance with an embodiment of the invention, be a low temperature selective epitaxial deposition (LT-SE) of in-situ doped silicon, such as, for example, Si/SiGe and boron, or Si/SiGe and As, to provide in-situ doped S/D structures 30 as shown. Halo implant region 26 can advantageously suppress the diffusion of the dopant in the raised S/D structures 30 into the substrate. [0023] After formation of the S/D structures 30 as shown by way of example in Fig. 1d, standard CMOS flow may be used in order to complete fabrication of a MOS device according to embodiments of the present invention. [0024] Fig.
2 illustrates, by way of example, a flow diagram of a method of providing a MOS device having a retrograde well profile according to embodiments of the present invention. At 1001, the method includes defining undercut recesses in the substrate by removing S/D regions from a substrate of a partially fabricated transistor structure. At 1002, the method includes providing a halo implant region beneath the gate electrode by tilt-angle implanting dopants directed toward undercut portions of the recesses. At 1003, the method includes providing raised source/drain structures in the undercut recesses. Embodiments of the method according to Fig. 2 may be employed by way of example to form corresponding stages of the partially fabricated transistor structures shown in Figs. 1a-1d, other configurations of partially fabricated transistor structures being within the realm of embodiments of the present invention. Thus, 1001 could, by way of example, result in the profile shown in Fig. 1b; 1002 could, by way of example, result in the profile shown in Fig. 1c; and 1003 could, by way of example, result in the profile shown in Fig. 1d, other profiles being within the scope of the present invention as readily recognizable by one skilled in the art. [0025] Referring next to Figs. 3a and 3b, a cross sectional profile obtained by simulation of a portion 200 of a partially fabricated transistor structure 210 after spacer definition is shown according to an embodiment of the present invention. The X and Y axes correspond to dimensions in microns along the length and height (or depth) of the shown portion 200. Portion 200 includes a portion of a gate electrode 212 including one of two spacers 218. One of two undercut recesses 222 is also prominently shown. A dopant concentration legend bar 232 is provided on the right of portion 200. [0026] Referring to Fig. 3a, the partially fabricated transistor structure 210 in that figure is at the same stage of its fabrication as, for example, partially fabricated transistor structure 10 shown in Fig. 1b and described above. Legend bar 232 at the right of Fig. 3a allows an evaluation of a distribution of dopants already present in bulk regions 234 of portion 200 and in parts of the gate electrode 212, the concentration of dopants in bulk regions 234 shown in Fig. 3a representing the already doped well regions of the portion 200. [0027] Referring next to Fig. 3b, portion 200 of partially fabricated transistor structure 210 previously shown in Fig. 3a is depicted after tilt-angle implantation according to an embodiment of the present invention. The partially fabricated transistor structure 210 in Fig. 3b is at the same stage of its fabrication as, for example, partially fabricated transistor structure 10 shown in Fig. 1c and described above. As shown in Fig. 3b, tilt-angle implantation, as depicted by the three bold arrows in Fig. 3b, is directed toward the undercut recess 222, while the spacers about the gate electrode serve as a mask that prevents the dopant from directly reaching the channel. The tilt-angle implantation shown occurs at an angle of about 40 degrees and results in the formation of halo implant region 226 beneath the gate electrode 212 as shown. In the shown figures, because of the relatively small size of the gate, the halo regions from the S and D sides have merged together and resulted in a peak halo concentration in the middle of the device.
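To make the shadowing concern noted earlier concrete, the following back-of-the-envelope geometric check may help; the 100 nm gate-stack height is an assumed value not given in the description, and the snippet is illustrative only:

```python
# Hypothetical geometry check, illustrative only. A neighboring gate
# stack of height h casts a lateral implant "shadow" of roughly
# h * tan(alpha) at tilt angle alpha; when that shadow approaches the
# ~100 nm spacing cited above, the tilted beam may fail to reach the
# undercut recess.
import math

def implant_shadow_nm(stack_height_nm: float, tilt_deg: float) -> float:
    """Lateral shadow length cast by a gate stack at the given tilt angle."""
    return stack_height_nm * math.tan(math.radians(tilt_deg))

# Assumed 100 nm gate-stack height (not from the application):
for tilt_deg in (20, 30, 40, 50):
    print(f"{tilt_deg} deg -> shadow ~ {implant_shadow_nm(100, tilt_deg):.0f} nm")
# Prints roughly 36, 58, 84 and 119 nm: at the upper end of the tilt
# range the shadow is comparable to a 100 nm device spacing, consistent
# with the shadowing caution above.
```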
Advantageously, most of the dopants of the tilt-angle implantation do not penetrate into the channel region, such that there is a substantial improvement in the surface properties of a MOS device formed from the partially fabricated transistor structure 210, such as threshold voltage, which increases at a much smaller rate as compared with the prior art. Implanted dopants not screened by the spacer significantly reinforce dopant concentrations in the well deeper into the bulk regions of the substrate. [0028] Referring next to Fig. 4, a graph of dopant concentration versus depth is shown for portion 200 of the partially fabricated transistor structure of Fig. 3b. In Fig. 4, dopant concentrations are plotted starting at a depth slightly above point A, along a cut-line extending toward the well and away from the gate electrode in the direction of the Y axis. Point A corresponds to zero depth at the substrate/gate electrode interface at a central portion of the gate electrode, and the cut-line used to generate Fig. 4 starts at about 0.101413 micron above point A as stated on the graph of Fig. 4. As shown clearly in Fig. 4, a MOS device fabricated according to the embodiment of the present invention shown in Figs. 3a and 3b exhibits higher dopant concentrations overall along a depth of the MOS well, and a higher peak concentration corresponding to a halo implant or retrograde well region, advantageously resulting in an improved retrograde well profile, and thus in improved short-channel effects of the resulting MOS device. [0029] Fig. 5 is a graph of threshold voltage VTP in Volts plotted versus gate length FCCD in nanometers for a given leakage target (as indicated by "@opt" in Fig. 5) between about 100 and about 200 nA/μm (the leakage target having been normalized per unit width in a direction perpendicular to the views shown of the MOS devices depicted herein) for a MOS device fabricated according to an embodiment of the present invention. As suggested in Fig. 5, for the embodiments of the present invention that would yield the VTP versus FCCD curve shown in the figure, and for a given leakage target, at the same VTP, gate lengths are appreciably smaller. By way of example, for a VTP of about 0.45 Volts, embodiments of the present invention can support a gate length that is about 2 nanometers smaller than a gate length of an otherwise identical MOS device not including a retrograde well profile according to embodiments of the present invention. As mentioned previously, smaller gate lengths are advantageously possible according to embodiments of the present invention by virtue of deeper subsurface doping of MOS wells resulting from halo implantation. [0030] Referring to Fig. 6, there is illustrated one of many possible systems 90 in which a MOS device 101 formed according to embodiments of the present invention may be used. In one embodiment, the electronic assembly 100 may include a microprocessor. In an alternate embodiment, the electronic assembly 100 may include an application specific IC (ASIC). Integrated circuits found in chipsets (e.g., graphics, sound, and control chipsets) may also be packaged in accordance with embodiments of this invention. [0031] For the embodiment depicted by Fig. 6, the system 90 may also include a main memory 102, a graphics processor 104, a mass storage device 106, and/or an input/output module 108 coupled to each other by way of a bus 110, as shown.
Examples of the memory 102 include but are not limited to static random access memory (SRAM) and dynamic random access memory (DRAM). Examples of the mass storage device 106 include but are not limited to a hard disk drive, a compact disk drive (CD), a digital versatile disk drive (DVD), and so forth. Examples of the input/output module 108 include but are not limited to a keyboard, cursor control arrangements, a display, a network interface, and so forth. Examples of the bus 110 include but are not limited to a peripheral component interconnect (PCI) bus, an Industry Standard Architecture (ISA) bus, and so forth. In various embodiments, the system 90 may be a wireless mobile phone, a personal digital assistant, a pocket PC, a tablet PC, a notebook PC, a desktop computer, a set-top box, a media-center PC, a DVD player, or a server.[0032] Advantageously, tilt-angle implantation according to embodiments of the present invention results in the formation of retrograde well profiles at the halo implant region 26, that is, results in the formation of higher well dopant concentrations in the bulk regions of the transistor structure 10 than in the channel region of the same. Typical well dopant concentrations achieved by embodiments of the present invention are from about 1x10^18 atoms/cm^3 to about 1x10^19 atoms/cm^3. The above concentrations may be achieved according to embodiments of the present invention using input doses ranging from about 1x10^13 atoms/cm^2 to about 1x10^14 atoms/cm^2. [0033] At least two main advantages are realized by the formation of retrograde well profiles according to embodiments of the present invention. First, such retrograde well profiles allow for better short channel effect control, allowing the gate length to be scaled while maintaining the same off-state current leakage and threshold voltage (VT). Second, retrograde well profiles according to embodiments of the present invention, by virtue of the gate length scaling, advantageously allow for better drive current at a given VT, meaning that the same doping level in the channel will not lead to degradation in mobility within the device. Additionally, embodiments of the present invention allow for minimum changes to be made to well-known baseline MOS fabrication processes while providing the advantages noted above.[0034] According to embodiments of the present invention, a key advantage is that the need to use large amounts of energy for dopant implantation beneath the channel region is obviated by virtue of performing the tilted implantation after etching the recesses. As a result, the spacer adjacent to the gate is able to screen implanted dopants efficiently. If no etching is performed before dopant implantation, the large energy required to implant dopants beneath the channel region would tend to further implant dopants, in significant amounts, not only in the spacer but also in the channel, in this way reducing the retrograde profile of the well. In addition, a fill back of the recesses with a large volume of low resistive material, such as, for example, epitaxial material, according to embodiments of the present invention, advantageously allows the current to spread easily before entering the contacts of the MOS device. [0035] Although the instant description pertains in general to elevated raised source/drain devices, embodiments of the present invention encompass within their scope the extension of tilted implantation to cases without raised source/drain regions.
In such cases, the contacts to the source and drain would be directly formed into the recessed regions of the MOS device. [0036] Additionally, embodiments of the present invention encompass within their scope tilted implantation on only one side of a MOS gate, and, therefore, an asymmetric doping, such that either the source or the drain will receive more dopants than the other. Embodiments of the present invention further encompass within their scope the performance of an additional implant after the removal of source/drain regions using dopants of the same type as the source/drain regions as a compensation implant. A function of a compensation implant would be to reduce a parasitic capacitance of the MOS device being fabricated. In addition, according to embodiments of the present invention, it would be possible to effect multiple implantations of dopants and/or of neutral species such as, for example, Ge, F or C after source/drain removal using different tilt/energy/doses for each implantation in order to further optimize the gain of a MOS device fabricated as a result. [0037] Although specific embodiments have been illustrated and described herein for purposes of description of the preferred embodiment, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations calculated to achieve the same purposes may be substituted for the specific embodiment shown and described without departing from the scope of the present invention. Those with skill in the art will readily appreciate that the present invention may be implemented in a very wide variety of embodiments. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof. |
The present disclosure includes apparatuses and methods for performing data restore operations in memory. An embodiment includes a memory, and a controller configured to perform a data restore operation on data stored in the memory using a first table and a second table stored in the controller, wherein the first table includes a current mapping of the data stored in the memory that is based on a previous assessment of previous error rates associated with the data stored in the memory, and the second table includes a new mapping of the data stored in the memory that is based on a current assessment of current error rates associated with the data stored in the memory. |
What is Claimed is:1. An apparatus, comprising:a memory; anda controller configured to perform a data restore operation on data stored in the memory using a first table and a second table stored in the controller, wherein:the first table includes a current mapping of the data stored in the memory that is based on a previous assessment of previous error rates associated with the data stored in the memory, andthe second table includes a new mapping of the data stored in the memory that is based on a current assessment of current error rates associated with the data stored in the memory.2. The apparatus of claim 1, wherein the controller is configured to perform the data restore operation on the data stored in the memory by:performing a sense operation on the memory using the current mapping of the data included in the first table; andperforming a program operation on the memory using the new mapping of the data included in the second table.3. The apparatus of any one of claims 1-2, wherein:the current mapping of the data included in the first table includes a mapping of the data to a first physical location in the memory; andthe new mapping of the data included in the second table includes a mapping of the data to a second physical location in the memory that is different than the first physical location.4. The apparatus of claim 3, wherein the second physical location has a lower error rate associated therewith than the first physical location.5. The apparatus of claim 3, wherein the second physical location is a spare location in the memory.6. A method for operating memory, comprising:performing a data restore operation on data stored in a memory by: sensing the data stored in the memory using a first mapping of the data, wherein:the first mapping is a current mapping of the data stored in the memory that is based on a previous assessment of previous error rates associated with the data, andthe first mapping is stored in a first table; andprogramming the sensed data to the memory using a second mapping of the data, wherein:the second mapping is a new mapping of the data stored in the memory that is based on a current assessment of current error rates associated with the data; andthe second mapping is stored in a second table.7. The method of claim 6, wherein the method includes programming the sensed data to a physical location in the memory that is different from a physical location in the memory from which the data is sensed and has a lower error rate associated therewith than the physical location in the memory from which the data is sensed.8. The method of claim 6, wherein performing the data restore operation on the data stored in the memory includes:determining to use the first mapping to sense the data stored in the memory based on a value of a phase bit associated with the data stored in the memory; anddetermining to use the second mapping to program the sensed data to the memory based on the value of the phase bit.9. The method of any one of claims 6-8, wherein the method includes performing the current assessment of the current error rates associated with the data stored in the memory using the sensed data.
10. An apparatus, comprising:a memory; anda controller configured to perform a data restore operation on a plurality of groups of data stored in the memory using a first table and a second table stored in the controller, wherein:the first table includes a current mapping of the groups of data that is based on a previous assessment of previous error rates associated with each respective group; andthe second table includes a new mapping of the groups of data that is based on a current assessment of current error rates associated with each respective group.11. The apparatus of claim 10, wherein:the first table and the second table each include a plurality of entries, wherein each respective entry corresponds to a different one of the plurality of groups of data; andeach respective entry includes:a logical address for its respective group of data; and a physical address for its respective group of data.12. The apparatus of claim 10, wherein:the second table includes a first plurality of entries and a second plurality of entries, wherein each respective entry of the first plurality of entries and the second plurality of entries corresponds to a different one of the plurality of groups of data;each respective entry of the first plurality of entries includes:a logical address for its respective group of data; and a value indicating whether its respective group of data has been redirected to a different physical location in the memory; andeach respective entry of the second plurality of entries includes:a logical address for its respective group of data; and a physical address for its respective group of data.13. The apparatus of any one of claims 10-12, wherein the controller is configured to, upon power being restored subsequent to a power loss occurring while performing the data restore operation:perform sequential sense operations to sequentially sense each respective group of data using the new mapping of the groups of data included in the second table;perform, upon the sense operation to sense one of the groups of data failing, a sense operation to sense that group of data using the current mapping of the groups of data included in the first table; andresume the data restore operation upon the sense operation to sense that group of data using the current mapping in the first table succeeding.14. A method of operating memory, comprising:determining an error rate associated with each respective one of a plurality of groups of data stored in a memory;ranking the groups of data based on the determined error rate associated with each respective group;generating a first table that includes a mapping of the groups of data that is based on the ranking;determining the error rate associated with each respective group of data subsequent to generating the first table;re-ranking the groups of data based on the error rate associated with each respective group determined subsequent to generating the first table;generating a second table that includes a mapping of the groups of data that is based on the re-ranking; andperforming a data restore operation on the groups of data by:sensing each respective group of data using the mapping of the first table; andprogramming the sensed data to the memory using the mapping of the second table.15.
The method of claim 14, wherein generating the second table includes: determining which groups of data are among a number of groups that rank highest in the re-ranking; determining, for each respective group of data determined to be among the number of groups that rank highest in the re-ranking, whether that group of data is also among the number of groups that rank highest in the ranking; mapping in the second table, for each respective group of data determined to not also be among the number of groups that rank highest in the ranking, that group of data to a physical location in the memory that is different than a physical location in the memory to which that group of data is mapped in the first table; andmapping in the second table, for each respective group of data determined to also be among the number of groups that rank highest in the ranking, that group of data to a physical location in the memory that is the same as the physical location in the memory to which that group of data is mapped in the first table.16. The method of claim 15, wherein generating the second table includes: determining, for each respective group of data determined to not be among the number of groups that rank highest in the re-ranking, whether that group of data is among the number of groups that rank highest in the ranking; mapping in the second table, for each respective group of data determined to not be among the number of groups that rank highest in the ranking, that group of data to a physical location in the memory that is the same as the physical location in the memory to which that group of data is mapped in the first table; andmapping in the second table, for each respective group of data determined to be among the number of groups that rank highest in the ranking, that group of data to a physical location in the memory that is different than the physical location in the memory to which that group of data is mapped in the first table.17. The method of claim 14, wherein the method includes inverting a value of a global phase bit associated with the memory upon initiating the performance of the data restore operation, such that the inverted value of the global phase bit does not match a value of a phase bit associated with the groups of data.18. The method of claim 17, wherein performing the data restore operation on the groups of data includes:determining to use the mapping of the first table to sense each respective group of data based on the value of the phase bit associated with the groups of data;inverting the value of the phase bit associated with the groups of data upon initiating the programming of the sensed data to the memory; anddetermining to use the mapping of the second table to program the sensed data to the memory based on the inverted value of the phase bit associated with the groups of data.19.
The method of claim 17, wherein the method includes:detecting a power loss while performing the data restore operation;storing, upon detecting the power loss, the value of the global phase bit and a logical address for the group of data upon which the data restore operation has most recently been performed; andupon power being restored:setting the value of the phase bit associated with the groups of data having a logical address less than or equal to the logical address for the group of data upon which the data restore operation has most recently been performed to the value of the global phase bit;setting the value of the phase bit associated with the groups of data having a logical address greater than the logical address for the group of data upon which the data restore operation has most recently been performed to the inverted value of the global phase bit; andresuming the data restore operation.20. The method of claim 14, wherein the method includes:determining the error rate associated with each respective group of data subsequent to performing the data restore operation;performing an additional re-ranking of the groups of data based on the error rate associated with each respective group determined subsequent to performing the data restore operation;updating the mapping of the groups of data in the first table based on the additional re-ranking; andperforming an additional data restore operation on the groups of data by: sensing each respective group of data using the mapping of the second table; andprogramming the sensed data to the memory using the updated mapping of the first table. |
PERFORMING DATA RESTORE OPERATIONS IN MEMORYTechnical Field[0001] The present disclosure relates generally to semiconductor memory and methods, and more particularly, to performing data restore operations in memory.Background[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits and/or external removable devices in computers or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data and can include random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), and resistance variable memory such as phase change random access memory (PCRAM), 3D Phase Change Material and Switch (PCMS), resistive random access memory (RRAM), magnetic random access memory (MRAM), and programmable conductive memory, among others.[0003] Memory devices can be combined together to form a solid state drive (SSD). An SSD can include non-volatile memory (e.g., 3D PCMS, NAND flash memory and/or NOR flash memory), and/or can include volatile memory (e.g., DRAM and/or SRAM), among various other types of non-volatile and volatile memory. Flash memory devices can include memory cells storing data in a charge storage structure such as a floating gate, for instance, and may be utilized as non-volatile memory for a wide range of electronic applications. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption.[0004] Memory cells in an array architecture can be programmed to a target (e.g., desired) data state. For example, a single level cell (SLC) can be programmed to a targeted one of two different data states, which can be represented by the binary units 1 or 0. Some memory cells can be programmed to a targeted one of more than two data states (e.g., 1111, 0111, 0011, 1011, 1001, 0001, 0101, 1101, 1100, 0100, 0000, 1000, 1010, 0010, 0110, and 1110). Such cells may be referred to as multi-state memory cells, multiunit cells, or multilevel cells (MLCs). MLCs can provide higher density memories without increasing the number of memory cells since each cell can represent more than one digit (e.g., more than one bit).[0005] Various internal and/or external mechanisms, however, may cause an error to occur when the state of a memory cell is sensed (e.g., read). For example, such mechanisms may cause memory cells to be sensed to be in a state other than the target state (e.g., a different data state than the data state to which the cell was programmed). This may reduce the quality of the data stored in the memory, which may decrease the lifetime of the memory and/or cause the memory to fail, if corrective actions are not taken.[0006] Error detection and correction schemes such as, for instance, error correction code (ECC) schemes and/or redundant array of independent discs (RAID) schemes, can be utilized to correct such errors. However, the capabilities of such schemes may be limited.
For instance, such schemes may only be capable of detecting and correcting a certain (e.g., finite) quantity (e.g., number or distribution) of erroneous data; if this limit is exceeded, the erroneous data may not be correctable, and may become corrupted and/or lost.Brief Description of the Drawings[0007] Figure 1 illustrates a block diagram of an apparatus in the form of a memory device in accordance with an embodiment of the present disclosure.[0008] Figure 2 illustrates a block diagram of an apparatus in the form of a memory device in accordance with an embodiment of the present disclosure.[0009] Figure 3 illustrates a block diagram of an apparatus in the form of a computing system including at least one memory system in accordance with an embodiment of the present disclosure.[0010] Figures 4A-4B illustrate examples of tables used to perform data restore operations in accordance with an embodiment of the present disclosure.[0011] Figure 5 illustrates an example of a table used to perform data restore operations in accordance with an embodiment of the present disclosure. [0012] Figure 6 illustrates an example of a table used in operation of memory in accordance with an embodiment of the present disclosure.[0013] Figure 7 illustrates a method for operating memory in accordance with an embodiment of the present disclosure.[0014] Figure 8 illustrates a method for operating memory in accordance with an embodiment of the present disclosure.[0015] Figure 9 illustrates a method for operating memory in accordance with an embodiment of the present disclosure.Detailed Description[0016] The present disclosure includes apparatuses and methods for performing data restore operations in memory. An embodiment includes a memory, and a controller configured to perform a data restore operation on data stored in the memory using a first table and a second table stored in the controller, wherein the first table includes a current mapping of the data stored in the memory that is based on a previous assessment of previous error rates associated with the data stored in the memory, and the second table includes a new mapping of the data stored in the memory that is based on a current assessment of current error rates associated with the data stored in the memory.[0017] Embodiments of the present disclosure can operate to continuously perform data restore operations on data stored in memory by continuously reevaluating the memory and reprogramming (e.g., rewriting) the data so that the data is always stored in the best available memory (e.g., the memory having the lowest error rate associated therewith). This can lower the overall error rate associated with the data stored in the memory, thereby ensuring that the error detection and correction capabilities (e.g., limits) of error detection and correction schemes (e.g., ECC and/or RAID schemes) utilized by the memory are not exceeded. As such, performing data restore operations in accordance with the present disclosure can increase the quality of the data stored in the memory, which may increase the lifetime of the memory and/or prevent the memory from failing.[0018] Embodiments of the present disclosure can also ensure efficient use of the memory's resources (e.g., power, speed, and/or storage space).
For example, otherwise unrelated sense operations performed on the memory can be combined into a single operation (e.g., a read-verify operation) for both error assessment (e.g., collating error distribution) and data restoration (e.g., retrieving data to be migrated). In contrast, previous approaches may split such operations into two separate sense operations to be performed on the memory, which may use additional storage space (e.g., overhead) and result in additional wear (e.g., read disturb) on the memory.[0019] Further, the rate at which data restore operations in accordance with the present disclosure are performed can be adjustable, which in turn can result in a corresponding increase or decrease in the storage space needed for the data restore operations (e.g., an increased speed may result in a corresponding increase in overhead, and vice versa). As such, data restore operations in accordance with the present disclosure can be tailored to particular system requirements, such as, for instance, quality of service requirements.[0020] Further, data restore operations in accordance with the present disclosure can operate in the background of the memory, independent of client (e.g., user and/or host) accesses. This may enhance the performance (e.g., quality of service) of the memory by reducing the chance that a client access will incur latency-inducing, time-consuming data correction and/or restore operations. In contrast, previous approaches may rely on client accesses for error detection and/or correction, which may result in increased latency.[0021] Further, data restore operations in accordance with the present disclosure may be fine-grained. For example, if a single unreliable portion (e.g., sector) of the memory requires extensive correction and/or restoration, such as, for instance, a RAID rebuild, this memory portion may be restored entirely, thereby reducing excess client latency.[0022] Further, data restore operations in accordance with the present disclosure can manifest updated memory mappings to reflect newly assessed memory health. For example, data restore operations in accordance with the present disclosure can continuously assess the fidelity of the memory, and adapt the data storage in the memory to reflect these assessments. In contrast, previous approaches may use time-based (e.g., zero-time based) mapping and/or data storage assessments, and/or make irreversible mapping and/or storage decisions. [0023] Further, performing data restore operations in accordance with the present disclosure can satisfy a periodic memory refresh useful for 3D Phase Change Material and Switch (PCMS) memory. For instance, data restore operations in accordance with the present disclosure can maintain the plasticity of such memory, and therefore lower its error rates, by continually rewriting its stored data as part of the restore operation.[0024] Further, in the context of flash memory, data restore operations in accordance with the present disclosure can satisfy the function of continuously evaluating and managing (e.g., tuning) trim settings, such as, for instance, the calibration of sense amplifiers, of the memory to reduce (e.g., minimize) errors during sense operations. In contrast, previous approaches may rely on client accesses for tuning trim settings, thereby incurring increased latency.[0025] As used herein, "a" or "an" can refer to one or more of something, and "a plurality of" can refer to more than one of such things.
For example, a memory cell can refer to one or more memory cells, and a plurality of memory cells can refer to two or more memory cells. Additionally, the designators "E" and "N", as used herein, particularly with respect to reference numerals in the drawings, indicate that one or more of the particular feature so designated can be included with an embodiment of the present disclosure.[0026] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits.[0027] Figure 1 illustrates a block diagram of an apparatus in the form of a memory device in accordance with an embodiment of the present disclosure. As used herein, an "apparatus" can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example.[0028] The memory device can include three-dimensional memory entities, such as the three-dimensional memory entity 137 illustrated in Figure 1. The three-dimensional memory entity can include a plurality of two-dimensional memory entities 135-1, 135-2, ..., 135-E. The two-dimensional memory entities 135 can be arrays of memory elements such as memory cells, although for clarity only one respective memory element 138-1, 138-2, ..., 138-E is illustrated for each two-dimensional memory entity 135. The two-dimensional memory entities 135 can be physical memory entities such as dice or chips that include an array of memory elements. The two-dimensional memory entities are referred to as being two-dimensional because they are of a lesser dimension than the three-dimensional memory entity 137. Although the two-dimensional memory entities 135 are three-dimensional physical objects, they are referred to as being two-dimensional because a group of two-dimensional memory entities 135 can form a memory entity of a higher dimension, which is referred to as a three-dimensional memory entity 137. The two-dimensional memory entities may include more than one planar array of memory cells.[0029] The two-dimensional memory entities 135 are referred to as being two-dimensional because they are of a greater dimension than a memory element. The two-dimensional memory entities 135 include a plurality of memory elements arranged in at least two physical dimensions (e.g., at least one memory array). The memory elements individually can be referred to as one-dimensional memory elements (again, even though they exist as three-dimensional physical objects). A grouping of a plurality of three-dimensional memory elements 137 can be referred to as a four-dimensional memory element (not specifically illustrated in Figure 1). A grouping of a plurality of four-dimensional memory elements can be referred to as a five-dimensional memory element, etc.[0030] Although not specifically illustrated in Figure 1, the memory device can be coupled to a controller, such as, for instance controller 308 further described herein in connection with Figure 3.
Controller 308 can perform data restore operations, such as, for instance, data scrubbing and/or migration operations, on data stored in the memory device, as will be further described herein.[0031] As shown in Figure 1, the memory device can have a first resolution 139 and a second resolution 141 associated therewith. The first resolution 139 can be referred to as a page of data. In some embodiments, the first resolution 139 can include a respective memory element 138-1, 138-2, ..., 138-E from each two-dimensional memory entity 135-1, 135-2, ..., 135-E contained within a selected three-dimensional memory entity 137.[0032] Figure 1 also includes an illustration of a respective example of the second resolution 141-1, 141-2, ..., 141-E for each of the two-dimensional memory elements 135. The second resolution 141 can be a portion of a two-dimensional memory entity 135. For example, the second resolution 141-1 illustrated in association with the two-dimensional memory entity 135-1 is a portion thereof. Although the second resolution 141 is illustrated as being a quarter of the two-dimensional memory entity 135, embodiments are not limited to any particular portion.[0033] Collectively, the portions of the two-dimensional memory entities 135 corresponding to the second resolution 141 make up a portion 143 of the three-dimensional memory entity 137. For example, in the case of 3D PCMS memory, the two-dimensional memory entities 135 can be referred to as tiles, the three-dimensional memory entities 137 can be referred to as slices, the portion 143 of the three-dimensional memory entity 137 can be referred to as a parcel, and the memory elements 138 can be referred to as bits. Further, a plurality of pages of data can form a sector. For instance, a sector can be a logical construction comprising an arbitrary group of pages (e.g., the pages of data that form a sector need not be adjacent within a slice, and/or may be located on different slices). A sector may be, and/or refer to, a unit of data that is accessible to a user (e.g., a user of host 302).[0034] The memory apparatus may include spare memory entities such as spare two-dimensional memory entities 135 and/or three-dimensional memory entities 137 (e.g., spare slices and/or spare parcels). As a non-limiting example, 1/16 of the slices on a memory die can be reserved as spares. Such a spare memory entity can include one or more pages of the memory that are not allocated to any sector of data. These spare memory entities can be used as substitutes for other memory entities that are identified as being error-prone as part of an error management and/or data restoration process, as will be further described herein.[0035] Figure 2 illustrates a block diagram of an apparatus in the form of a memory device 210 in accordance with an embodiment of the present disclosure. In the embodiment illustrated in Figure 2, the memory device 210 is a 3D Phase Change Material and Switch (PCMS) memory device. A 3D PCMS memory device is an example of a multidimensional memory device.[0036] A 3D PCMS device can include memory elements having a "stack" structure. A memory element can comprise a switch element and a storage element (e.g., a switch element coupled in series with a storage element). The switch element can be a diode, field effect transistor (FET), a bipolar junction transistor (BJT), an ovonic memory switch (OMS), or an ovonic threshold switch (OTS), among others.
In an embodiment, the memory element can comprise a memory material that can serve as both the storage element and the switch element, and which may be referred to herein as a switch and storage material (SSM). An SSM may comprise a chalcogenide alloy; however, embodiments are not so limited.[0037] In an embodiment, the switch element and storage element associated with the respective memory cells can be series coupled two-terminal devices. For instance, the switch element can be a two-terminal OTS (e.g., a chalcogenide alloy formed between a pair of electrodes), and the storage element can be a two-terminal phase change storage element (e.g., a Phase Change Material (PCM) formed between a pair of electrodes). A memory cell including an OTS in series with a PCM can be referred to as a PCMS memory cell. In an embodiment, an electrode can be shared between the switch element and storage element of the memory cells. Also, in an embodiment, memory cells can have top or bottom electrodes comprising conductive lines.[0038] The memory device 210 can include a plurality of two-dimensional memory elements, which for the 3D PCMS memory device can be referred to as tiles. The tiles can include more than one deck (e.g., such as a lower deck 224-1 and an upper deck 224-2 as illustrated) of memory elements in an array. The tiles can have a width 226 and a height 228, as identified in the figure. The tiles can be divided into sub-tiles 230-1, 230-2, 230-3, 230-4. In an embodiment, the sub-tiles can be quarters of a tile.[0039] Each memory element (not specifically illustrated) can be addressed by an access line and sense line combination. Access lines may also be referred to as word lines or select lines. Sense lines may also be referred to as bit lines or data lines. By way of example, a tile can include 2048 sense lines 218-1, 218-2 and 4096 access lines per deck. However, memory device 210 is not limited to a particular number of access lines 222 and/or sense lines 218. The access lines can be coupled to access line decoders 222-1, 222-2, 222-3. The sense lines can be coupled to sense line decoders 220-1, 220-2. The access line decoders 222 and the sense line decoders 220 can be coupled to a controller (not specifically illustrated), such as, for instance controller 308 further described herein in connection with Figure 3.[0040] Figure 3 illustrates a block diagram of an apparatus in the form of a computing system 300 including at least one memory system 304 in accordance with an embodiment of the present disclosure. As used herein, a memory system 304, a controller 308, or a memory device 310 might also be separately considered an "apparatus." The memory system 304 can be a solid state drive (SSD), for instance, and can include a host interface 306, a controller 308 (e.g., a processor and/or other control circuitry), and one or more memory devices 310-1, ..., 310-N (e.g., solid state memory devices such as 3D PCMS memory devices), which provide a storage volume for the memory system 304.[0041] As illustrated in Figure 3, the controller 308 can be coupled to the host interface 306 and to the memory devices 310 via a plurality of channels and can be used to transfer data between the memory system 304 and a host 302. The interface 306 can be in the form of a standardized interface.
For example, when the memory system 304 is used for data storage in a computing system 300, the interface 306 can be a serial advanced technology attachment (SATA), peripheral component interconnect express (PCIe), or a universal serial bus (USB), among other connectors and interfaces. In general, however, interface 306 can provide an interface for passing control, address, data, and other signals between the memory system 304 and a host 302 having compatible receptors for the interface 306.[0042] Host 302 can be a host system such as a personal laptop computer, a desktop computer, a digital camera, a mobile telephone, or a memory card reader, among various other types of hosts. Host 302 can include a system motherboard and/or backplane and can include a number of memory access devices (e.g., a number of processors).[0043] The controller 308 can communicate with the memory devices 310 to control data sense (e.g., read), program (e.g., write), and erase operations, among other operations. Although not specifically illustrated, in some embodiments, the controller 308 can include a discrete memory channel controller for each channel coupling the controller 308 to the memory devices 310. The controller 308 can include, for example, a number of components in the form of hardware and/or firmware (e.g., one or more integrated circuits) and/or software for controlling access to the number of memory devices 310 and/or for facilitating data transfer between the host 302 and memory devices 310.[0044] The memory devices 310 can include a number of arrays of memory elements (e.g., memory cells). For example, the memory devices 310 can be 3D PCMS memory devices analogous to memory device 210 described in connection with Figure 2, including memory elements arranged in tiles as previously described herein. However, embodiments are not limited to a particular type of memory array or array architecture.[0045] In operation, data can be written to and/or read from a memory device of a memory system (e.g., memory devices 310 of system 304) as a physical page of data, for example. As one example, a 3D PCMS memory device may be configured to store a particular quantity of bytes of data per page, which can be one bit from each of the quantity of tiles in a slice. Data can be transferred to/from a host (e.g., host 302) in data segments referred to as sectors (e.g., host sectors). A sector of data is a logical granularity that can be remapped to a variety of different underlying system granularities.[0046] In the embodiment illustrated in Figure 3, controller 308 can include an error correction component 312 (e.g., an error coder such as an error correction code (ECC) engine) and a data restore component 314. Error correction component 312 and data restore component 314 can be discrete components such as an application specific integrated circuit (ASIC) or the components may reflect functionality provided by circuitry within the controller 308 that does not necessarily have a discrete physical form separate from other portions of the controller 308. Although illustrated as components within the controller 308 in Figure 3, error correction component 312 and data restore component 314 can be external to the controller 308 or have a component located within the controller 308 and a component located external to the controller 308.
For example, the error correction component 312 can include an error correction coding circuit located on the controller 308 and an error correction coding circuit located external to the controller 308. Although various functions may be described with respect to the error correction component 312 and the data restore component 314, the various functions may equally be said to be performed by the controller 308. The controller 308 can be configured to perform data restore operations, such as, for instance, data scrubbing and/or migration operations, on data stored in memory devices 310, among other operations.[0047] As shown in Figure 3, the data restore component 314 can include a first table 316-1 and a second table 316-2. Tables 316-1 and 316-2 may be referred to herein as repair tables, and may be generated by controller 308 and stored (e.g., persisted) in non-volatile memory of controller 308, as will be further described herein. One of the repair tables (e.g., table 316-1), which may be referred to as the current repair table, may include (e.g., store) a current (e.g., active) mapping of the data stored in memory devices 310 that is based on a previous assessment (e.g., ranking) of previous error rates associated with the data. The other repair table (e.g., table 316-2), which may be referred to as the new repair table, may include a new (e.g., subsequent) mapping of the data that is based on a current assessment of the current error rates associated with the data.[0048] Mapping, as used herein, can refer to the composition of a user-accessible data unit (e.g., a sector) from its constituent memory elements (e.g., pages and bits). For instance, the mappings in the repair tables may be logical to physical mappings of groups of data, such as, for instance, sectors or parcels of data, stored in memory devices 310. Examples of the repair tables, and of the data mappings included in the repair tables, will be further described herein (e.g., in connection with Figures 4A-4B and 5).[0049] In an embodiment, controller 308 can perform data restore operations on data (e.g., groups of data) stored in memory devices 310 (e.g., to ensure the data is being stored in the best available memory by avoiding error-prone memory in favor of more error-resistant memory) using repair tables 316-1 and 316-2. For example, controller 308 can perform a sense operation on memory devices 310 (e.g., to sense data stored in the memory as part of a first pass of the migration) using the current data mapping in the current repair table, and then perform a program operation on memory devices 310 (e.g., to program the sensed data to the memory as part of a second pass of the migration) using the new data mapping in the new repair table.[0050] For instance, the current data mapping in the current repair table may map the data (e.g., a group of the data) to a first physical location (e.g., address) in memory devices 310, and the new data mapping in the new repair table may map the data to a second physical location in memory devices 310. The second physical location (e.g., the location to which the sensed data is programmed) may be different from the first physical location (e.g., the location from which the data was sensed), and/or may have a lower error rate associated therewith than the first location.
For instance, the first physical location may be a user-accessible location (e.g., page) in the memory and the second physical location may be a spare location (e.g., a spare page) in the memory, or vice versa.[0051] The previous and current error rates associated with the data stored in memory devices 310 can be, for example, bit error rates associated with error correction operations performed on the data. A bit error rate, as used herein, can refer to the quantity of erroneous bits corresponding to an amount of erroneous data sensed from a memory during a sense operation divided by the total amount of data sensed during the sense operation (e.g., the sample size).[0052] The previous and current error rates associated with the data stored in memory devices 310 can be determined using error correction component 312, and the assessments of the previous and current error rates can be performed by controller 308. For example, error correction component 312 can perform error correction operations on a plurality of groups of data stored in memory devices 310 (e.g., the data sensed during the sense operation), and determine the error rate (e.g., bit error rate) associated with the error correction operation performed on each respective group. Controller 308 can then rank the groups of data based on the error rate associated with each respective group. For instance, controller 308 can rank the groups in order from highest to lowest error rate.[0053] Controller 308 can then generate the current repair table with mappings based on the ranking. For example, controller 308 can determine which of the groups of data are among a number of groups, equal to the number of spare locations in the memory, that rank the highest (e.g., have the worst error rates) in the ranking, and map these groups to the physical locations in the memory corresponding to the spare locations. The other groups of data can be mapped to physical locations in the memory corresponding to user-accessible locations. The mappings of the current repair table can then be used for initial operations (e.g., program and/or sense operations) performed on memory devices 310.[0054] Subsequent to generating the current repair table (e.g., during subsequent operation of memory devices 310), controller 308 can perform a subsequent assessment of the error rates, and generate the new repair table based on this subsequent assessment. For example, error correction component 312 can perform subsequent error correction operations on the plurality of groups of data, and determine the subsequent (e.g., new) error rate associated with the subsequent error correction operation performed on each respective group. Controller 308 can then re-rank the groups of data based on the subsequent error rate associated with each respective group (e.g., in order from highest to lowest).[0055] For example, controller 308 can determine which of the groups of data are among a number of groups, equal to the number of spare locations in the memory, that rank the highest (e.g., have the worst error rates) in the re-ranking, and then determine, for each of those respective highest ranking groups, whether that group is also among the number of groups that was determined to rank the highest in the original ranking.
For each of these respective highest ranking groups determined to also be among the highest ranking groups in the original ranking (e.g., those groups that originally had the worst error rates, and now still have the worst error rates), controller 308 can map these groups of data to the same physical locations in the memory to which they were mapped in the current repair table (e.g., these groups will remain mapped to the spare locations in the new table). These groups can be referred to as static groups. For each of these respective highest ranking groups that were not also among the highest ranking groups in the original ranking (e.g., those groups that were not originally among those with the worst error rates, but now are), controller 308 can map these groups of data to different physical locations in the memory than the locations that they were mapped to in the current repair table (e.g., these groups will now be mapped to the spare locations in the new table, instead of to the user-accessible locations to which they were mapped in the current table). These groups can be referred to as added groups.[0056] Controller 308 can also determine, for each respective group of data that is not among the highest ranking groups in the re-ranking, whether that group is among the number of groups that was determined to rank highest in the original ranking. For each of these respective groups that was determined to rank highest in the original ranking (e.g., those groups that originally had the worst error rates, but now do not), controller 308 can map these groups of data to different physical locations in the memory than the locations that they were mapped to in the current repair table (e.g., these groups will now be mapped to user-accessible locations in the new table, instead of to the spare locations to which they were mapped in the current table, as they have now been displaced by the added groups). These groups can be referred to as deleted groups. For each of the respective groups that were not determined to be among the highest ranking groups in either the original ranking or the re-ranking (e.g., those groups that were never among those with the worst error rates), controller 308 can map these groups to the same physical locations in the memory to which they were mapped in the current repair table (e.g., these groups will remain mapped to the user-accessible locations in the new table).[0057] In instances in which the spare locations in the memory all have data stored therein (e.g., are full), adding a group to the spare locations in the memory will necessitate displacing a group from the spare locations. However, the group to be displaced (e.g., the deleted group) cannot simply be overwritten by the added group; otherwise, its data would be lost. Rather, the group to be displaced must first be copied out of its spare location before the added group is copied in, which can be accomplished by utilizing the two repair tables to perform the two passes of the data restore operation, as described herein.[0058] Controller 308 can then perform a data restore operation on the data stored in memory devices 310 using the mappings of the current repair table and the new repair table, as previously described herein. Once the data restore operation has been performed, the new repair table may assume the role of, and be referred to as, the current repair table, and the repair table that was the current repair table may become unused.
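By way of illustration only, the ranking, repair table generation, and two-pass restore described above might be sketched as follows in Python. The sketch is a hedged approximation, not the disclosed implementation; all names (build_repair_table, data_restore, memory.sense, memory.program, num_spares, and so on) are hypothetical assumptions, and a full-resolution repair table is modeled as a list in which the index is the logical address and the value is the physical address:

    # Illustrative sketch (hypothetical names, not part of the disclosure):
    # rank groups by bit error rate and build a repair table in which the
    # worst-ranking groups are swapped into the spare slots at the end of
    # the physical space.

    def build_repair_table(error_rates: list[float], num_spares: int) -> list[int]:
        n = len(error_rates)
        table = list(range(n))  # start identity mapped, as in Figure 4A
        # Rank group indices from highest (worst) to lowest error rate.
        ranking = sorted(range(n), key=lambda g: error_rates[g], reverse=True)
        worst = sorted(ranking[:num_spares])  # the static/added groups
        spare_slots = range(n - num_spares, n)
        for g, s in zip(worst, spare_slots):
            # Swap so each worst-ranking group lands in a spare slot and the
            # group it displaces (a "deleted" group) takes the vacated slot.
            table[g], table[s] = table[s], table[g]
        return table

    def data_restore(memory, current_table: list[int], new_table: list[int]):
        # Pass 1: sense every group through the current (old) mapping.
        snapshot = [memory.sense(current_table[lg])
                    for lg in range(len(current_table))]
        # Pass 2: program every group through the new mapping. Buffering
        # between passes keeps a displaced group from being overwritten in
        # its spare location before its data has been copied out, per the
        # displacement concern noted above.
        for lg, data in enumerate(snapshot):
            memory.program(new_table[lg], data)
        return new_table, current_table  # the tables swap roles next cycle

Applied to the example discussed below in connection with Figure 4B (groups 1, 3, and N ranking worst, with spare slots N-2, N-1, and N), this sketch would map logical groups 1 and 3 to physical groups N-2 and N-1 and, conversely, logical groups N-2 and N-1 to physical groups 1 and 3, with group N remaining identity mapped. An actual controller would more likely restore group by group, tracking progress with the phase bits described later, rather than buffering every group at once; the whole-snapshot form is used here only to keep the sketch obviously correct.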
The previously described operation cycle can then repeat itself, with controller 308 generating a newly updated mapping in the unused table space to serve as the "new" repair table.[0059] For example, subsequent to performing the data restore operation, controller 308 can perform a new error rate assessment, and generate the updated mapping to serve as the now-new repair table based on this new assessment. For example, error correction component 312 can continue to perform error correction operations on the plurality of groups of data and determine the error rates associated with these operations. Controller 308 can then perform an additional (e.g., new) re-ranking of the groups of data based on these error rates, and generate the updated mapping for the now-new repair table (e.g., to take the place of the previous mapping of the table) based on the re-ranking, in a manner analogous to the previous ranking and repair table generation process previously described herein.[0060] Controller 308 can then perform another data restore operation using the two repair tables, in a manner analogous to that previously described herein but with the roles of the tables reversed. This cycle can be continuously performed throughout the lifetime of memory system 304, with tables 316-1 and 316-2 alternating between the current repair table and the new repair table (e.g., alternating between being used to sense and program data), such that data groups (e.g., pages) with the worst error rates are continuously relegated to the spare portion of the memory.[0061] Memory system 304 may use volatile storage for tables 316-1 and 316-2 while a newly updated mapping is being generated, but only one of the tables (e.g., the current repair table) may reside in volatile storage during intervals between data restore operations. Further, memory system 304 may use non-volatile storage for tables 316-1 and 316-2. For instance, the system may store the new (e.g., newly generated) repair table in non-volatile storage before commencing a data restore operation to protect against power loss, as will be further described herein. Further, metadata such as, for instance, a timestamp or monotonically increasing sequence number, indicating which table represents the current repair table and which table represents the new repair table may be stored in the non-volatile storage to protect against power loss.[0062] Figures 4A-4B illustrate examples of tables used to perform data restore operations in accordance with an embodiment of the present disclosure. For instance, table 416-1 illustrated in Figure 4A can be an example of a current repair table previously described herein (e.g., in connection with Figure 3), and table 416-2 illustrated in Figure 4B can be an example of a new repair table previously described herein (e.g., in connection with Figure 3). The examples illustrated in Figures 4A and 4B can be referred to as full-resolution repair tables.[0063] As shown in Figures 4A-4B, tables 416-1 and 416-2 can each include a plurality of entries (e.g., elements) 450 and 451, respectively. Each respective entry can correspond to a different group (e.g., a different sector and/or parcel) of data stored in memory devices 310 previously described in connection with Figure 3.
For example, table 416-1 can include N entries, with entry 450-0 corresponding to a zeroth group of data, entry 450-1 corresponding to a first group of data, entry 450-2 corresponding to a second group of data, etc., through entry 450-N corresponding to the Nth group of data. Similarly, table 416-2 can include the same number (e.g., N) of entries, with entry 451-0 corresponding to a zeroth group of data, entry 451-1 corresponding to a first group of data, entry 451-2 corresponding to a second group of data, etc., through entry 451-N corresponding to the Nth group of data.[0064] As shown in Figures 4A and 4B, each respective entry 450 and 451 in tables 416-1 and 416-2, respectively, can include a logical address and a physical address for its respective group of data. For instance, in the example illustrated in Figures 4A-4B, entries 450-0 and 451-0 both have the same logical address (e.g., 0) and the same physical address (e.g., 0) for group zero. Continuing in the example, entries 450-1 and 451-1 both have the same logical address (e.g., 1) for group 1, but have different physical addresses (e.g., 1 and N-2, respectively) for group 1.[0065] As such, each respective entry 450 and 451 in tables 416-1 and 416-2, respectively, can represent a logical to physical mapping for its respective group of data. If the physical address for a particular group is the same in both tables, then that group is mapped to the same physical location by its respective entry in each table. If the physical address for a particular group is different in each table, then that group is mapped to different physical locations by its respective entry in each table. The logical address can be represented by the index of the entry, and the physical address can be represented by the content (e.g., value) of the entry.[0066] At least one of the groups of data can include user data (e.g., data stored in a user-accessible location in the memory), and at least one of the groups of data can include spare data (e.g., data stored in a spare location in the memory). For instance, in the example illustrated in Figures 4A and 4B, the entries in portion 452 of tables 416-1 and 416-2 can correspond to groups of user data, and the entries in portion 454 of tables 416-1 and 416-2 can correspond to groups of spare data.[0067] For example, Figure 4A illustrates a full-resolution repair table 416-1 for an initial state of a memory. All groups in table 416-1 are identity mapped, with groups N-2, N-1, and N mapped to spare locations in the memory (e.g., these groups are assumed to have the worst error rates). Although these groups, which comprise portion 454, are placed at the end of table 416-1, embodiments of the present disclosure are not so limited (e.g., the groups corresponding to the spare locations may be placed anywhere in the table).[0068] Figure 4B illustrates an example of the repair table of Figure 4A after a data restore operation in accordance with the present disclosure has been performed. In this example, it has been determined that groups 1, 3, and N have the worst error rates. Accordingly, the data of groups 1 and 3 are now mapped (e.g., redirected) to groups N-2 and N-1 of spare portion 454 in table 416-2, while the data of group N remains identity mapped, as illustrated in Figure 4B. Conversely, because groups N-2 and N-1 no longer have the worst error rates, the data of those groups are now mapped to groups 1 and 3 of data portion 452 in table 416-2, as illustrated in Figure 4B.
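Because the logical address is simply the entry index and the physical address is the entry value, address translation through a full-resolution repair table amounts to a single array lookup. A minimal Python sketch, with hypothetical names (translate, repair_table) not taken from the disclosure:

    # Illustrative sketch: full-resolution repair table lookup, where the
    # entry index is the logical address and the entry value is the
    # physical address of the group.

    def translate(repair_table: list[int], logical_address: int) -> int:
        return repair_table[logical_address]

    # With a table laid out like Figure 4B (logical group 1 redirected to
    # physical group N-2, and logical group N-2 redirected to physical
    # group 1), translate(table, 1) would return N-2 and
    # translate(table, N-2) would return 1.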
[0069] Subsequent data accesses (e.g., during subsequent operation of the memory) may now be filtered through table 416-2. For instance, an attempt to access the data of logical group 1 would be redirected to physical group N-2, while an attempt to access the data of logical group N-2 would be redirected to physical group 1. [0070] Figure 5 illustrates an example of a table 516-2 used to perform data restore operations in accordance with an embodiment of the present disclosure. For instance, table 516-2 illustrated in Figure 5 can be an example of a new repair table previously described herein (e.g., in connection with Figure 3). The example illustrated in Figure 5 can be referred to as a bitmap-based repair table. [0071] As shown in Figure 5, table 516-2 can include a plurality of entries (e.g., elements) 556. Each respective entry can correspond to a different group (e.g., a different sector and/or parcel) of data stored in memory devices 310 previously described in connection with Figure 3, in a manner analogous to the entries of table 416-2 previously described in connection with Figure 4B. Further, in a manner analogous to that previously described in connection with Figure 4B, at least one of the groups of data can include user data, and at least one of the groups of data can include spare data. For instance, in the example illustrated in Figure 5, the entries in portion 558 of table 516-2 can correspond to groups of user data, and the entries in portion 560 of table 516-2 can correspond to groups of spare data. [0072] As shown in Figure 5, each respective entry in portion 560 of table 516-2 can include a logical address and a physical address for its respective group of data, in a manner analogous to that previously described in connection with Figure 4B. For instance, in the example illustrated in Figure 5, entry 556-N-2 has a logical address of N-2 and a physical address of 1 for group N-2, entry 556-N-1 has a logical address of N-1 and a physical address of 3 for group N-1, and entry 556-N has a logical and physical address of N for group N. As such, each respective entry in portion 560 of table 516-2 can represent a logical to physical mapping for its respective group of data, which may be a redirected mapping (e.g., to a different physical location in the memory), as previously described herein. [0073] As shown in Figure 5, each respective entry in portion 558 of table 516-2 can also include a logical address for its respective group of data. However, as illustrated in Figure 5, instead of a physical address, each respective entry in this portion of the table may include a bit value (e.g., flag) indicating whether its respective group of data has been redirected to a different physical location in the memory (e.g., to a different physical address than indicated by the previous table). For instance, data portion 558 of table 516-2 may be condensed into a bitmap, with one bit per group: a zero indicating an identity mapped group, and a one indicating a redirected group, as illustrated in Figure 5. By condensing data portion 558 of table 516-2 into a bitmap in such a manner, the size of table 516-2 may be reduced (e.g., as compared to the size of table 416-2).
[0074] For instance, in the example illustrated in Figure 5, the bit values (e.g., 1) of entries 556-1 and 556-3 indicate that groups 1 and 3 have been redirected to different physical locations in the memory (e.g., these groups have been redirected from the data portion of the memory to the spare portion of the memory). For clarity, these redirections are identical to the redirections previously described in connection with Figure 4B. [0075] Subsequent data accesses (e.g., during subsequent operation of the memory) may now be filtered through table 516-2, and accesses of redirected data groups may include a search of spare portion 560 to locate the redirected data. For example, an attempt to access the data of logical group 3 would encounter a set (e.g., 1) bit value in the bitmap of table 516-2. This would trigger a lookup in spare portion 560 for the value "3", which would be found at group N-1, and accordingly the access would target this physical address. In contrast, an attempt to access the data of logical group 0 would encounter a clear (e.g., 0) bit value in the bitmap, and would therefore proceed to access physical group 0 with no additional lookup needed.
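For illustration only, a hypothetical C model of the bitmap-based lookup just described follows (small N for readability; all names are illustrative, not from the disclosure). A clear flag resolves with no extra work, while a set flag triggers the spare-portion search described in paragraph [0075].

```c
/* Hypothetical model of the bitmap-based table of Figure 5. */
#include <stdint.h>
#include <stdio.h>

#define N_DATA  6   /* data portion: logical groups 0..5                     */
#define N_SPARE 3   /* spare portion: logical groups 6, 7, 8 (N-2, N-1, N)   */

/* Data portion 558 condensed to one flag per group:
 * 0 = identity mapped, 1 = redirected into the spare portion. */
static const uint8_t redirected[N_DATA] = {0, 1, 0, 1, 0, 0};

/* Spare portion 560 keeps explicit physical addresses: per Figure 5,
 * group N-2 maps to 1, group N-1 maps to 3, group N maps to itself. */
static const uint32_t spare_physical[N_SPARE] = {1, 3, 8};

static uint32_t resolve(uint32_t logical)
{
    if (logical >= N_DATA)                      /* spare portion entry        */
        return spare_physical[logical - N_DATA];
    if (!redirected[logical])                   /* clear bit: no extra lookup */
        return logical;
    /* Set bit: search the spare portion for this logical value; the spare
     * entry holding it is the physical group where the data now lives. */
    for (uint32_t i = 0; i < N_SPARE; i++)
        if (spare_physical[i] == logical)
            return N_DATA + i;
    return logical;  /* not reached if the tables are consistent */
}

int main(void)
{
    printf("logical 3 -> physical %u\n", resolve(3));  /* 7, i.e., N-1 */
    printf("logical 0 -> physical %u\n", resolve(0));  /* 0            */
    return 0;
}
```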
[0076] Figure 6 illustrates an example of a table 662 used in operation of memory in accordance with an embodiment of the present disclosure. For instance, table 662 may be used in conjunction with tables 316-1 and 316-2 previously described in connection with Figure 3, for determining which of these tables should be used during program and/or sense operations performed on the memory, such as, for instance, program and/or sense operations performed during data restore operations. Table 662 may be stored, for example, in volatile memory (e.g., RAM or DRAM) external to memory devices 310 previously described in connection with Figure 3. [0077] Table 662 can be a bitmap whose constituent bits are each associated with a single group (e.g., sector) in memory devices 310. For example, as shown in Figure 6, table 662 can include a plurality of entries (e.g., elements) 664. Each respective entry can correspond to a different group (e.g., a different sector) of data stored in memory devices 310 previously described in connection with Figure 3, and can include a phase bit associated with that respective group, as illustrated in Figure 6. That is, each group can have a phase bit associated therewith. [0078] As previously described herein, tables 316-1 and 316-2 can alternate between being the current repair table and the new repair table (e.g., their respective mappings can alternate between being the mapping used for program and sense operations). Which of these tables (e.g., which table's mapping) should be used when programming data to, or sensing data stored in, a group of data can be determined based on (e.g., indicated by) the value of that group's phase bit in table 662. For example, when the phase bit associated with a group is clear (e.g., 0), the first table (e.g., table 316-1) should be used when programming data to, or sensing data stored in, that group, and when the phase bit associated with a sector is set (e.g., 1), the second table (e.g., table 316-2) should be used. All phase bits in table 662 can be cleared to 0 upon initialization and/or power up of the memory. [0079] Figure 7 illustrates a method 768 for operating memory, such as, for instance, memory devices 310 previously described in connection with Figure 3, in accordance with an embodiment of the present disclosure. Method 768 can be performed using, for example, controller 308 previously described in connection with Figure 3, and may be performed atomically with respect to other concurrent operations (e.g., client accesses) being performed on the memory. [0080] At block 770, method 768 includes initiating a data restore (e.g., migration) operation to be performed on a group (e.g., sector) of data stored in the memory. At block 772, method 768 includes sensing (e.g., reading) the data stored in that group using a first one of tables 316-1 and 316-2 previously described in connection with Figure 3 (e.g., using the mapping of that table). The table used to sense the data can be the table that is presently serving as the current repair table. The determination of which table to use to sense the data (e.g., which table is the current repair table) can be made based on the value of the phase bit associated with that group of data, as previously described in connection with Figure 6. In an embodiment, the data may be sensed into a buffer. [0081] At block 774, method 768 includes computing metadata using a second (e.g., the other) one of tables 316-1 and 316-2 (e.g., the table not indicated to be the current repair table by the phase bit associated with that group). The metadata may include, for instance, metadata spare encodings for the group of data. For example, designated spare bits in that group's space (e.g., footprint) in the memory may be populated with replicated data from groups previously determined to have the worst error rates, and accordingly the data stored in the group may change due to a changing error rate within the group (e.g., within the slices of the group), even if that group's data remains static. The metadata may also include, for example, a representation of the phase bit associated with the group, for use if power loss occurs during the data restore operation. [0082] At block 776, method 768 includes programming (e.g., writing) the data that was stored in the group (e.g., the data sensed at block 772) to the memory using the second table (e.g., using the table presently serving as the new repair table). That is, the determination of which table to use to program the data (e.g., which table is the current repair table) can also be made based on the value of the phase bit associated with that group of data, as previously described in connection with Figure 6. [0083] The location in the memory to which the data is programmed may be a different physical location in the memory, such as a location (e.g., page) having a lower error rate than the error rate of the group from which the data was sensed, as previously described herein. As such, the data can be redirected from a group having a higher error rate to a group having a lower error rate, as previously described herein. At block 778 (e.g., upon the data being programmed to the memory), method 768 can include inverting the value of the phase bit associated with the group from which the data was sensed.
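For illustration only, a hedged C sketch of one migration step of method 768 follows; every identifier (restore_sector, sense, program, compute_metadata) is illustrative rather than from the disclosure, and the media operations are reduced to empty stubs so the sketch compiles.

```c
/* Hedged sketch of one data restore step per Figure 7: the sector's phase
 * bit selects which table is "current" for the sense and which is "new"
 * for the program, then the bit is inverted. */
#include <stdint.h>

#define N_SECTORS    16
#define SECTOR_BYTES 64

static uint32_t table0[N_SECTORS];     /* e.g., table 316-1 */
static uint32_t table1[N_SECTORS];     /* e.g., table 316-2 */
static uint8_t  phase_bit[N_SECTORS];  /* e.g., table 662   */

/* Stand-ins for the actual media and metadata operations. */
static void sense(uint32_t phys, uint8_t *buf) { (void)phys; (void)buf; }
static void program_page(uint32_t phys, const uint8_t *buf) { (void)phys; (void)buf; }
static void compute_metadata(uint32_t logical, const uint32_t *new_table,
                             uint8_t *buf) { (void)logical; (void)new_table; (void)buf; }

void restore_sector(uint32_t logical)
{
    uint8_t buf[SECTOR_BYTES];
    /* The sector's phase bit picks the current table; the other is new. */
    const uint32_t *cur = phase_bit[logical] ? table1 : table0;
    const uint32_t *nxt = phase_bit[logical] ? table0 : table1;

    sense(cur[logical], buf);             /* block 772: read via current table */
    compute_metadata(logical, nxt, buf);  /* block 774: spare encodings, phase */
    program_page(nxt[logical], buf);      /* block 776: write via new table    */
    phase_bit[logical] ^= 1;              /* block 778: invert the phase bit   */
}
```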
[0084] Figure 8 illustrates a method 880 for operating memory, such as, for instance, memory devices 310 previously described in connection with Figure 3, in accordance with an embodiment of the present disclosure. Method 880 can be performed using, for example, controller 308 previously described in connection with Figure 3, and can be part of (e.g., provide rules for) program and/or sense operations being performed on a group (e.g., sector) of data stored in the memory during data restore operations or client accesses. [0085] At block 882, method 880 includes initiating a program or sense operation to be performed on a group (e.g., sector) of data stored in the memory. Although not shown in Figure 8, if the program or sense operation is initiated as part of a data restore operation being performed on the sector, then the value of a global phase bit associated with the memory (e.g., a single phase bit that is associated with the entire memory) can be inverted, such that the inverted value of the global phase bit does not match the value of the phase bit associated with the sector. If the program or sense operation is not initiated as part of a data restore operation (e.g., the operation is part of a normal client access), then the value of the global phase bit is not inverted, and accordingly matches the value of the phase bit associated with the sector. Accordingly, the value of the global phase bit can indicate whether a data restore operation is currently being performed. [0086] At block 884, method 880 includes determining whether the value of the phase bit for the sector matches the value of the global phase bit. The value of the phase bit associated with the sector can be provided, for instance, by table 662 previously described in connection with Figure 6. [0087] If it is determined that the phase bit value for the sector matches the global phase bit value (e.g., indicating the program or sense operation has been initiated as part of a client access), then the program or sense operation can be performed at block 886 using the appropriate repair table (e.g., the mapping of that table) based on (e.g., indicated by) the value of the phase bit for the sector, and method 880 can end at block 888. Accordingly, for program and sense operations initiated as part of a client access, the operation will use the repair table indicated by the value of the sector's phase bit regardless of the value of the global phase bit. [0088] If it is determined that the phase bit value for the sector does not match the global phase bit value, then it is determined at block 890 whether the operation initiated at block 882 is a program operation or a sense operation. If the operation is a sense operation, then the sense operation can be performed at block 886 using the appropriate repair table (e.g., the mapping of that table) based on (e.g., indicated by) the value of the phase bit for the sector, and method 880 can end at block 888. Accordingly, for sense operations initiated as part of a data restore (e.g., migration) operation, and if the sector's phase bit does not match the global phase bit subsequent to the inversion of the global phase bit, the operation will use the repair table indicated by the value of the sector's phase bit. [0089] If the operation initiated at block 882 is a program operation, then the value of the phase bit for the sector can be inverted at block 892, and metadata for the sector can be computed based on the inverted phase bit value at block 894. The metadata may include, for instance, metadata spare encodings for the sector of data, and can be computed in a manner analogous to that previously described in connection with Figure 7. The program operation can then be performed at block 886 using the appropriate repair table (e.g., the mapping of that table) based on (e.g., indicated by) the inverted value of the phase bit for the sector, and method 880 can end at block 888. Accordingly, for program operations initiated as part of a data restore (e.g., migration) operation, and if the sector's phase bit does not match the global phase bit subsequent to the inversion of the global phase bit, the operation will first invert the value of the sector's phase bit, and then encode the metadata spare and perform the programming using the repair table indicated by the now-inverted value of the sector's phase bit.
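For illustration only, the table-selection rules of Figure 8 can be condensed into a short, hedged C sketch; the function and variable names are illustrative, and the metadata recomputation of block 894 is noted only as a comment.

```c
/* Hedged sketch of the rules of Figure 8. Returns which repair table
 * (0 or 1) the operation should use for the given sector. */
#include <stdbool.h>
#include <stdint.h>

#define N_SECTORS 16

static uint8_t global_phase;             /* inverted while a restore runs      */
static uint8_t sector_phase[N_SECTORS];  /* per-sector phase bits (table 662)  */

int table_for_operation(uint32_t sector, bool is_program)
{
    /* Blocks 884/886: on a match (client access), and for any sense, the
     * sector's phase bit alone selects the table. */
    if (sector_phase[sector] == global_phase || !is_program)
        return sector_phase[sector];

    /* Blocks 890/892/894: a restore-time program first inverts the sector's
     * phase bit (the spare metadata would be recomputed here), then uses the
     * table indicated by the now-inverted bit. */
    sector_phase[sector] ^= 1;
    return sector_phase[sector];
}
```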
[0090] Although not shown in Figure 8, if a power loss occurs while a data restore operation is being performed on the memory, the present value of the global phase bit and the logical address for the sector of the memory on which the data restore operation has been most recently performed can be rapidly stored (e.g., persisted) to non-volatile memory (e.g., the same non-volatile memory in which the repair tables are persisted) upon detecting the occurrence of the power loss. Upon power (e.g., and the repair tables) subsequently being restored, the value of the phase bit associated with each respective sector of the memory having a logical address less than or equal to the stored logical address (e.g., less than or equal to the logical address for the sector on which the data restore operation was most recently performed before the power loss) can be set to the value of the global phase bit, and the value of the phase bit associated with the other sectors (e.g., each respective sector having a logical address greater than the stored logical address) can be set to the inverted value of the global phase bit. Accordingly, the data restore operation can resume at the sector where it was when the power loss occurred. [0091] Figure 9 illustrates a method 995 for operating memory, such as, for instance, memory devices 310 previously described in connection with Figure 3, in accordance with an embodiment of the present disclosure. Method 995 can be performed using, for example, controller 308 previously described in connection with Figure 3. [0092] Method 995 can be a method to resume a data restore operation being performed on the memory if a power loss occurs and no extra information (e.g., the global phase bit value and sector logical address as described in connection with Figure 8) is persisted during the power loss. For example, upon power being restored subsequent to the power loss, sequential sense operations can be performed on the groups (e.g., sectors) of data stored in the memory (e.g., in the same sequential order in which the data restore operation is performed) to sequentially (e.g., one at a time) sense each respective sector using the new mappings of the second table (e.g., using the mappings of the table presently serving as the new repair table). Upon one of these sense operations failing, a sense operation to sense that respective sector of data can be performed using the current mappings of the first table (e.g., using the mappings of the table presently serving as the current repair table), and if that sense operation is successful, that indicates the sector where the data restore operation was when the power loss occurred, and accordingly the data restore operation can be resumed at that sector.
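For illustration only, a hedged C sketch of this resume scan follows; the sense and phase helpers are trivial stand-ins so the sketch compiles, and all names are illustrative rather than from the disclosure.

```c
/* Hedged sketch of the resume scan of Figure 9: sweep sectors in migration
 * order, sensing with the new table; on a failure (or phase-bit mismatch),
 * try the current table, and resume there if it succeeds; otherwise flag
 * the sector bad and continue. */
#include <stdbool.h>
#include <stdint.h>

#define N_SECTORS 16

static bool sense_ok_new(uint32_t s)         { (void)s; return true; }  /* block 913 */
static bool sense_ok_current(uint32_t s)     { (void)s; return true; }  /* block 925 */
static bool phase_matches_global(uint32_t s) { (void)s; return true; }  /* 969 / 929 */
static void mark_bad(uint32_t s)             { (void)s; }               /* block 965 */
static void resume_restore_at(uint32_t s)    { (void)s; }               /* block 945 */

void resume_after_power_loss(void)
{
    for (uint32_t s = 0; s < N_SECTORS; s++) {
        /* Blocks 913/915/969: success via the new table plus a matching
         * phase bit means this sector was already migrated; keep scanning. */
        if (sense_ok_new(s) && phase_matches_global(s))
            continue;

        /* Blocks 925/927/929: a successful sense via the current table
         * locates the sector where the restore stopped; resume there. */
        if (sense_ok_current(s) && phase_matches_global(s)) {
            resume_restore_at(s);
            return;
        }

        /* Block 965: unreadable via either table; flag bad and skip. */
        mark_bad(s);
    }
    /* Block 997: all sectors migrated. */
}
```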
[0093] For example, upon power being restored at block 911, method 995 includes performing, at block 913, a sense operation to sense the first sector of data in the sequence using the mapping for that sector in the second (e.g., new) repair table, and determining, at block 915, whether that sense operation is a success or failure. Whether the sense operation is a success or failure can be determined, for example, based on the number of errors that occur during the sense operation and/or whether the errors are correctable (e.g., the sense operation may fail if the number of errors exceeds the error correction capabilities of the memory, as previously described herein). [0094] If the sense operation is a success, then it is determined at block 969 whether the value of the phase bit associated with the first sector matches the value of the global phase bit. The value of the phase bit associated with the sector can be provided, for instance, by table 662 previously described in connection with Figure 6, and the global phase bit can be the global phase bit previously described in connection with Figure 8. [0095] If the sense operation performed at block 913 is determined to be a success at block 915, and the values of the sector phase bit and the global phase bit are determined to match at block 969, it can be assumed that the first sector was successfully migrated before the power loss occurred and the sequence can move on to the next sector. For instance, at block 967 it can be determined whether there are more sectors of data to sense, and if there are more sectors to sense, method 995 can move to the next (e.g., second) sector of data in the sequence at block 999, and proceed to sense that sector using the second table at block 913. If it is determined at block 967 that there are no more sectors to sense (e.g., that the sequence has been performed on all data sectors), it can be assumed that all the sectors have been successfully migrated, and method 995 can end at block 997. [0096] If the sense operation performed at block 913 is determined to be a failure at block 915, or if the values of the sector phase bit and the global phase bit are determined to not match at block 969, it can be assumed that the first sector of data has not yet been successfully migrated. Accordingly, a sense operation can be performed at block 925 to sense the first sector using the mapping for that sector in the first (e.g., current) repair table, and it can be determined at block 927 whether that sense operation is a success or failure. [0097] If the sense operation performed at block 925 (e.g., using the current repair table) is a success, then it is determined at block 929 whether the value of the phase bit associated with the first sector matches the value of the global phase bit. If these phase bit values match, it can be assumed that the sector at which the data restore operation was when the power loss occurred has been located, and accordingly the data restore operation can be resumed at that sector (e.g., the first sector) at block 945.
Further, although not shown in Figure 9, the phase bits for all sectors in the sequence preceding that sector can be set to the set value (e.g., 1), and the phase bits for all succeeding sectors in the sequence can be set to the cleared value (e.g., 0). [0098] If the sense operation performed at block 925 is determined to be a failure at block 927, or if the values of the sector phase bit and the global phase bit are determined to not match at block 929 (e.g., if the first sector cannot be successfully sensed using either repair table), it can be assumed that this sector's data has been lost. Accordingly, that sector can be flagged as bad at block 965 so that it is skipped in the sequence, and method 995 can move to the next sector in the sequence (e.g., determine whether there are more sectors of data to sense at block 967, move to the next sector of data in the sequence at block 999, and proceed to sense that sector using the second table at block 913). [0099] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of a number of embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of a number of embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of a number of embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled. [00100] In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
The present invention provides an improved surface P-channel transistor and a method of making the same. A preferred embodiment of the method of the present invention includes providing a semiconductor substrate, forming a gate oxide layer over the semiconductor substrate, subjecting the gate oxide layer to a remote plasma nitrogen hardening treatment followed by an oxidative anneal, and forming a polysilicon layer over the resulting gate oxide layer. Significantly, the method of the present invention does not require nitrogen implantation through the polysilicon layer overlying the gate oxide and provides a surface P-channel transistor having a polysilicon electrode free of nitrogen and a hardened gate oxide layer characterized by a large concentration of nitrogen at the polysilicon electrode/gate oxide interface and a small concentration of nitrogen at the gate oxide/semiconductor substrate interface. The method of the present invention is easily incorporated into known fabrication processes and provides an enhanced surface P-channel transistor that resists hot electron degradation, is substantially impermeable to dopants included in overlying layers, and is characterized by a greatly increased extrapolated time dependent dielectric breakdown value. |
What is claimed is: 1. A method for making a surface P-channel transistor, comprising: providing a substrate; forming an oxide layer over said substrate, said oxide layer having an upper portion and an interface with said substrate; hardening said oxide layer using a remote plasma-based nitrogen hardening ("RPN") treatment for incorporating nitrogen into at least the upper portion of said oxide layer; annealing said oxide layer following said RPN treatment, said oxide layer having a concentration of nitrogen in the upper portion thereof about at least five times greater than the nitrogen concentration at the interface of said oxide layer and said substrate; and forming a polysilicon layer over said oxide layer. 2. The method of claim 1, wherein providing said substrate comprises providing a silicon substrate and forming said oxide layer over said substrate comprises thermally growing said oxide layer from said silicon substrate. 3. The method of claim 1, wherein forming said oxide layer over said substrate comprises depositing an oxide layer over said substrate. 4. The method of claim 1, wherein hardening said oxide layer using said RPN treatment comprises hardening said oxide layer using a high density plasma ("HDP") RPN treatment. 5. The method of claim 4, wherein hardening said oxide layer using said HDP RPN treatment comprises hardening said oxide layer using an HDP RPN process run at approximately 60° C. for about 10 seconds using about 1500 watts of power. 6. The method of claim 1, wherein hardening said oxide layer using said RPN treatment comprises hardening said oxide layer using a thermal RPN treatment. 7. The method of claim 6, wherein hardening said oxide layer using said thermal RPN treatment comprises hardening said oxide layer using a thermal RPN treatment run at approximately 750° C. for about 2 minutes. 8. The method of claim 1, wherein annealing said oxide layer following said RPN treatment comprises annealing said oxide layer in an environment comprising a nitrogen-containing oxidant. 9. The method of claim 8, wherein annealing said oxide layer in said environment comprising said nitrogen-containing oxidant further comprises annealing said oxide layer at approximately 800° C. for approximately 60 seconds. 10. The method of claim 1, further comprising doping said polysilicon layer over said oxide layer. 11. The method of claim 10, wherein doping said polysilicon layer over said oxide layer comprises doping said polysilicon layer with a P-type dopant. 12. A method for fabricating a semiconductor memory device comprising: providing a substrate; forming an oxide layer over said substrate, said oxide layer having an upper portion and an interface with said substrate; hardening said oxide layer using a RPN treatment for incorporating nitrogen into at least the upper portion of said oxide layer; annealing said oxide layer following said RPN treatment, said oxide layer having a concentration of nitrogen in the upper portion thereof about at least five times greater than the nitrogen concentration at the interface of said oxide layer and said substrate; and creating at least one surface P-channel transistor including at least a portion of said oxide layer. 13.
A method for fabricating a surface P-channel transistor, comprising: providing a substrate; forming an oxide layer over said substrate in a nitrogen-containing environment, said oxide layer having an upper portion and an interface with said substrate; hardening said oxide layer using a RPN treatment for incorporating nitrogen into at least the upper portion of said oxide layer, the upper portion of said oxide layer having a concentration of nitrogen therein about at least five times greater than the nitrogen concentration at the interface of said oxide layer and said substrate; and defining at least one surface P-channel transistor including at least a portion of said oxide layer. 14. The method of claim 13, wherein forming said oxide layer over said substrate in said nitrogen-containing environment comprises forming an oxide layer in an environment containing sufficient nitrogen to provide an oxide layer comprising approximately 0.5% nitrogen in the portion of said oxide layer adjacent the interface of said oxide layer and said substrate. 15. The method of claim 14, wherein hardening said oxide layer using said RPN treatment comprises hardening said oxide layer using an HDP RPN process run at approximately 60° C. for about 10 seconds using about 1500 watts of power. 16. The method of claim 14, wherein hardening said oxide layer using said RPN treatment comprises hardening said oxide layer using a thermal RPN treatment run at approximately 750° C. for about 2 minutes.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to semiconductor devices and methods for their fabrication. Specifically, the present invention relates to surface P-channel transistors, including hardened gate oxides, possessing enhanced performance and reliability characteristics.
2. State of the Art
Higher performance, enhanced reliability, and greater packaging density of integrated circuits are constant goals of the semiconductor industry. However, as components become smaller and smaller to meet these goals, it becomes increasingly difficult to produce semiconductor devices capable of reliable, long-term operation, particularly in light of the operational stresses each component of a state of the art semiconductor device must endure. For example, as state of the art surface P-channel transistors decrease in size, the size and thickness of the gate oxides included in these transistors must also decrease, but as gate oxides shrink, they become more permeable to dopants included in overlying polysilicon electrodes, less resistant to hot electron degradation, and more susceptible to breakdown at voltages below normal operating parameters.
To combat such problems, various processes for hardening gate oxides have become essential to the fabrication of state of the art semiconductor devices, and several hardening processes are well-known in the art. For example, both furnace-based nitrogen processing and remote plasma-based nitrogen hardening ("RPN") may be used to harden gate oxides. Relative to nonhardened devices, gate oxides hardened by known methods are generally less permeable to dopants included in polysilicon electrodes, more resistant to hot electron degradation, and less susceptible to breakdown at voltages below normal operating voltages. However, known processes for hardening gate oxides also have drawbacks. For example, after being subjected to such processes, gate oxides often contain a significant amount of unbound, or interstitial, nitrogen, which is mobile and may diffuse out of the gate oxide, reducing the effectiveness of the hardening procedure and contaminating the overlying polysilicon electrode. Further, in order to prevent diffusion of dopants from the polysilicon electrode and into and through the gate oxide, known hardening processes often provide a high concentration of nitrogen at the interface of the gate oxide and the underlying semiconductor substrate. However, as is known, excessive nitrogen at the gate oxide/substrate interface significantly degrades transistor performance.
In terms of device performance and reliability, it has been found to be advantageous to fabricate a gate oxide having a large nitrogen concentration (about 2.5% or greater nitrogen by atomic weight) at the interface of the gate oxide and the overlying polysilicon electrode while having a small nitrogen concentration (about 0.5% nitrogen by atomic weight) at the gate oxide/substrate interface. The large nitrogen concentration at the polysilicon electrode/gate oxide interface effectively prevents diffusion of dopants from the polysilicon electrode and into and through the gate oxide, while the small nitrogen concentration at the gate oxide/substrate interface confers resistance to hot electron degradation without substantially affecting device performance.
Yet known processing techniques do not reliably provide surface P-channel transistors including a gate oxide having a large nitrogen concentration at the polysilicon electrode/gate oxide interface and a small nitrogen concentration at the gate oxide/substrate interface.
At least one method has been developed in an attempt to provide a transistor including a gate oxide having similar characteristics. U.S. Pat. No. 6,017,808 to Wang et al. (hereinafter "the '808 Patent") describes a method for hardening a gate oxide designed to provide a transistor wherein a large peak of nitrogen exists within the polysilicon and oxide layers at the interface of the gate oxide and the overlying polysilicon electrode, while a relatively smaller nitrogen peak occurs within the oxide layer and the underlying semiconductor substrate at the interface of the gate oxide and the underlying semiconductor substrate. To achieve this structure, the method of the '808 Patent requires implanting nitrogen through the polysilicon layer and into the gate oxide layer followed by an anneal step. After the implantation and annealing steps, a first nitrogen peak occurs entirely within the polysilicon layer, a second nitrogen peak occurs within the polysilicon layer and the gate oxide at the polysilicon/gate oxide interface, and a third nitrogen peak occurs within the gate oxide layer and underlying substrate at the gate oxide/substrate interface. However, the first nitrogen peak located entirely within the polysilicon layer is problematic because it retards diffusion of subsequently implanted dopants, such as boron, within the polysilicon layer. Therefore, the method of the '808 Patent requires removal of only the portion of the polysilicon layer including the first nitrogen peak without removing the portion of the polysilicon layer including the second nitrogen peak. Once the portion of the polysilicon layer including the first nitrogen peak is removed, an additional, nitrogen-free polysilicon layer may be optionally formed over the remaining portion of the nitrogen implanted polysilicon layer.
As will be readily appreciated, achieving the structure disclosed in the '808 Patent using the methods described therein is at best problematic, particularly in light of the continually decreasing thicknesses of polysilicon electrodes included in state of the art semiconductor devices. One of the most problematic aspects of the method described in the '808 Patent is the need to remove only the portion of the nitrogen implanted polysilicon layer including the first nitrogen peak. The reference teaches that this task may be accomplished using known wet etch, dry etch, or chemical mechanical polishing processes. However, the polysilicon layers used for polysilicon electrodes in state of the art transistors are exceedingly thin. The polysilicon electrodes of some state of the art devices may be as thin as seven or fewer molecular monolayers, and known etching and polishing processes are difficult to control with sufficient precision to remove only predetermined portions of material layers of such minute thicknesses. Moreover, in this context, the polysilicon layer will include varying concentrations of nitrogen at any given depth, and as the nitrogen concentration varies, the etch rate will also vary, making precise control of the etching process even more difficult.
Thus, removing only the portion of the polysilicon layer including the first nitrogen peak is extremely difficult, and known removal processes will most likely result in removal of too much or too little polysilicon material, resulting in transistors exhibiting impaired performance or reduced reliability.
It would, therefore, be desirable to provide a method for fabricating a surface P-channel transistor which provides a surface P-channel transistor including a hardened gate oxide characterized by a large nitrogen concentration at the polysilicon/gate oxide interface and a small nitrogen concentration at the gate oxide/substrate interface, and which can be accomplished without the need to partially remove the polysilicon layer overlying the gate oxide. Ideally, such a method could be easily incorporated into current fabrication processes and would reliably produce state of the art surface P-channel transistors exhibiting enhanced performance and reliability.
SUMMARY OF THE INVENTION
The method and device of the present invention answer the foregoing needs. In a preferred embodiment, the present invention includes a method for forming improved surface P-channel transistors including providing a semiconductor substrate and forming a gate oxide layer over the semiconductor substrate. According to the method of the present invention, the gate oxide layer is subjected to a RPN treatment, which incorporates a high concentration of nitrogen into an upper area of the gate oxide layer. Following the RPN treatment, the resultant intermediate structure is annealed in an environment including an oxygen-containing or nitrogen-containing oxidant. The anneal step smooths out the distribution of nitrogen within the gate oxide layer, reacts substantially all of the unbound or interstitial nitrogen left after the RPN treatment, and results in a gate oxide layer having a large concentration of nitrogen near its upper surface and a small concentration of nitrogen at the interface of the gate oxide layer and the underlying semiconductor substrate. Following the oxidative anneal, a polysilicon layer is formed over the gate oxide layer, and the resultant intermediate structure may be processed by known fabrication techniques to define one or more surface P-channel transistors as well as any other feature necessary to the proper function of a desired semiconductor device.
In an alternative embodiment, the gate oxide layer formed over the semiconductor substrate may be formed in a nitrogen-containing environment to provide a gate oxide layer including a small concentration of nitrogen throughout its depth. The lightly nitridated gate oxide layer is then subjected to a RPN treatment, resulting in a gate oxide layer having a large concentration of nitrogen near its upper surface and a smaller concentration of nitrogen at the interface of the gate oxide layer and the underlying semiconductor substrate. Once the RPN treatment is complete, the resultant intermediate structure may be processed by known fabrication techniques to define one or more surface P-channel transistors as well as any other feature necessary to the proper function of a semiconductor device.
As can be easily appreciated, the method of the present invention enables fabrication of surface P-channel transistors including hardened gate oxides characterized by a large concentration of nitrogen at its upper surface and a small concentration of nitrogen at the interface of the gate oxide layer and the underlying semiconductor substrate.
Moreover, the method of the present invention does not involve nitrogen implantation of the oxide layer through an overlying polysilicon layer, and, as a result, does not require partial removal of a specific portion of the polysilicon layer for fabrication of a functioning and reliable polysilicon electrode. Finally, the method of the present invention is easily incorporated into processes for fabricating state of the art semiconductor devices, and because it does not require partial removal of the polysilicon electrode layer, the method of the present invention may be applied even as feature dimensions of state of the art semiconductor devices continue to decrease.
The surface P-channel transistors of the present invention are produced by the method of the present invention and include a semiconductor substrate, a substantially nitrogen-free polysilicon electrode, and a hardened gate oxide characterized by a large concentration of nitrogen at its upper surface and a small concentration of nitrogen at the gate oxide/substrate interface. Due to the physical characteristics of its hardened gate oxide, the surface P-channel transistor of the present invention exhibits performance and reliability advantages over known devices. For example, the surface P-channel transistor of the present invention does not exhibit the disadvantageous roll-off characteristics of surface channel transistors including gate oxides having a large concentration of nitrogen at the gate oxide/substrate interface, yet the surface P-channel transistor of the present invention possesses a greatly enhanced extrapolated time dependent dielectric breakdown ("TDDB"). Additionally, the gate oxide of a surface P-channel transistor according to the present invention preferably includes substantially no unbound or interstitial nitrogen. Therefore, the improved surface P-channel transistor of the present invention avoids many of the difficulties associated with known surface channel transistors including nitrogen hardened gate oxides.
Other features and advantages of the present invention will become apparent to those of skill in the art through consideration of the ensuing description, the accompanying drawings, and the appended claims.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
The drawings presented in conjunction with the description of the present invention are not actual views of any particular portion of an actual semiconducting device or component, but are merely representations employed to more clearly and fully depict the present invention.
FIG. 1 illustrates a semiconductor substrate covered by a gate oxide layer having an upper surface;
FIG. 2 illustrates the structure of FIG. 1 after such structure has been subjected to a RPN treatment;
FIG. 3 provides a graph illustrating the results of a binding energy analysis performed on a gate oxide layer after the gate oxide layer was subjected to a RPN treatment;
FIG. 4 provides a graph illustrating the results of a binding energy analysis performed on a gate oxide layer after the gate oxide layer was subjected to a RPN treatment and annealed in an N2 environment;
FIG. 5 illustrates an intermediate semiconductor structure including a semiconductor substrate and a hardened gate oxide layer;
FIG. 6 illustrates another intermediate semiconductor structure created by forming a polysilicon layer over the hardened gate oxide layer of the intermediate structure illustrated in FIG. 5; and
FIG.
7 illustrates one embodiment of the surface P-channel transistor of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The method of the present invention is relatively simple, may be easily incorporated into existing fabrication processes, and reliably produces a surface P-channel transistor including a gate oxide that enhances device performance and longevity.
In a preferred embodiment, the method of the present invention includes providing a semiconductor substrate 10 having a gate oxide layer 12 formed there over, as is illustrated in drawing FIG. 1. The semiconductor substrate 10 may be made of any suitable material known in the art, and the gate oxide layer 12 can be formed over the semiconductor substrate 10 using any known process using any suitable material known in the art. For example, the semiconductor substrate 10 may be fabricated using silicon and the gate oxide layer 12 may include silicon dioxide (SiO2) which has been thermally grown or vapor deposited using well-known methods. The gate oxide layer 12 includes an upper surface 13 and may be formed in various thicknesses to suit various fabrication processes. The gate oxide layer 12, however, will generally have a thickness of about 70 Å or less, and for use in state of the art 0.18 µm technology, a gate oxide layer 12 having a thickness in the range of about 30 Å to 50 Å is preferred.
After provision of the semiconductor substrate 10 having the gate oxide layer 12 formed there over, the gate oxide layer 12 is subjected to a RPN treatment. The RPN treatment incorporates nitrogen into an upper area 14 (depicted in FIG. 2) of the gate oxide layer 12, resulting in a large concentration of nitrogen at the upper surface 13 of the gate oxide layer 12. As is shown in drawing FIG. 3, a graph of a binding energy analysis of the gate oxide layer 12 after the RPN treatment, the nitrogen-containing upper area 14 of the gate oxide layer 12 includes unbound or interstitial nitrogen (indicated by the oxy-nitride peak 16) as well as silicon nitride (Si3N4) (indicated by the nitride peak 18).
RPN treatments are well documented in the art, and, as will be appreciated by the skilled artisan, any suitable RPN treatment may be used in the context of this invention. For example, a thermal RPN treatment using microwave plasma to excite the nitrogen molecules included in the process environment may be conducted at substantially 750° C. for approximately two minutes. However, a high density plasma (HDP) RPN treatment is presently preferred.
Where a HDP RPN is used, the process may be conducted at between substantially 60° C. and 65° C. for about 10 seconds using about 1500 watts of power. Where a gate oxide layer having a thickness of substantially 30 Å is subjected to such an HDP RPN treatment, the highest concentration of nitrogen (approximately 20.5% by atomic weight) will occur at the upper surface 13 of the gate oxide layer 12, but the nitrogen will only extend approximately 9 Å within the substantially 30 Å gate oxide layer. Therefore, the RPN treatment incorporates nitrogen in only the upper area 14 of the gate oxide layer 12.
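As a quick arithmetic check (illustrative only, combining the surface concentration quoted above with the preferred interface concentration of about 0.5% by atomic weight noted in the Background), the resulting surface-to-interface concentration ratio comfortably satisfies the "at least about five times greater" criterion discussed below:

```latex
\[
\frac{C_{\text{upper surface}}}{C_{\text{interface}}}
  \approx \frac{20.5\ \text{at.\%}}{0.5\ \text{at.\%}} = 41 \gg 5
\]
```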
As has been discussed, however, it is highly desirable to include a small concentration of nitrogen at the interface of the gate oxide layer 12 and the semiconductor substrate 10. In order to more progressively distribute, or "smooth out," the nitrogen concentration within the gate oxide layer 12, the intermediate structure 20 (shown in drawing FIG. 2) formed by the RPN treatment is annealed in an environment containing either an oxygen-containing oxidant or a nitrogen-containing oxidant. For example, the anneal may be conducted in an N2 environment at substantially 800° C. for 60 seconds. However, various anneal processes are known in the art, and different anneal processes executed in different reactive environments, such as N2O or NO environments, may also be used in the context of the present invention to achieve the desired results.
Even after the anneal, the upper area 14 of the gate oxide layer 12 includes a high concentration of nitrogen. However, during the anneal, the nitrogen is progressively incorporated throughout the depth 21 of the gate oxide layer 12, resulting in a small concentration of nitrogen at the interface 15 of the gate oxide layer 12 and the semiconductor substrate 10. Most preferably, the small concentration of nitrogen at the gate oxide layer/semiconductor substrate interface 15 will equal an atomic concentration of about 0.5%. If significantly more nitrogen is included at the gate oxide layer/semiconductor substrate interface 15, the resulting transistor will exhibit increasing threshold voltage (VT) roll-off, and if significantly less nitrogen is included at the gate oxide layer/semiconductor substrate interface 15, the gate oxide will be more susceptible to hot electron degradation and the resulting transistor will exhibit a lower TDDB. Additionally, though the large nitrogen concentration of the upper area 14 of the gate oxide layer 12 may vary significantly, it is preferred that the concentration of nitrogen at the upper surface 13 of the gate oxide layer 12 be at least about five times greater than the small nitrogen concentration included at the gate oxide layer/semiconductor substrate interface 15, as it has been determined that such a concentration is necessary to effectively block dopant diffusion from the polysilicon electrode and into and through the upper area 14 of the gate oxide layer 12.
As can be appreciated by reference to drawing FIG. 3 and drawing FIG. 4, annealing the intermediate structure 20 resulting from the RPN treatment has an additional advantage. As already discussed, drawing FIG. 3 provides a graph of a binding energy analysis of the gate oxide layer 12 immediately following the RPN treatment. Again, as is demonstrated by the oxy-nitride peak 16 and the nitride peak 18 shown in drawing FIG. 3, following the RPN treatment, the nitrogen-containing upper area 14 of the gate oxide layer 12 includes unbound or interstitial nitrogen as well as Si3N4. As is evidenced by the information provided in drawing FIG. 4, however, after annealing the intermediate structure 20, the gate oxide layer 12 no longer includes a significant amount of interstitial nitrogen (indicated by the lack of a significant oxynitride peak) but includes an increased amount of Si3N4, as is indicated by a second nitride peak 22. Thus, annealing the first intermediate structure 20 results in a second intermediate structure 24 (depicted in drawing FIG. 5) including a semiconductor substrate 10 and a hardened gate oxide layer 26 characterized by a large concentration of nitrogen at the upper surface 28, a small concentration of nitrogen at the hardened gate oxide layer/semiconductor substrate interface 30, and substantially no unbound or interstitial nitrogen.
Following the anneal step, a third intermediate structure 31 (illustrated in drawing FIG.
6) is formed by forming a polysilicon layer 32 over the hardened gate oxide layer 26. The polysilicon layer 32 is formed using any known process and may also be doped with boron or other known dopants such that the polysilicon layer 32 can be used to form polysilicon electrodes with desired electrical properties. The third intermediate structure 31 is then processed as is known in the art to produce a semiconductor device including at least one surface P-channel transistor which incorporates a portion of the hardened gate oxide layer 26 for a gate oxide as well as a portion of the overlying polysilicon layer 32 for a polysilicon electrode.
The preferred embodiment of the method of the present invention, therefore, provides an enhanced surface P-channel transistor. The gate oxide produced by the method of the present invention prevents diffusion of dopant from the overlying polysilicon electrode, resists hot electron degradation without adverse VT roll-off effects, exhibits enhanced resistance to breakdown below normal operating voltages, and results in a surface P-channel transistor having a greatly increased TDDB. In fact, while transistors incorporating gate oxides hardened by RPN alone have an extrapolated TDDB of approximately 8 years, the preferred embodiment of the method of the present invention provides a surface P-channel transistor having an extrapolated TDDB of 500 years.
In an alternative embodiment of the method of the present invention, the hardened gate oxide layer is formed using a different process. Instead of subjecting the gate oxide layer to a RPN treatment followed by an anneal, the gate oxide layer is formed in a nitrogen-containing environment to provide a slightly nitridated gate oxide layer, which is subsequently subjected to a RPN treatment to achieve a high concentration of nitrogen in an upper area of the gate oxide layer. The lightly nitridated gate oxide layer may be formed by executing a known oxide deposition or growth process in a nitrogen-containing environment, but for reasons already discussed, the lightly nitridated gate oxide layer should not include more than about 0.5% nitrogen by atomic weight. Moreover, as with the preferred embodiment of the method of the present invention, any suitable RPN treatment known in the art may be used. The hardened gate oxide layer formed in the alternative embodiment of the method of the present invention, therefore, will include a small concentration of nitrogen (no more than about 0.5%) at the interface of the hardened gate oxide and the underlying semiconductor substrate, as well as a high concentration of nitrogen at its upper surface due to the RPN treatment.
The alternative embodiment of the method of the present invention also includes forming a polysilicon layer over the hardened gate oxide layer. As was the case in the preferred embodiment of the method of the present invention, following formation of the polysilicon layer, the resultant intermediate structure may be processed as is known in the art to produce a semiconductor device including at least one surface P-channel transistor which incorporates a portion of the hardened gate oxide layer for a gate oxide as well as a portion of the overlying polysilicon layer for a polysilicon electrode.
However, the alternative embodiment of the method of the present invention is not preferred because the gate oxide layer formed in the alternative embodiment includes a significant amount of interstitial nitrogen due to the lack of a post RPN anneal.
Nevertheless, as will be apparent to the skilled artisan, the method of the present invention does not involve nitrogen implantation into the gate oxide layer through an overlying polysilicon layer. The method of the present invention, therefore, does not necessitate the partial removal of the overlying polysilicon layer to facilitate fabrication of a high-performance, reliable polysilicon electrode. As a result, the method of the present invention provides a more reliable and easily executed technique for forming an enhanced surface P-channel transistor, particularly when the ever-shrinking dimensions of state of the art semiconductor device features are considered.
The present invention also includes the enhanced surface P-channel transistor 37 produced by the method of the present invention. As is illustrated in drawing FIG. 7, a surface P-channel transistor 37 of the present invention includes a semiconductor substrate 38, a gate oxide 40, and a polysilicon electrode 42 overlying the gate oxide 40. The gate oxide 40 includes a large concentration of nitrogen (i.e., approximately 2.5% or more nitrogen by atomic weight) near the interface 44 of the polysilicon electrode 42 and the gate oxide 40, and the gate oxide 40 includes a small concentration of nitrogen (i.e., approximately 0.5% nitrogen by atomic weight) at the interface 46 of gate oxide 40 and the underlying semiconductor substrate 38. It should be understood, however, that the surface P-channel transistor 37 of the present invention is not limited to the features detailed herein and may include other well-known features. Moreover, the surface P-channel transistor 37 can be fabricated in various sizes to suit virtually any application.
Due to the physical characteristics of the gate oxide 40 incorporated therein, the surface P-channel transistor 37 of the present invention shows enhanced performance and reliability. The gate oxide 40 of the surface P-channel transistor 37 of the present invention provides a transistor that possesses enhanced resistance to hot electron degradation, less susceptibility to breakdown below normal operating voltages, substantially no VT roll-off, and an extrapolated TDDB of 500 years. Moreover, in contrast to the surface channel transistor formed by the method of the '808 Patent, the polysilicon electrode 42 overlying the gate oxide 40 of the surface P-channel transistor 37 of the present invention is nitrogen-free, allowing for more effective distribution of the dopants. Therefore, the surface P-channel transistor 37 of the present invention not only exhibits enhanced performance and reliability, but the surface P-channel transistor 37 of the present invention will not suffer from performance problems associated with dopant depletion regions in the polysilicon electrode 42 which may result from nitrogen implanted into and through the polysilicon layer.
Though the surface P-channel transistor and method of the present invention have been described herein with reference to specific examples, such examples are for illustrative purposes only. The scope of the present invention is defined by the appended claims and is, therefore, not limited by the preceding description and drawings.
DSP architectures having improved performance are described. In an exemplary architecture, a DSP includes two MAC units and two ALUs, where one of the ALUs replaces an adder for one of the two MAC units. This DSP may be configured to operate in a dual-MAC/single-ALU configuration, a single-MAC/dual-ALU configuration, or a dual-MAC/dual-ALU configuration. This flexibility allows the DSP to handle various types of signal processing operations and improves utilization of the available hardware. The DSP architectures further include pipeline registers that break up critical paths and allow operations at a higher clock speed for greater throughput.
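For illustration only, a short C model of the dual-MAC datapath summarized above follows, as elaborated in the claims below: both products land in pipeline registers (breaking the multiply-to-add critical path), and the second MAC's adder tree can optionally fold in the first MAC's product for dual-MAC accumulation. All identifiers are hypothetical, not taken from the patent.

```c
/* Hypothetical C model of the dual-MAC arrangement described above. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    int32_t r1;  /* pipeline register: first multiplier's product  */
    int32_t r2;  /* pipeline register: second multiplier's product */
} dsp_pipe_t;

/* Stage 1: both multiplies land in pipeline registers, so the add logic
 * sees registered operands on the next clock. */
void stage_multiply(dsp_pipe_t *p, int16_t a, int16_t b, int16_t d, int16_t e)
{
    p->r1 = (int32_t)a * b;
    p->r2 = (int32_t)d * e;
}

/* Stage 2: the first output adds r1 to operand c; the second output adds
 * operand f to r2 alone, or to r1 + r2 in the dual-MAC configuration. */
void stage_accumulate(const dsp_pipe_t *p, int32_t c, int32_t f, int dual,
                      int32_t *out1, int32_t *out2)
{
    *out1 = c + p->r1;
    *out2 = f + p->r2 + (dual ? p->r1 : 0);
}

int main(void)
{
    dsp_pipe_t p;
    int32_t out1, out2;
    stage_multiply(&p, 2, 3, 4, 5);          /* r1 = 6, r2 = 20 */
    stage_accumulate(&p, 100, 1000, 1, &out1, &out2);
    printf("out1=%d out2=%d\n", out1, out2); /* out1=106, out2=1026 */
    return 0;
}
```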
WHAT IS CLAIMED IS: 1. A processor comprising: a first multiply-accumulate (MAC) unit operable to receive and multiply first and second operands to obtain a first intermediate result, to store the first intermediate result in a first register, to add the stored first intermediate result with a third operand, and to provide a first output; and a second MAC unit operable to receive and multiply fourth and fifth operands to obtain a second intermediate result, to store the second intermediate result in a second register, to add a sixth operand with either the stored second intermediate result or a sum of the stored first and second intermediate results, and to provide a second output. 2. The processor of claim 1, further comprising: an arithmetic logic unit (ALU) path operable to receive and perform a first operation on a seventh operand or an eighth operand to obtain a third intermediate result, to store the third intermediate result in a third register, to perform a second operation on the third intermediate result, the seventh operand, the eighth operand, or a combination thereof, and to provide a third output. 3. The processor of claim 2, further comprising: a register file operable to provide the first through eighth operands for the first and second MAC units and the ALU path and to store the first through third outputs from the first and second MAC units and the ALU path. 4. The processor of claim 1, wherein the first MAC unit includes a first multiplier operable to receive and multiply the first and second operands and provide the first intermediate result, whereupon the first register is operable to store the first intermediate result, and a first adder operable to add the stored first intermediate result from the first register with the third operand and provide the first output, and the second MAC unit includes a second multiplier operable to receive and multiply the fourth and fifth operands and provide the second intermediate result, whereupon the second register is operable to store the second intermediate result, a second adder operable to add the stored second intermediate result from the second register with either zero or the first intermediate result from the first register, and a third adder operable to add an output of the second adder with the sixth operand and provide the second output. 5. The processor of claim 2, wherein the ALU path includes a shifter operable to receive and shift the seventh or eighth operand and provide the third intermediate result, whereupon the third register is operable to store the third intermediate result, and an ALU operable to perform the second operation on the third intermediate result from the third register, the seventh operand, the eighth operand, or a combination thereof, and to provide the third output. 6.
A processor comprising: a first multiply-accumulate (MAC) unit including a first multiplier operable to receive and multiply first and second operands and to provide a first intermediate result, and a first arithmetic logic unit (ALU) operable to receive the first intermediate result, a third operand, and at least one additional operand, to operate on the received operands, and to provide a first output; and a second MAC unit including a second multiplier operable to receive and multiply fourth and fifth operands and to provide a second intermediate result, a first adder operable to add the second intermediate result with either zero or the first intermediate result from the first MAC unit, and a second adder operable to add an output of the first adder with a sixth operand and to provide a second output. 7. The processor of claim 6, further comprising: an ALU path including <Desc/Clms Page number 16> a shifter operable to receive and shift a seventh operand or an eighth operand and to provide a third intermediate result, and a second ALU operable to operate on the third intermediate result, the seventh operand, the eighth operand, or a combination thereof, and to provide a third output. 8. The processor of claim 7, further comprising: a register file operable to provide the first through eighth operands for the first and second MAC units and the ALU path and to store the first through third outputs from the first and second MAC units and the ALU path. 9. The processor of claim 8, wherein the register file includes at least two output ports configurable to support the first MAC unit operating as either a MAC or an ALU. 10. The processor of claim 7, wherein the first and second ALUs are further operable to receive operands from an intermediate bus. 11. The processor of claim 7 and configurable to operate in a dualMAC/single-ALU configuration, a single-MAC/dual-ALU configuration, or a dualMAC/dual-ALU configuration. 12. The processor of claim 7 and configurable, on an instruction by instruction basis, to operate in a dual-MAC/single-ALU configuration, a singleMAC/dual-ALU configuration, or a dual-MAC/dual-ALU configuration. 13. The processor of claim 6, further comprising: a first register operable to store the first intermediate result and to provide a stored first intermediate result to the first ALU and the first adder; and a second register operable to store the second intermediate result and to provide a stored second intermediate result to the first adder. 14. The processor of claim 7, further comprising: <Desc/Clms Page number 17> a first register operable to store the first intermediate result and to provide a stored first intermediate result to the first ALU and the first adder; a second register operable to store the second intermediate result and to provide a stored second intermediate result to the first adder; and a third register operable to store the third intermediate result and to provide a stored third intermediate result to the second ALU. 15. 
A processor comprising: a first multiply-accumulate (MAC) unit including a first multiplier operable to receive and multiply first and second operands and to provide a first intermediate result, a first register operable to store the first intermediate result and to provide a stored first intermediate result, and a first arithmetic logic unit (ALU) operable to receive and operate on the stored first intermediate result, a third operand, at least one other operand, or a combination thereof, and to provide a first output; and a second MAC unit including a second multiplier operable to receive and multiply fourth and fifth operands and to provide a second intermediate result, a second register operable to store the second intermediate result and to provide a stored second intermediate result, a first adder operable to add the stored second intermediate result with either zero or the stored first intermediate result from the first MAC unit, and a second adder operable to add an output of the first adder with a sixth operand and to provide a second output. 16. The processor of claim 15, further comprising: an ALU path including a first shifter operable to receive and shift a seventh operand or an eighth operand and to provide a third intermediate result, a third register operable to store the third intermediate result and to provide a stored third intermediate result, and <Desc/Clms Page number 18> a second ALU operable to operate on the stored third intermediate result, the seventh operand, the eighth operand, or a combination thereof, and to provide a third output. 17. The processor of claim 16, wherein the second ALU is operable to operate on the stored third intermediate result, the seventh operand, the eighth operand, a ninth operand, a tenth operand, or a combination thereof, and to provide the third output. 18. The processor of claim 16, wherein the ALU path further includes a second shifter operable to receive ninth and tenth operands and to provide a fourth output, and a multiplexer operable to receive the third and fourth outputs and to provide either the third or fourth output. 19. The processor of claim 15, wherein the first MAC unit further includes a first shifter operable to receive the stored first intermediate result and to provide a first shifted result to the first ALU and the first adder, and the second MAC unit further includes a second shifter operable to receive the stored second intermediate result and to provide a second shifted result to the first adder. 20. The processor of claim 16, wherein the first ALU is operable to operate on the stored first intermediate result, the first operand, the second operand, the third operand, the seventh operand, or a combination thereof, and to provide the first output. 21. 
A wireless apparatus comprising a first multiply-accumulate (MAC) unit including a first multiplier operable to receive and multiply first and second operands and to provide a first intermediate result, and a first arithmetic logic unit (ALU) operable to receive the first intermediate result, a third operand, and at least one additional operand, operate on the received operands, and to provide a first output ; <Desc/Clms Page number 19> a second MAC unit including a second multiplier operable to receive and multiply fourth and fifth operands and to provide a second intermediate result, a first adder operable to add the second intermediate result with either zero or the first intermediate result from the first MAC unit, and a second adder operable to add an output of the first adder with a sixth operand and to provide a second output; an ALU path including a shifter operable to receive and shift a seventh operand or an eighth operand and to provide a third intermediate result, and a second ALU operable to operate on the third intermediate result, the seventh operand, the eighth operand, or a combination thereof, and to provide a third output; and a register file operable to provide the first through eighth operands for the first and second MAC units and the ALU path and to store the first through third outputs from the first and second MAC units and the ALU path. |
<Desc/Clms Page number 1> DIGITAL SIGNAL PROCESSORS WITH CONFIGURABLE DUAL-MAC AND DUAL-ALU BACKGROUND 1. Field [0001] The present invention relates generally to electronics, and more specifically to digital signal processors (DSPs) with configurable multiply-accumulate (MAC) units and arithmetic logic units (ALUs). II. Background [0002] DSPs are specialized microprocessors that are specifically designed to execute mathematical computations very rapidly. DSPs are widely used in a variety of electronic units such as compact disc players, PC disk drives, modem banks, audio devices, cellular phones, and so on. In cellular phones, the demand for DSP computation capability continues to grow, driven by the increasing needs of applications such as 3G (3rd generation) modem processing, position determination, image and video processing, 3-D gaming, and so on. These applications require DSPs that can perform computations quickly and efficiently. [0003] A DSP typically contains a MAC unit and an ALU. The MAC unit is used for multiply-accumulate operations, which are commonly used in filtering and signal processing. The ALU is used for addition, subtraction, logical, shift, and bit- manipulation operations. A DSP may also contain multiple MAC units for higher computational throughput. An exemplary dual-MAC DSP architecture is described in U. S. Patent No. 6,557, 022, entitled"Digital Signal Processor with Coupled Multiply- Accumulate Units, "issued April 29,2003. The goals of any DSP design are to (1) achieve the highest number of operations per unit time and (2) provide flexibility to perform different types of operations concurrently to allow for better utilization of the available hardware. DSP architectures that can satisfy these goals are highly desirable for meeting the processing demands of modern-day applications. <Desc/Clms Page number 2> SUMMARY [0005] DSP architectures having improved performance are described herein. In one embodiment, a DSP includes two MAC units and two ALUs, where one of the ALUs replaces an adder for one of the two MAC units. This DSP may be configured, possibly on an instruction-by-instruction basis, to operate in a dual-MAC/single-ALU configuration, a single-MAC/dual-ALU configuration, or a dual-MAC/dual-ALU configuration. The configuration flexibility allows the DSP to handle various types of signal processing operations and improves utilization of the available hardware. The DSP further includes pipeline registers that break up critical paths and allow the DSP to operate at a higher clock speed for greater throughput. Other embodiments of DSP architectures are also described below. Various aspects and embodiments of the invention are described in further detail below. In one aspect, a processor is presented comprising: a first multiply- accumulate (MAC) unit operable to receive and multiply first and second operands to obtain a first intermediate result, store the first intermediate result in a first register, add the stored first intermediate result with a third operand, and provide a first output; and a second MAC unit operable to receive and multiply fourth and fifth operands to obtain a second intermediate result, store the second intermediate result in a second register, add a sixth operand with either the stored second intermediate result or a sum of the stored first and second intermediate results, and provide a second output. 
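To make the preceding aspect concrete, the following is a minimal behavioral sketch in Python (an illustration only, not part of the claimed processor; the class name DualMac and its step method are hypothetical, and the two pipeline stages are collapsed into a single call for brevity):

# Behavioral sketch of the dual-MAC datapath described above.
# Independent mode: each MAC adds an operand to its own stored product.
# Coupled mode: the second MAC adds the sixth operand to the sum of both
# stored intermediate products.
class DualMac:
    def __init__(self):
        self.reg1 = 0  # first register: MAC1's stored intermediate result
        self.reg2 = 0  # second register: MAC2's stored intermediate result

    def step(self, a, b, c, d, e, f, coupled=False):
        # First pipeline stage: both multipliers fire and latch their products.
        self.reg1, self.reg2 = a * b, d * e
        # Second pipeline stage: the adders consume the latched products.
        out1 = self.reg1 + c                  # first output
        if coupled:
            out2 = f + self.reg1 + self.reg2  # sum of both intermediates
        else:
            out2 = f + self.reg2              # second MAC operates independently
        return out1, out2

mac = DualMac()
assert mac.step(2, 3, 10, 4, 5, 100) == (16, 120)                # independent
assert mac.step(2, 3, 10, 4, 5, 100, coupled=True) == (16, 126)  # coupled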
[0008] In another aspect, a processor is presented comprising: a first multiply- accumulate (MAC) unit including a first multiplier operable to receive and multiply first and second operands and provide a first intermediate result, and a first arithmetic logic unit (ALU) operable to receive the first intermediate result, a third operand, and at least one additional operand, operate on the received operands, and provide a first output; and a second MAC unit including a second multiplier operable to receive and multiply fourth and fifth operands and provide a second intermediate result, a first adder operable to add the second intermediate result with either zero or the first intermediate result from the first MAC unit, and a second adder operable to add an output of the first adder with a sixth operand and provide a second output. [0009] In another aspect, a processor is presented comprising: a first multiply- accumulate (MAC) unit including a first multiplier operable to receive and multiply first and second operands and provide a first intermediate result, a first register operable to <Desc/Clms Page number 3> store the first intermediate result and provide a stored first intermediate result, and a first arithmetic logic unit (ALU) operable to receive and operate on the stored first intermediate result, a third operand, at least one other operand, or a combination thereof, and provide a first output ; and a second MAC unit including a second multiplier operable to receive and multiply fourth and fifth operands and provide a second intermediate result, a second register operable to store the second intermediate result and provide a stored second intermediate result, a first adder operable to add the stored second intermediate result with either zero or the stored first intermediate result from the first MAC unit, and a second adder operable to add an output of the first adder with a sixth operand and provide a second output. [0010] In another aspect, a wireless apparatus is presented comprising: a first multiply-accumulate (MAC) unit including a first multiplier operable to receive and multiply first and second operands and provide a first intermediate result, and a first arithmetic logic unit (ALU) operable to receive the first intermediate result, a third operand, and at least one additional operand, operate on the received operands, and provide a first output; a second MAC unit including a second multiplier operable to receive and multiply fourth and fifth operands and provide a second intermediate result, a first adder operable to add the second intermediate result with either zero or the first intermediate result from the first MAC unit, and a second adder operable to add an output of the first adder with a sixth operand and provide a second output; an ALU path including a shifter operable to receive and shift a seventh operand or an eighth operand and provide a third intermediate result, and a second ALU operable to operate on the third intermediate result, the seventh operand, the eighth operand, or a combination thereof, and provide a third output; and a register file operable to provide the first through eighth operands for the first and second MAC units and the ALU path and to store the first through third outputs from the first and second MAC units and the ALU path. <Desc/Clms Page number 4> BRIEF DESCRIPTION OF THE DRAWINGS [0011] FIG. 1 shows a DSP with two MAC units and one ALU. [0012] FIG. 2 shows a pipelined DSP with two MAC units and one ALU. [0013] FIG. 
3 shows a configurable DSP with two MAC units and two ALUs. [0014] FIG. 4 shows a configurable pipelined DSP with two MAC units and two ALUs. [0015] FIG. 5 shows another configurable pipelined DSP with two MAC units and two ALUs. [0016] FIGS. 6, 7 and 8 show the DSP of FIG. 5 operating in the dual-MAC/single-ALU, single-MAC/dual-ALU, and dual-MAC/dual-ALU configurations, respectively. [0017] FIG. 9 shows a wireless device in a wireless communication system. DETAILED DESCRIPTION [0018] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. FIG. 1 shows a block diagram of a DSP 100 with two MAC units and one ALU. A register file 110 contains a bank of general-purpose registers that can be used to store operands and results for the MAC units and ALU. Register file 110 couples to and exchanges data with a memory unit (not shown in FIG. 1). For the embodiment shown in FIG. 1, register file 110 has three input ports labeled as PI1 through PI3 and eight output ports labeled as PO1 through PO8. In general, a register file can have any number of input and output ports. [0020] For the first MAC unit (MAC1), a multiplier 122a receives and multiplies two operands from output ports PO4 and PO5 of register file 110 and provides a result to one input of an adder 140a. Adder 140a receives another operand from output port PO6, adds the two input operands, and provides an output to input port PI2 of register file 110. A multiplexer 128 receives the output of multiplier 122a and a value of zero on two inputs and provides either the multiplier output or zero depending on a multiplexer control (MC). [0021] For the second MAC unit (MAC2), a multiplier 122b receives and multiplies two operands from output ports PO2 and PO3 of register file 110 and provides its result to one input of an adder 130. Adder 130 also receives the output of multiplexer 128, adds the two input operands, and provides an output to one input of an adder 140b. Adder 140b receives another operand from output port PO1, adds the two input operands, and provides an output to input port PI1 of register file 110. For the ALU path, a shifter 154 receives two inputs from output ports PO7 and PO8 of register file 110 and a third input from an intermediate bus. The intermediate bus transfers immediate values embedded in an instruction to the ALU. Shifter 154 selects one of the three inputs, shifts the operand from the selected input by a specified number of bits (e.g., 0, 1, 2, or 3 bits to the left), and provides an output to one input of multiplexers 158a and 158b. Multiplexer 158a also receives an operand from output port PO7 and provides one of its two inputs to one input of an ALU 160. Multiplexer 158b also receives the immediate values from the intermediate bus and provides one of its two inputs to the other input of ALU 160. ALU 160 operates on its input operands and provides an output to input port PI3 of register file 110. [0023] The units within DSP 100 may be designed with any number of bits. As an example, multipliers 122a and 122b may be 16x16 bit multipliers, adder 130 may be a 32-bit adder, adders 140a and 140b may be 40-bit adders, and shifter 154 and ALU 160 may be 40-bit units. Similarly, register file 110 may be designed with any number of bits for its input and output ports. 
As an example, output ports PO1, PO6, and PO7 may provide 40-bit operands, output ports PO2, PO3, PO4, and PO5 may provide 16-bit operands, output port PO8 may provide 16-bit or 40-bit operands, and input ports PI1, PI2, and PI3 may receive 40-bit results. The above are exemplary values, and other bit widths may also be used. [0024] DSP 100 may be configured to operate as either two independent MAC units or two coupled MAC units. For the independent dual-MAC configuration, multiplexer 128 is controlled to pass the zero value to adder 130, and MAC1 and MAC2 operate independently and can perform two MAC operations simultaneously on different sets of operands. For the coupled dual-MAC configuration, multiplexer 128 is controlled to pass the output of multiplier 122a, and MAC1 and MAC2 collectively perform the computation: (B * C) + (D * E) or A + (B * C) + (D * E), where A through E are operands from output ports PO1 through PO5, respectively. These two computations are very useful for complex multiply and accumulate operations. [0025] It is highly desirable to increase the speed of the clock for the DSP in order to improve processing capability per unit time (i.e., to perform more operations per second). For example, if the clock speed can be increased by 50%, then 50% more operations may be performed per second with the same hardware. However, since the coupled dual-MAC path and the ALU path each have multiple operations in series on their critical paths, the DSP architecture shown in FIG. 1 does not scale well as the clock speed is increased. The coupled dual-MAC path has a multiply and two addition operations in its critical path through multiplier 122a or 122b and adders 130 and 140b. The ALU path has a shift and an addition operation in its critical path. These operations require some time to complete and will thus limit the clock speed that may be used for the DSP. [0026] FIG. 2 shows a block diagram of a pipelined DSP 102 with two MAC units and one ALU. DSP 102 includes all of the elements of DSP 100 shown in FIG. 1. DSP 102 further includes (1) a register 124a coupled between multiplier 122a and adder 140a, (2) a register 124b coupled between multiplier 122b and adder 130, and (3) a register 156 coupled between shifter 154 and multiplexers 158a and 158b. [0027] Registers 124a, 124b, and 156 are pipeline registers inserted in the critical paths of MAC1, MAC2, and the ALU path, respectively. These registers break up the critical paths and allow DSP 102 to be clocked at a higher rate. An execution cycle for DSP 102 is broken into two pipeline stages. In the first pipeline stage, multipliers 122a and 122b fetch operands from register file 110, perform multiply operations, and store their results in registers 124a and 124b, respectively. Similarly, for the ALU path, shifter 154 receives inputs from register file 110 and/or the intermediate bus, performs shifts as specified, and stores results in register 156. In the second pipeline stage, the adders in MAC1 and MAC2 and ALU 160 in the ALU path are active. For the independent dual-MAC configuration, adder 140a adds the output of register 124a with an operand from output port PO6 and provides an output to input port PI2, and adder 140b adds the output of register 124b with an operand from output port PO1 and provides an output to input port PI1. 
For the coupled dual-MAC configuration, adder 130 adds the outputs of registers 124a and 124b, and adder 140b adds the output of adder 130 and the operand from output port PO1 and provides an output to input port PI1. For the ALU path, ALU 160 receives the output of register 156 and/or operands from output port PO7 and the intermediate bus, operates on the input operands, and provides an output to input port PI3. [0028] DSP 102 can provide all of the functionalities of DSP 100. However, DSP 102 may be clocked at a faster rate than DSP 100 (up to twice as fast) because the critical paths in DSP 102 are broken up with pipeline registers. This then allows DSP 102 to achieve a higher overall throughput than DSP 100. A pipeline register may also be inserted between adders 130 and 140b to further break up this path, if it is a new critical path with a much longer delay than all other paths in DSP 102. In this case, the execution cycle for DSP 102 would be broken up into three pipeline stages. [0029] The DSP architecture shown in FIG. 1 has limited configurability and does not fit all types of signal processing computations. DSP 100 can perform two (either independent or combined) MAC operations and one ALU operation in parallel. For some applications, it may be preferable to have two ALU operations and a single MAC operation occur in parallel, or to have two MAC operations and two ALU operations all occur in parallel. Applications that favor two ALU operations in parallel include sum of absolute difference (SAD) metric computations for motion estimation in video compression, template comparison in voice recognition, and path distance calculations in Viterbi decoding, all of which are known in the art. FIG. 3 shows a block diagram of a configurable DSP 104 with two MAC units and two ALUs. DSP 104 includes most of the elements of DSP 100 shown in FIG. 1. DSP 104 further includes multiplexers 142a and 142b and an ALU 150 that replaces adder 140a in DSP 100. [0031] For the embodiment shown in FIG. 3, multiplexer 142a receives the output of multiplier 122a and operands from output port PO5 and the intermediate bus. Multiplexer 142a selects one of its three inputs and provides the operand from the selected input to one input of ALU 150. Multiplexer 142b receives operands from output ports PO4 and PO6, selects one of two inputs, and provides the operand from the selected input to another input of ALU 150. ALU 150 can perform logical and bit-manipulation operations along with addition and subtraction operations on its input operands and provides an output to input port PI2. FIG. 3 shows the use of configurable output ports PO4, PO5, and PO6 of register file 110 to support MAC1 and ALU 150. This reduces the number of output ports needed to support the MAC and ALU, which can simplify the design of the register file. FIG. 3 also shows a specific example for connecting ALU 150 to the output ports of register file 110 and to other units in DSP 104. Other connections are also possible. For example, multiplexers 142a and 142b may have more inputs to receive more operands and/or may receive operands from different output ports of register file 110. [0033] DSP 104 may be operated in various configurations, which are listed in Table 1. These various configurations may be selected by appropriately setting the connections for the various units within DSP 104, for example, using DSP instructions. 
The configuration for DSP 104 may be changed dynamically, for example, on an instruction by instruction basis.
Table 1
             Single MAC   Dual MAC
Single ALU   Supported    Supported
Dual ALU     Supported    Supported
For DSP 104, some of the operands are shared in some of the configurations because of the limited number of output ports and connections. [0034] The flexibility to operate the DSP in various configurations allows the DSP to better adapt and fit various types of signal processing operations. This then allows for better utilization of the available hardware and higher overall throughput. The various configurations for the DSP are illustrated below. [0035] FIG. 4 shows a block diagram of a configurable pipelined DSP 106 with two MAC units and two ALUs. DSP 106 includes all of the elements of DSP 104 shown in FIG. 3. DSP 106 further includes pipeline registers 124a, 124b, and 156 that are placed at the output of multipliers 122a and 122b and shifter 154, respectively. DSP 106 can support all of the configurations shown in Table 1 for DSP 104. However, DSP 106 can be operated at a higher clock speed than DSP 104 because pipeline registers 124a, 124b, and 156 break up the critical paths for MAC1, MAC2, and the ALU path, respectively. [0036] The DSP datapath may be designed with more units and/or connections than that shown in FIGS. 3 and 4 to achieve even greater flexibility and functionality. Moreover, the register file may be designed with additional output ports to support greater flexibility in selecting operands. FIG. 5 shows a block diagram of another configurable pipelined DSP 108 with two MAC units and two ALUs. DSP 108 includes most of the elements of DSP 106 shown in FIG. 4. However, DSP 108 includes a register file 112 having ten output ports that replaces register file 110 having eight output ports. DSP 108 further includes additional units and connections for MAC1 and MAC2 and the ALU path, as described below. [0038] For MAC1, a shifter 126a receives the output of register 124a, shifts its input operand by a specified number of bits, and provides an output to one input of multiplexers 128 and 142a. Multiplexer 142a also receives operands from output ports PO4, PO5, and PO7 and the intermediate bus. Multiplexer 142a provides one of its five inputs to one input of ALU 150. [0039] For MAC2, a shifter 126b receives the output of register 124b, shifts its input operand by a specified number of bits, and provides an output to adder 130. A shifter 132 receives the operand from output port PO1, shifts its input operand by a specified number of bits, and provides an output to one input of a multiplexer 134. Multiplexer 134 also receives values of '0' and '0x8000' and provides one of its three inputs to adder 140. In particular, multiplexer 134 provides the '0' value when no addition is required for adder 140, the '0x8000' value for rounding, and the operand from output port PO1 when accumulation is performed. For the ALU path, a multiplexer 152 receives operands from output port PO8 and the intermediate bus and provides an output to shifter 154. Shifter 154 also receives an operand from output port PO7, selects one of two inputs, shifts the operand from the selected input by a specified number of bits, and provides an output to register 156. 
Multiplexer 158a receives the output of register 156 and an operand from output port PO9, selects one of two inputs, and provides the operand from the selected input to one input of ALU 160. Multiplexer 158b receives operands from output port PO10 and the intermediate bus, selects one of two inputs, and provides the operand from the selected input to the other input of ALU 160. ALU 160 operates on its input operands and provides an output to a multiplexer 164. A shifter 162 receives operands from output port PO9 and multiplexer 158b at two inputs, selects one of the two inputs, shifts the operand from the selected input by a specified number of bits, and provides an output to multiplexer 164. Multiplexer 164 provides one of two inputs to an ALU saturation unit 166, which saturates the received value and provides the saturated value to input port PI3. [0041] Shifters 126a, 126b, and 132 are provided in MAC1 and MAC2 to handle numbers of different orders of magnitude. Shifters 154 and 162 are provided in the ALU path for shift operations. Each of these shifters may be individually configured to shift its input operand, for example, by 0, 1, 2, or 3 bits to the left, or by some other range of bit shifts. Multiplexer 134 supplies additional accuracy by providing '0x8000' for rounding, which supplies an additional half-bit of precision. [0042] DSP 108 has the following differences from DSP 100 in FIG. 1. First, pipeline registers 124a and 124b are inserted at the outputs of multipliers 122a and 122b in MAC1 and MAC2, respectively, and pipeline register 156 is inserted at the output of shifter 154 in the ALU path. Second, adder 140a in MAC1 has been replaced with ALU 150, which can perform logical and bit-manipulation operations along with addition and subtraction. Third, shifter 162 and two additional output ports PO9 and PO10 in register file 112 have been added for the ALU path. Fourth, various new connections are now feeding into ALU 150 for MAC1. [0043] DSP 108 can support all of the configurations shown in Table 1 for DSP 104 in FIG. 3. DSP 108 can support various types and combinations of operations because of the additional shifters, multiplexers, output ports, and connections. DSP 108 can also support a higher clock speed because pipeline registers 124a, 124b, and 156 break up the critical paths for MAC1, MAC2, and the ALU path, respectively. [0044] FIG. 6 shows DSP 108 operating in the dual-MAC/single-ALU configuration. In this configuration, MAC1 and MAC2 may be operated independently or in combination by appropriately controlling multiplexer 128. ALU 150 receives the output of shifter 126a (via multiplexer 142a, which is not shown in FIG. 6 for clarity) and an operand from output port PO6 (via multiplexer 142b, which is also not shown). For this configuration, ALU 150 functions as an adder and performs addition on the two input operands. [0045] FIG. 7 shows DSP 108 operating in the single-MAC/dual-ALU configuration. In this configuration, MAC1 is bypassed and MAC2 is operational. Multiplexer 142a can receive operands from output ports PO4, PO5, and PO7 and the intermediate bus, select one of the four inputs, and provide the operand from the selected input to one input of ALU 150. Multiplexer 142b can receive operands from output ports PO4 and PO6, select one of the two inputs, and provide the operand from the selected input to the other input of ALU 150. ALU 150 can perform any ALU operation on its input operands. [0046] FIG. 
8 shows DSP 108 operating in the dual-MAC/dual-ALU configuration. In this configuration, MAC1 and MAC2 are operated in the coupled dual-MAC configuration, and multiplexer 128 is omitted for clarity. Multiplexer 142a can receive operands from output port PO7 and the intermediate bus, select one of the two inputs, and provide the operand from the selected input to one input of ALU 150. ALU 150 can also receive an operand from output port PO6 at its other input and perform any ALU operation on its input operands. DSPs 104 and 106 can also be operated in the dual-MAC/single-ALU, single-MAC/dual-ALU, and dual-MAC/dual-ALU configurations, in a manner similar to that shown in FIGS. 6, 7, and 8 for DSP 108. However, the connections for DSPs 104 and 106 for these various configurations would be different from the connections for DSP 108, since DSPs 104 and 106 have fewer connections, output ports, and multiplexers than DSP 108. [0048] The configurable architectures for DSPs 104, 106, and 108 allow these DSPs to perform various types and combinations of computations in a single instruction. For example, the following computation types and combinations may be performed by these DSPs in one instruction: A = B + C; D = E + F; G = H + (I*J). A = B + C; D = E - F; G = H + (I*J) + (K*L). A = (B << 3) + C; D = E & F; G = H - (I*J). The input operands for the computations shown above can come from the output ports of the register file and the intermediate bus. The three results A, D, and G for the computations can be provided to the three input ports of the register file. Many other computation types and combinations can also be performed by DSPs 104, 106, and 108. A sketch illustrating one such instruction appears at the end of this description. [0049] The configurable architectures for DSPs 104, 106, and 108 are more suitable for all types of signal processing operations than the architecture for DSP 100 because they support all of the parallel combinations shown in Table 1. The configurable and/or pipeline DSPs described herein may be used for various applications including wireless communication, computing, networking, personal electronics, and so on. An exemplary use of the DSPs for wireless communication is described below. FIG. 9 shows a block diagram of a wireless device 900 in a wireless communication system. Wireless device 900 may be a cellular phone, a handset, a terminal, a mobile station, or some other device or design. The wireless communication system may be a Code Division Multiple Access (CDMA) system, a Global System for Mobile Communications (GSM) system, a multiple-input multiple-output (MIMO) system, an orthogonal frequency division multiplexing (OFDM) system, an orthogonal frequency division multiple access (OFDMA) system, and so on. Wireless device 900 is capable of providing bi-directional communication via a receive path and a transmit path. [0052] For the receive path, signals transmitted by base stations in the system are received by an antenna 912, routed through a duplexer (D) 914, and provided to a receiver unit (RCVR) 916. Receiver unit 916 conditions (e.g., filters, amplifies, and frequency downconverts) the received signal, digitizes the conditioned signal, and provides data samples to a DSP 920 for further processing. For the transmit path, data to be transmitted from wireless device 900 is provided by DSP 920 to a transmitter unit (TMTR) 918. Transmitter unit 918 conditions (e.g.
, filters, amplifies, and frequency upconverts) the data and generates a modulated signal, which is routed through duplexer 914 and transmitted via antenna 912 to the base stations. [0053] DSP 920 includes various units such as, for example, register file 930, MAC units 932, ALUs 934, an internal controller 940, and an internal memory unit 942, all of which are coupled via an internal bus. Internal controller 940 executes instructions that direct MAC units 932 and ALUs 934 to perform various computations. For example, DSP 920 may perform encoding, interleaving, modulation, code channelization, spectral spreading, filtering, and so on, for the transmit path. DSP 920 may perform filtering, despreading, channelization, demodulation, deinterleaving, decoding, and so on, for the receive path. These various operations are known in the art. The specific processing to be performed by DSP 920 is dependent on the communication system. Register file 930, MAC units 932, and ALUs 934 may be implemented with any of the DSP architectures shown in FIGS. 2, 3, 4 and 5. [0054] Controller 950 controls the operation of DSP 920 and other units within wireless device 900. The other units are not shown in FIG. 9 since they do not contribute to the understanding of the various embodiments. Memory units 942 and 952 store program code and data used by controllers 940 and 950, respectively. FIG. 9 shows an exemplary design of a wireless device in which the configurable and/or pipeline DSPs described herein may be used. These DSPs may also be used in other electronic devices. [0056] The configurable and/or pipeline DSP architectures described herein may be implemented in various hardware units. For example, these DSP architectures may be implemented in an application specific integrated circuit (ASIC), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a micro-controller, a microprocessor, and other electronic units. [0057] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
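The single-instruction computation combinations listed above can be sketched as follows; this Python fragment is an illustration only, with a hypothetical function name, and models one issue slot of the dual-MAC/dual-ALU configuration rather than any actual instruction encoding:

# Illustrative model of one dual-MAC/dual-ALU instruction computing
# A = B + C; D = E - F; G = H + (I*J) + (K*L) in parallel.
def dual_mac_dual_alu(b, c, e, f, h, i, j, k, l):
    alu1 = b + c             # first ALU (replaces MAC1's adder) -> A
    alu2 = e - f             # second ALU in the ALU path        -> D
    mac = h + i * j + k * l  # coupled dual-MAC accumulate       -> G
    return alu1, alu2, mac   # written back to the register file's input ports

assert dual_mac_dual_alu(1, 2, 9, 4, 10, 2, 3, 4, 5) == (3, 5, 36)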
Disclosed is a memory controller coupled to the memory sub-system of a computing device via an interface. The controller monitors the operation of the memory and compares the operation of the memory to a plurality of performance threshold values. The controller then modifies the operating voltage and/or frequency of the memory based on the result of the comparison. The controller may also modify the memory powerdown policy and the memory prefetch policy. The voltage and/or frequency may be modified based on the latency sensitivity of the executing application. The host computing device may run an operating system that provides feedback to the controller to modify the performance thresholds. The monitored characteristics may be the amount of time that the processor is stalled by the memory or the memory bandwidth utilisation. |
CLAIMS What is claimed is: 1. A system comprising: a processing core to execute an application; a memory sub-system; and a memory controller coupled with the processing core and the memory sub-system, the memory controller to monitor operation of the memory sub-system, to compare the monitored operation to one or more performance thresholds, and to modify operating voltage and/or operating frequency parameters for the memory sub-system based on at least monitored performance characteristics of the memory sub-system. 2. The system of claim 1 wherein the memory controller further modifies a memory powerdown policy and a memory prefetch policy based on at least monitored performance characteristics of the memory sub-system. 3. The system of claim 1 wherein the memory controller further modifies the operating voltage and/or the operating frequency parameters for the memory sub-system based at least on a detected latency sensitivity of the application. 4. The system of claim 1 wherein the memory controller further modifies a pre-fetch policy based at least on the monitored performance characteristics of the memory sub-system. 5. The system of claim 1 wherein the memory controller and the memory sub-system interoperate to cause the memory sub-system to operate in one of four operational states as defined by the operating voltage and the operating frequency parameters. 6. The system of claim 1 wherein the processing core further executes an operating system, the operating system to provide feedback to be utilized by the memory controller to modify the operating voltage and/or the operating frequency parameters for the memory sub-system. 7. The system of claim 6 wherein the operating system feedback comprises at least modifying the one or more performance thresholds. 8. The system of claim 1 wherein the monitored performance characteristics of the memory sub-system comprise at least an amount of time the processing core is stalled by memory sub-system latency. 9. The system of claim 1 wherein the monitored performance characteristics of the memory sub-system comprise at least memory sub-system bandwidth utilization. 10. A method comprising: monitoring operating characteristics of a memory sub-system within an electronic device; comparing the monitored operating characteristics of the memory sub-system with a plurality of performance threshold values; and modifying operating voltage and operating frequency of the memory sub-system based at least on the comparison of the monitored operating characteristics and the plurality of thresholds. 11. The method of claim 10 further comprising modifying a memory powerdown policy and a memory prefetch policy based on at least monitored performance characteristics of the memory sub-system. 12. The method of claim 10 wherein modifying the operating voltage and the operating frequency parameters for the memory sub-system is based at least on a detected latency sensitivity of the application. 13. The method of claim 10 further comprising modifying a pre-fetch policy based at least on the monitored performance characteristics of the memory sub-system. 14. The method of claim 10 wherein a memory controller and a memory sub-system interoperate to cause the memory sub-system to operate in one of at least four operational states as defined by the operating voltage and the operating frequency parameters. 15. 
The method of claim 10 further comprising executing an operating system, the operating system to provide feedback to be utilized by the memory controller to modify the operating voltage and/or the operating frequency parameters for the memory sub-system. 16. The method of claim 15 wherein the operating system feedback comprises at least modifying the one or more performance thresholds. 17. The method of claim 10 wherein the monitored performance characteristics of the memory sub-system comprise at least an amount of time the processing core is stalled by memory sub-system latency. 18. The method of claim 10 wherein the monitored performance characteristics of the memory sub-system comprise at least memory sub-system bandwidth utilization. 19. An apparatus comprising: an interface to communicate with a memory sub-system; performance threshold storage to store one or more performance threshold values; and memory controller circuitry coupled with the interface and the performance threshold storage, the memory controller circuitry to monitor operation of the memory sub-system, to compare the monitored operation to the one or more performance threshold values, and to modify operating voltage and/or operating frequency parameters for the memory sub-system based on at least monitored performance characteristics of the memory sub-system. 20. The apparatus of claim 19 wherein the memory controller circuitry further modifies a memory powerdown policy and a memory prefetch policy based on at least monitored performance characteristics of the memory sub-system. 21. The apparatus of claim 19 wherein the memory controller circuitry further modifies the operating voltage and/or the operating frequency parameters for the memory sub-system based at least on a detected latency sensitivity of the application. 22. The apparatus of claim 19 wherein the memory controller circuitry further modifies a pre-fetch policy based at least on the monitored performance characteristics of the memory sub-system. 23. The apparatus of claim 19 wherein the memory controller circuitry and the memory sub-system interoperate to cause the memory sub-system to operate in one of four operational states as defined by the operating voltage and the operating frequency parameters. 24. The apparatus of claim 19 wherein the monitored performance characteristics of the memory sub-system comprise at least an amount of time the processing core is stalled by memory sub-system latency. 25. The apparatus of claim 19 wherein the monitored performance characteristics of the memory sub-system comprise at least memory sub-system bandwidth utilization. 26. A method comprising: monitoring a memory scaling factor (MSF) for one or more processing cores, aggregate channel bandwidth from a memory controller, and operating system performance versus power bias in a host system; comparing the MSF, aggregate channel bandwidth and operating system performance versus power bias with a plurality of performance threshold values; and modifying one or more of memory operating frequency, memory operating voltage, powerdown policy, and prefetching policy of a memory sub-system of the host system based at least on the comparison of the MSF, aggregate channel bandwidth and operating system performance versus power bias with the plurality of performance threshold values.
INTELLECTUAL PROPERTY OFFICE
Application No: GB 1109041.2  Examiner: Mr David Maskery  Claims searched: 1-26  Date of search: 28 September 2011
Patents Act 1977: Search Report under Section 17
Documents considered to be relevant:
Category X,Y; relevant to claims X: 1, 3, 9, 10, 12, 18, 19, 21, 25 and Y: 2, 4, 5, 8, 11, 13, 14, 17, 20 and 22 - US 6263448 B (TSERN et al). See whole document.
Category X,Y; relevant to claims X: 1, 3, 10, 12, 19, 21 and Y: 2, 4, 5, 8, 11, 13, 14, 17, 20 and 22 - US 2009/0070605 A (NIJHAWAN et al). See whole document.
Category X,Y; relevant to claims X: 1, 3, 10, 12, 19, 21 and Y: 2, 4, 5, 8, 11, 13, 14, 17, 20 and 22 - GB 2432695 A (AMD). See whole document.
Category X,Y; relevant to claims X: 1, 3, 10, 12, 19, 21 and Y: 2, 4, 5, 8, 11, 13, 14, 17, 20 and 22 - US 6112310 B (JUN et al). See whole document.
Category Y; relevant to claims 2, 4, 11, 13, 20 and 22 - US 2009/0119471 A (HUR et al). See paragraph 38.
Category Y; relevant to claims 2, 4, 11, 13, 20 and 22 - US 2008/0183903 A (VANSTEE et al). See paragraph 31.
Category Y; relevant to claims 2, 4, 11, 13, 20 and 22 - US 2006/0174228 A (RADHAKRISHNAN et al). See paragraphs 9 and 23-25.
Category Y; relevant to claims 5, 14 and 23 - US 2008/0215903 A (PAYNE). See paragraphs 26-31.
Category Y; relevant to claims 8, 17 and 24 - WO 2009/125789 A (NEC CORP). See US 2011/0022876.
Category &; relevant to claims 8, 17 and 24 - US 2011/0022876 A (SASAKI). English language version of WO 2009/125789.
Categories: X: Document indicating lack of novelty or inventive step. Y: Document indicating lack of inventive step if combined with one or more other documents of the same category. A: Document indicating technological background and/or state of the art. P: Document published on or after the declared priority date but before the filing date of this invention. &: Member of the same patent family. E: Patent document published on or after, but with priority date earlier than, the filing date of this application.
Field of Search: Search of GB, EP, WO & US patent documents classified in the following areas of the UKC; worldwide search of patent documents classified in the following areas of the IPC: G06F. The following online and other databases have been used in the preparation of this search report: EPODOC, WPI. International Classification: G06F 1/32 (valid from 01/01/2006). |
MEMORY POWER MANAGEMENT VIA DYNAMIC MEMORY OPERATION STATES TECHNICAL FIELD Embodiments of the invention relate to operational management of electronic devices. More particularly, embodiments of the invention relate to techniques for adaptively adjusting operational states of electronic devices. BACKGROUND Enterprise server systems as well as other electronic systems have seen an increased focus on energy efficiency and energy proportional computing in the last several years. Managing memory power is critical to the overall efficiency in these platforms given the capacity and bandwidth requirements of server processors. As the number of processing cores continues to increase and integration of throughput computing and input/output (I/O) capabilities accelerates, this trend is expected to intensify, making memory power management a key element of platform energy efficiency. One approach is to focus on reducing idle memory power through aggressive support of power-down and self-refresh states, leading to significant improvements in memory power efficiency. BRIEF DESCRIPTION OF THE DRAWINGS Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements. Figure 1 is an example memory power/performance table for an embodiment having four memory power states and two memory modules. Figure 2 provides example bandwidth vs. latency curves for an example memory sub-system operating at 800 MHz, 1066 MHz and 1333 MHz. Figure 3 is a block diagram of one embodiment of an electronic system. Figure 4 is a flow diagram of one embodiment of a technique for dynamic selection and modification of memory operational states. DETAILED DESCRIPTION In the following description, numerous specific details are set forth. However, embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description. Described herein are techniques for dynamic memory frequency/voltage scaling to augment existing power states and further improve memory power efficiency. Each frequency/voltage operating point is defined as an H-state, similar to processor P-states. In one embodiment, H-state control policies are implemented in hardware. Described herein are techniques to obtain, within the memory controller, a memory scaling factor that dynamically captures workload sensitivity to memory latency and guides H-state transition decisions. In one embodiment, workloads that can tolerate higher memory latency run at lower frequency, improving platform energy efficiency, while workloads that are sensitive to memory latency run at higher speeds, taking full advantage of performance capabilities available in the platform. Within the memory controller, a process may be periodically scheduled to assess the memory operating condition and to select the appropriate H-state for the next time interval. This process might be executed, for example, at 1 ms granularity. The concept of H-states can be applied beyond frequency/voltage, for example, by defining H-states with different prefetch policies, powerdown policies and powerdown depth. In alternate embodiments, a technique may be provided for operating system interaction to support a "hardware managed/operating system guided" power management paradigm. 
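As a sketch of how H-state definitions of the kind summarized in Figure 1 might be tabulated in software, the following Python fragment may help; the field values are invented placeholders (real entries are platform dependent, as noted above), and the HState name is hypothetical:

# Hypothetical H-state table in the spirit of Figure 1; the values below are
# invented placeholders, since the actual entries are platform dependent.
# Lower-numbered states favor performance; higher-numbered states favor
# power savings.
from dataclasses import dataclass

@dataclass(frozen=True)
class HState:
    name: str
    freq_mhz: int          # memory operating frequency
    rank_idle_timer: int   # idle clocks before rank powerdown
    powerdown_policy: str  # e.g., "none", "fast-exit", "slow-exit"

H_STATES = (
    HState("H0", 1333, 0, "none"),       # maximum performance
    HState("H1", 1333, 64, "fast-exit"),
    HState("H2", 1066, 32, "fast-exit"),
    HState("H3", 800, 16, "slow-exit"),  # maximum power savings
)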
The techniques described herein may be considered as defining specific memory operation states (H-states), selecting the best operation state for the next time interval given a set of observations, and reconfiguring the memory parameters according to the new operation state. Furthermore, an interface may be provided for interaction with the operating system to obtain policy information and to provide Quality of Service (QoS) feedback. The memory management/configuration process (observing, selecting, reconfiguring) might be performed at a time cadence of, for example, 1 ms, or at a different cadence (higher or lower) dependent on specific requirements. Figure 1 is an example memory power/performance table for an embodiment having four memory power states and two memory modules. An example of H-state definitions for two Dual In-line Memory Modules (DIMMs) per channel (e.g., DDR3) is given in Figure 1. The specific set of H-states (not limited to four as in this example) and their specific configurations will be platform dependent and may be determined through, for example, optimization studies during the architecture design phase. The technique described with respect to Figure 1 provides an approach for determining workload memory scalability, called the Memory Scaling Factor (MSF), that can be used to control memory frequency and voltage scaling. The two performance characteristics of a memory sub-system are its bandwidth capability and latency. For closed page operation without any powerdown, the relationship between bandwidth and latency is very well described by the queuing equation: Latency = IdleLatency + slope * Bandwidth / (PeakSustainedBandwidth - Bandwidth) where IdleLatency represents the latency observed on an idle memory subsystem, Bandwidth represents the current memory bandwidth usage, and PeakSustainedBandwidth represents the highest sustainable bandwidth for the memory device. As long as the memory sub-system is capable of delivering the bandwidth required by the application, the critical performance factor is the latency. Figure 2 provides example bandwidth vs. latency curves for an example memory sub-system operating at 800 MHz, 1066 MHz and 1333 MHz. In the example of Figure 2, the memory running at 1333 MHz has lower latency than at 1066 MHz, which has lower latency than at 800 MHz. In one embodiment, latency as a function of bandwidth is calculated for each of the frequencies at which the memory may operate and the result is stored for later use. As the memory latency increases, the processor core's clocks per instruction (CPI) increases. CPI is related to memory latency by: CPI = CPI_core + MPI * BlockingFactor * MemoryLatency where MPI represents misses per instruction and BlockingFactor is a number between 0 and 1 corresponding to the percentage of misses that stall the processor core. A Memory Scaling Factor (MSF) may be utilized for memory operation state selection purposes. MSF may be defined as: MSF = %ΔCPI / %ΔMemoryLatency or as: MSF = ((CPI_N - CPI_MaxFreq) / CPI_MaxFreq) / ((Latency_N - Latency_MaxFreq) / Latency_MaxFreq) A small MSF value implies a high CPI_core, low MPI and/or low blocking factor, and an application that is relatively insensitive to memory latency. A high MSF implies low CPI_core, high MPI and/or high blocking factor, and an application that is sensitive to memory latency. The MSF can be determined in different ways. 
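The latency, CPI and MSF relationships defined above can be transcribed directly into a few Python helpers; the function names are hypothetical, and the closing example assumes an application whose CPI rises 5% when memory latency rises 20%:

# Direct transcription of the formulas above (hypothetical helper names).
def latency(bandwidth, idle_latency, slope, peak_sustained_bw):
    # Latency = IdleLatency + slope * Bandwidth / (PeakSustainedBandwidth - Bandwidth)
    return idle_latency + slope * bandwidth / (peak_sustained_bw - bandwidth)

def cpi(cpi_core, mpi, blocking_factor, memory_latency):
    # CPI = CPI_core + MPI * BlockingFactor * MemoryLatency
    return cpi_core + mpi * blocking_factor * memory_latency

def msf(cpi_n, cpi_max_freq, latency_n, latency_max_freq):
    # MSF = (percentage change in CPI) / (percentage change in memory latency)
    delta_cpi = (cpi_n - cpi_max_freq) / cpi_max_freq
    delta_latency = (latency_n - latency_max_freq) / latency_max_freq
    return delta_cpi / delta_latency

# An application whose CPI rises 5% when latency rises 20% has MSF 0.25,
# i.e., it is relatively insensitive to memory latency.
assert round(msf(1.05, 1.00, 1.20, 1.00), 2) == 0.25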
In one embodiment, MSF is determined in real time (or near real time) via a small perturbation of the memory timing parameter in the latency path. In one embodiment, this may be accomplished by toggling the value of tRCD up and down by 2 clock cycles every 10 ms. For example, if the tRCD of the DIMM is specified at 9 clock cycles, the tRCD may be varied between 8 and 10 clock cycles. This periodic variation in the memory latency provides a signal in the thread's CPI. In one embodiment, by utilizing filtering and weighted rolling averaging, it is possible to extract an MSF. In one embodiment, the "read round trip delay" may be toggled up and down by, for example, one or two clock cycles. This may be effective for both "memory closed page" and "memory open page" policies. In another embodiment, where a "core memory stall counter" is divided by a "core running clocks counter," the MSF may be determined directly for a core. This last embodiment does not require toggling of a timing parameter. In various embodiments, the formulas, for the Nth sample and threads 0 to M, are: MSF_N = (CPI_high_tRCD_N - average(CPI_low_tRCD_2, ..., CPI_low_tRCD_N)) / ((2 clocks * 1.5 ns/clock) / MeasuredLatency) where the term "1.5 ns" is dependent on the clock frequency and may be changed for different frequencies. Another formula that may be utilized is: MSF_N = max(MSFthread_N,0 * Util_N,0, ..., MSFthread_N,M * Util_N,M) where MSFthread indicates the MSF for a particular thread and Util indicates memory utilization by the thread. The performance impact of running at different memory frequencies can be summarized by: %CPI_impact = MSF * %Latency_impact where %CPI_impact represents the percentage increase in CPI caused by running at a lower frequency or H-state, and %Latency_impact is the percentage increase in memory read latency caused by running at a lower frequency or H-state. In one embodiment, a Maximum Performance Hit (MPH) parameter is selected that may be utilized to select the memory operating frequency that meets the selected MPH. In one embodiment that utilizes the 800 MHz, 1066 MHz and 1333 MHz memory devices, the latency impacts may be determined as: %Latency_impact_1066 = Latency_1066 / Latency_1333 and %Latency_impact_800 = Latency_800 / Latency_1333 from the latencies corresponding to the observed bandwidth as stored in the manner described above. In one embodiment, the memory controller (or other system component) may select the lowest H-state that satisfies: %CPI_impact <= MPH Previously, for active memory, only a single memory state was supported. That is, at boot time the BIOS code sets all memory parameters to a fixed value according to selected BIOS options. Thus, in these previous memory sub-systems, memory frequency and voltage, powerdown states and policies and prefetch policies are static. Generally, memory is configured to provide highest performance regardless of energy costs. Utilizing the techniques and mechanisms described herein, memory operation can be dynamically customized to provide reduced energy consumption without the excessive performance penalties that would be required by a static configuration targeting power savings. Returning to Figure 1, the parameters described above can be utilized to select an operational state (H-state) for the memory system. The parameters of Figure 1 illustrate four example operational states (H0, H1, H2 and H3) 110 that provide different combinations of operating frequency 120, rank idle time values 130, and rank powerdown policies 140. 
The upper table rows (lower numerical H-state values) provide higher memory system performance, while the lower table rows (higher numerical H-state values) provide lower memory system performance. Figure 3 is a block diagram of one embodiment of an electronic system. The electronic system illustrated in Figure 3 is intended to represent a range of electronic systems (either wired or wireless) including, for example, servers, desktop computer systems, laptop computer systems, cellular telephones, personal digital assistants (PDAs) including cellular-enabled PDAs, and set top boxes. Alternative electronic systems may include more, fewer and/or different components. Electronic system 300 includes bus 305 or other communication device to communicate information, and processor 310 coupled to bus 305 that may process information. While electronic system 300 is illustrated with a single processor, electronic system 300 may include multiple processors and/or co-processors and/or multiple processing cores. Electronic system 300 further may include random access memory (RAM) or other dynamic storage device 320 (referred to as memory) coupled to bus 305, which may store information and instructions that may be executed by processor 310. Memory 320 may also be used to store temporary variables or other intermediate information during execution of instructions by processor 310. In one embodiment, processor(s) 310 may include both a processor core and a memory controller. In alternate embodiments, the processor core(s) and memory controller may be part of different components. Memory 320 includes a memory system that may be adaptively controlled to function as described above with various operational parameters based on system conditions and/or policies. System conditions may be monitored by processor 310 and/or a memory controller. The memory controller may be part of processor 310, memory 320, or another system component. Electronic system 300 may also include read only memory (ROM) and/or other static storage device 330 coupled to bus 305 that may store static information and instructions for processor 310. Data storage device 340 may be coupled to bus 305 to store information and instructions. Data storage device 340, such as a magnetic disk or optical disc and corresponding drive, may be coupled to electronic system 300. Electronic system 300 may also be coupled via bus 305 to display device 350, such as a cathode ray tube (CRT) or liquid crystal display (LCD), to display information to a user. Alphanumeric input device 360, including alphanumeric and other keys, may be coupled to bus 305 to communicate information and command selections to processor 310. Another type of user input device is cursor control 370, such as a mouse, a trackball, or cursor direction keys, to communicate direction information and command selections to processor 310 and to control cursor movement on display 350. Electronic system 300 further may include network interface(s) 380 to provide access to a network, such as a local area network. Network interface(s) 380 may include, for example, a wireless network interface having antenna 385, which may represent one or more antenna(e). Network interface(s) 380 may also include, for example, a wired network interface to communicate with remote devices via network cable 387, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.
In one embodiment, network interface(s) 380 may provide access to a local area network, for example, by conforming to IEEE 802.11b and/or IEEE 802.11g standards, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to Bluetooth standards. Other wireless network interfaces and/or protocols can also be supported. IEEE 802.11b corresponds to IEEE Std. 802.11b-1999 entitled "Local and Metropolitan Area Networks, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: Higher-Speed Physical Layer Extension in the 2.4 GHz Band," approved September 16, 1999, as well as related documents. IEEE 802.11g corresponds to IEEE Std. 802.11g-2003 entitled "Local and Metropolitan Area Networks, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, Amendment 4: Further Higher Rate Extension in the 2.4 GHz Band," approved June 27, 2003, as well as related documents. Bluetooth protocols are described in "Specification of the Bluetooth System: Core, Version 1.1," published February 22, 2001 by the Bluetooth Special Interest Group, Inc. Associated, as well as previous or subsequent, versions of the Bluetooth standard may also be supported. In addition to, or instead of, communication via wireless LAN standards, network interface(s) 380 may provide wireless communications using, for example, Time Division Multiple Access (TDMA) protocols, Global System for Mobile Communications (GSM) protocols, Code Division Multiple Access (CDMA) protocols, and/or any other type of wireless communications protocol. Figure 4 is a flow diagram of one embodiment of a technique for dynamic selection and modification of memory operational states. The flow diagram of Figure 4 includes optional operating system involvement. Some embodiments include operating system involvement, and other embodiments can operate without it; thus, the operating system components of Figure 4 are optional. Current operating conditions are observed, 400. These operating conditions may include one or more of the conditions described above, for example, memory bandwidth utilization, current memory operating state, memory scaling factor, etc. The current operating conditions are utilized to select a next operational state for the memory system, 410. Selection of the next operational state for the memory may also include operating system guidance, 420. The operating system guidance may include, for example, performance bias values, power bias values, and/or other policy information. In one embodiment, the next operational state is selected from one of four operational states, for example, as described with respect to Figure 1. In alternate embodiments, a different number of operational states for the memory system may be supported. The memory system transitions to the new operational state, 430. Under some conditions the new operational state may be the same as the old operational state, for example, if the monitored conditions have not significantly changed. The cycle then repeats. In one embodiment the operational state is updated/changed approximately every 1 ms; however, other periods can be used. In one embodiment, after selection of the operational state, information related to the selection of the new operational state is provided to the operating system. In one embodiment, this is referred to as quality of service (QoS) feedback to the operating system, 450.
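The observe/select/transition cycle of Figure 4 might be sketched as follows. This is only an illustration under stated assumptions: the stored latency-vs-bandwidth curves are modeled as simple linear functions, all names and numbers are hypothetical, and the %CPI_impact = MSF * %Latency_impact criterion is applied literally as given above:

def latency_impact(state, bandwidth, curves):
    """Latency at this H-state relative to the fastest state (H0),
    looked up from the stored latency-vs-bandwidth curves."""
    return curves[state](bandwidth) / curves["H0"](bandwidth)

def select_h_state(msf, bandwidth, curves, mph):
    """Select the lowest H-state whose projected CPI impact stays
    within the Maximum Performance Hit (MPH) budget."""
    best = "H0"
    for state in ("H0", "H1", "H2", "H3"):    # ordered fastest to slowest
        if msf * latency_impact(state, bandwidth, curves) <= mph:
            best = state                      # lower-power state still within budget
    return best

# Hypothetical linear latency-vs-bandwidth curves for each H-state.
curves = {s: (lambda bw, base=base: base + 0.05 * bw)
          for s, base in (("H0", 60.0), ("H1", 68.0), ("H2", 75.0), ("H3", 85.0))}
print(select_h_state(msf=0.2, bandwidth=100.0, curves=curves, mph=0.25))  # "H2"

With these placeholder curves, H3 would exceed the MPH budget, so the sketch settles on H2 as the lowest acceptable state; a real implementation would refresh the MSF and bandwidth observations on each cadence interval before re-selecting.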
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting. |
Systems, methods, and apparatuses for range protection are provided. In some embodiments, an apparatus comprises at least one monitoring circuit to monitor for memory accesses to an address space and take action upon a violation to the address space, wherein the action is one of generating a notification to a node that requested the monitor, generating an error request, generating a notification in a specific context of the home node, and generating a notification in a node that has ownership of the address space; at least one protection table to store an identifier of the address space; and at least one hardware core to execute an instruction to enable the monitoring circuit. |
1. A computing device comprising:means for monitoring memory accesses to an address space and taking action upon a violation of said address space, wherein said action is one of: generating a notification to a node that requested said monitoring; generating the notification in a specific context of the home node; and generating the notification in the node having ownership of said address space;means for storing an identifier of said address space; andmeans for executing an instruction to enable the means for monitoring, wherein the instruction includes a base address, a memory size granularity, a tracking granularity, a mode, and an indication of an action to be taken.2. The computing device of claim 1, wherein the memory size granularity is one of cache lines, pages, large pages, or huge pages.3. The computing device of claim 1, wherein the tracking granularity depends on the number of node groups.4. The computing device of claim 1, wherein the mode is one of read and write.5. The computing device of any one of claims 1 to 4, further comprising:means for processing memory requests from the means for executing; andmeans for processing memory requests from the preceding means and for serving as home for a portion of the memory space of the device.6. The computing device of any one of claims 1 to 4, further comprising:means for storing remote store requests that are part of a transaction initiated by execution of the instruction.7. A method performed by a computing device, the method comprising:execution of an instruction in a core causing:sending a monitoring request to a first proxy connection to monitor accesses to an address space, wherein the monitoring request includes a base address from the instruction, a memory size granularity, a tracking granularity, a mode, a size of the address space as a multiple of the granularity, and an action, wherein the action is one of: generating a notification to the node that requested the monitoring; generating an error request; generating the notification in a specific context of the home node; and generating the notification in the node that has ownership of the address space;sending the monitoring request to a second proxy connection for the second proxy connection to broadcast the monitoring request as a multicast message to all cores in a socket of the second proxy connection to initiate monitoring; andreceiving an acknowledgment from the second proxy connection indicating success or failure of the monitoring request.8. The method of claim 7, wherein the first proxy connection and the second proxy connection are located on physically different nodes and communicate through a fabric interface.9. The method of claim 7, wherein the first proxy connection and the second proxy connection are located on physically different sockets and communicate through a socket interface.10. The method of any one of claims 7 to 9, further comprising:deregistering said monitoring.11. The method of any one of claims 7 to 9, further comprising:the second proxy connection sending messages to any interconnect agents in the node of the second proxy connection.12. The method of any one of claims 7 to 9, further comprising:receiving an indication of an access to said address space; andupdating a directory to indicate the access to the address space.13.
A computing device comprising:at least one monitoring circuit to monitor memory accesses to an address space and take action upon a violation of said address space, wherein said action is one of: generating a notification to a node that requested said monitoring; generating the notification in a specific context of the home node; and generating the notification in the node that has ownership of the address space;at least one protection table to store an identifier of the address space; andat least one hardware core to execute an instruction to enable the monitoring circuit, wherein the instruction includes a base address, a memory size granularity, a tracking granularity, a mode, and an indication of an action to be taken.14. The computing device of claim 13, wherein the memory size granularity is one of cache lines, pages, large pages, or huge pages.15. The computing device of claim 13, wherein the tracking granularity depends on the number of node groups.16. The computing device of claim 13, wherein the mode is one of read and write.17. The computing device of claim 13, further comprising:caching agent circuitry to process memory requests from at least one of the at least one hardware core; andhome agent circuitry to process memory requests from the caching agent and to serve as home for a part of the memory space of the device.18. The computing device of claim 13, further comprising:a buffer to store remote store requests that are part of a transaction initiated by execution of the instruction. |
Systems, methods and devices for range protectionTechnical fieldThe field of the invention relates generally to computer processor architecture, and, more specifically, to instructions which, when executed, cause a particular result.BackgroundExtensions to the Instruction Set Architecture (ISA) provide an interface for software to work with Transactional Memory (TM) support. The basic goal is to speed up multi-threaded workloads by providing hardware schemes that let these workloads perform a certain set of operations through lock elision. Commercial examples of TM are Hardware Lock Elision (HLE) and Restricted Transactional Memory (RTM).The HLE extension adds two new instruction prefixes, XACQUIRE and XRELEASE. The basic concept is that a thread executes XACQUIRE, an arbitrary stream of instructions, and XRELEASE. Logically, this section can be viewed as "lock(); instructions(); unlock()". Even though one thread may be executing this section, other threads see the section as free. If the processor detects a violation (meaning that another thread has entered the critical section), the transaction in progress is aborted and the thread restarts the instruction flow from XACQUIRE. After XRELEASE executes with no violation detected, all the instructions are committed.TSX is a good hardware-based solution for improving software systems with large numbers of threads accessing small but frequently shared addresses and code flows. However, this mechanism applies within a coherence domain (i.e., a multi-socket system connected via UPI). As data sets grow in size, transactional software such as databases needs to be able to operate on several shared memory systems through high-speed interconnects such as fabrics. There may be dozens of these systems connected via an interconnect, and these systems will span different coherence domains (a domain can be a single system or a group of systems).Description of the drawingsThe invention is illustrated by way of example, and not limitation, in the figures, in which like references indicate similar elements and in which:Figure 1 illustrates an embodiment of a system supporting remote monitoring;Figure 2 shows an embodiment of a tag directory;Figure 3 illustrates an exemplary embodiment of the use of range protection;Figure 4 illustrates two embodiments for handling remote monitoring violations;Figure 5 shows an exemplary embodiment of the use of range protection on the receiving side;Figure 6 shows an exemplary embodiment of the use of range unprotection on the receiving side;Figure 7 shows an example of the initialization and finalization flow using range protection and range unprotection;Figure 8 illustrates an embodiment of a method for handling conflicts by a core that did not request monitoring (a non-originating core);Figure 9 shows an example of conflict resolution.
In this example, the setup (range protection) has already occurred;Figure 10 shows an example of a situation that could potentially cause huge problems in terms of software reliability and debuggability;Figure 11 shows an example of a situation with a range violation;Figure 12 is a block diagram of a register architecture according to one embodiment of the invention;Figure 13A is a block diagram illustrating an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;Figure 13B is a block diagram illustrating an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;Figures 14A-14B show a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;Figure 15 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to embodiments of the invention;Figures 16-19 are block diagrams of exemplary computer architectures; andFigure 20 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.Detailed descriptionIn the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.Significant investments in improving memory, storage, fabric, and interconnect technologies have created the possibility for distributed shared memory systems (DSM) to meet the needs of enterprise and big data applications. Distributed shared memory systems can provide a large, single address space for a cluster of servers on a fabric where fabric latency is expected to approach memory latency, thereby providing a scalable, cost-effective alternative to "scale-up" node-controller systems. These node controllers will provide access to storage and memory technologies and protocols such as NVM or NBOD.However, one of the disadvantages of DSM is cache coherence for the application's memory references.
For enterprise or big data applications, several types of memory references, such as the per-process stacks and temporary storage running on the system, do not need to be coherent. On the other hand, there are often sections of code where the application needs to ensure coherence (for example, critical sections for transaction processing). To enforce coherence, new software solutions based on hardware support will be provided by next-generation data centers. Thus, a software stack that uses certain hardware capabilities will give applications a way to enforce coherence or consistency among the different processes and threads running in the data center.Enterprise applications tend to be highly complex. As a result, these applications rely on many different components coded using millions of lines of code. These applications are usually multi-threaded. In many cases, these applications run with thousands of simultaneous threads and, with DSM, all of these threads will potentially share the same address space across dozens of servers. In this environment, the probability of a software bug (in an application or library) related to a thread accessing the wrong memory region will be much higher relative to current architectures. Therefore, exposing mechanisms to protect against and detect such memory accesses will be fundamental to software adoption and to the success of future DSM architectures. Without hardware support, arbitrating among thousands of threads across dozens of nodes, or detecting memory corruption in a DSM, would be a tedious or impossible task.Embodiments for preventing memory corruption between different threads using DSM are detailed herein. The thread about to commit a violation can be notified via a page fault when the violation is about to occur, and the violation can be prevented from occurring. In many instances, this helps prevent software errors such as stray pointers or buffer overflows from generating memory corruption in the DSM.This allows threads running in one node to protect, with a specified access pattern, a given memory region belonging to another node. Any access done by any other thread in the cluster should generate an error and propagate the information through the system.Figure 10 shows an example of a situation that could potentially cause huge problems in terms of software reliability and debuggability. A database has threads running in node 0 1001, node 1 1003, and node 2 1005. Node 1 1003 exposes memory to node 0 1001 and node 2 1005. Node 0 1001 and node 2 1005 each have two threads, 1009, 1011 and 1013, 1015, that access the remote memory.The database software stack assigns access to [a, b] to thread 0 1009, access to [b+1, c] to thread 1 1011, and access to [e, f] to thread 2 1013 and thread 3 1015. Due to a software implementation error (e.g., a corrupted pointer or a buffer overflow), thread 2 1013 accidentally generates a memory reference that ends up in the address space intended (in theory) to be dedicated to thread 0 1009. This can cause memory corruption.With the embodiments detailed herein, the hardware will allow the database to protect each of the different memory regions that threads 0, 1, 2, and 3 are accessing.Figure 11 shows an example of a situation with a range violation. In this example, a put made by thread 2 in thread 1's address space would represent a violation.
In this example, a page fault would be generated.Detailed herein are interfaces (instructions) that allow a software thread to specify that a given set of instructions is bound to a specific type of access to a certain memory range. Any access to this address space performed in the specified mode by any other thread (inside or outside the coherence domain) will generate a page fault on the requesting side and cause other specified actions based on the previous registration.Detailed herein are embodiments of hardware for implementing remote address access monitoring, and methods of its use. Figure 1 illustrates an embodiment of a system supporting remote monitoring. A typical socket 101 includes multiple processor cores 105, on-die interconnect hardware 113, and a fabric interface 111. Remote monitoring can be performed within a node (via the coherent on-die interconnect 113) or, on a socket-by-socket basis, between nodes using fabric switches and fabric interfaces 111. As such, depending on the address space that a monitoring request targets, the request may go to the local memory of the same node, may go through the on-die interconnect 113 to be routed to other processors within the same coherence domain, or may go through the Host Fabric Interface (HFI) 111 to processors outside the coherence domain. A system may consist of one or more coherence domains all connected by a fabric interconnect. For example, a high-performance computing system or data center consists of N clusters or servers that can communicate with each other using the fabric. By using the fabric, each coherence domain can expose some address regions to the other coherence domains. However, accesses between different coherence domains are not coherent. In most instances, the fabric allows address ranges to be mapped between different coherence domains.Nodes also typically have caching agents and/or home agents 115. A caching agent is a coherency agent within a node that processes memory requests from the cores within the same node. A home agent (HA) is the entity within a node cluster responsible for processing memory requests from the caching agents, and it acts as a home for part of the memory address space (one die can have multiple homes, with a distributed address space mapping). In this illustration, there is one home agent 115 per socket; however, in some embodiments there is one home agent per node. Further, in some embodiments, the functionality of the home agent is included in the caching agent and referred to as a caching home agent (CHA), as shown at 109. Throughout this description, CHA is generally used for ease of description.A caching agent (such as CHA 109) is an entity that can source transactions into coherent memory and can retain copies in its own cache structure. A caching agent is defined by the messages it can sink and source according to the behaviors defined in the cache coherence protocol. A caching agent can also provide copies of coherent memory contents to other caching agents. A home agent (such as CHA 109 or home agent 115) is an entity that services coherent transactions, including handshaking with caching agents as necessary. A home agent oversees a portion of coherent memory. A home agent is responsible for managing the conflicts that may arise among the different caching agents.
A home agent provides the appropriate data and ownership responses as required by a given transaction's flow.Further, the home agent includes a distributed directory with the following states for memory addresses: clean (this is the only copy, for example, a line that was just written back), any (any remote socket within the node may have a copy), and invalid (the local socket's cache has a copy). An additional state (remote) indicates that a remote node has requested a copy, and it may be updated when a request for the line originates from the fabric.One logical place to add the monitoring scheme is the home agent within a node, and in some embodiments that is the case. However, when distributed schemes map the address space in the HA (node controller, hashing schemes, hemisphere, quadrant schemes, etc.), this can add too much design, area, and validation complexity. As such, in some embodiments this monitoring information is maintained as a protection table (PT) 103 in: 1) the agent that tunnels any memory transaction from another node to the home node (fabric interface 111); 2) the cores 105 within the node; and 3) the unique agents that can access local memory without going through either agent (on-die interconnect 113). This table is used by monitoring circuitry (not shown) that tracks memory/cache accesses, compares those accesses against the table, and alerts the originating core of any access that was requested to be monitored.In some embodiments, a distributed memory monitoring scheme allows cores to register, at the home node, address ranges of interest to be monitored. The monitoring scheme allows discovery of when a given line accessed by other caching agents in the system falls within a specified address range; accordingly, the monitoring scheme updates the sharers' valid bits for the given range. The core requesting tracking of an address range uses a tag directory structure 107 that represents the sockets in the cluster that have accessed a specific address range, and that is used by the core to track/monitor the address range.This directory is an imperfect tag directory in two dimensions. First, because the entire system can have a very large address space, different addresses can alias to the same tag entry (explained below). Second, each bit of the remote sharer information (e.g., a bitmask or a Bloom filter) corresponds to a group of caching agents in the system. Filtering hardware associated with the per-core tag directory performs Bloom or other filtering to test for inclusion in the group.Figure 2 shows an example of a tag directory. Entries in the tag directory 201 include the following fields: tracking size granularity 203, address hash 205, tracking granularity 207, and remote sharer information 209.The tracking size 203 and tracking granularity 207 may be provided through the monitoring call as detailed above. In this way, page-level or huge-page-level tracking can be used instead of cache-line tracking to reduce the number of lines in the directory.As a simplified example, assume a hypothetical cluster consisting of 8 nodes with 2 sockets each, and consider that each node has only 4 MB of memory (65K lines of 64 B each). Then the lookup directory has 65K entries, each entry corresponding to a line in the node. In this case, a 16-bit bitmask accurately tracks which sockets have requested memory from this node. However, in practice, systems have much larger memories, and the space requirements of the lookup directory can quickly become impractical.
For this reason, the directory is imperfect.Bloom filtering over node groups or subsets of nodes 209 is performed, instead of exact bitmasking, to reduce the space complexity of the directory.To provide scalability, in some embodiments the cache line address 205 is hashed onto a line of the directory using a hash function H(), noting that the number of directory lines is smaller than the number of cache lines. A good choice of H() produces fewer collisions; for example, using the low-order bits of the cache line address ensures a good distribution for the hash function. Note that a collision does not imply any loss of correctness; it merely indicates a possible false positive: because both cache lines map to the same line of the directory, the "remote nodes" of both cache lines end up being monitored jointly.The probability of false positives becomes smaller when a good hash function is chosen and the distributed directory bit is used (the tag directory only needs to be queried if the cache line's distributed directory bit indicates "remote"). At the same time, the number of nodes that need to be monitored is significantly reduced. As mentioned earlier, further trade-offs may be achieved by changing the granularity of the hashing and by using Bloom filter tracking instead of a bitmask, based on hints specified by the application.Each agent and core can hold a fixed number of protection entries in its protection table. Each protection entry contains the address range, the original requestor of the protection, and all the associated information (attributes and actions). If no free entry exists, the operation fails. The operation will also fail if the requested address range overlaps with an existing monitoring entry. If a failure occurs, a failure response is sent back to the originating fabric and communicated to the software stack. Ultimately, on failure, the software is notified and must take the appropriate action. A different way of propagating the failure to the software stack could be to issue a callback from the core to the software stack.To allow a given core to protect a given address range, in some embodiments instructions and messages for protection initialization (PROTECT_RANGE) and protection release (UNPROTECT_RANGE) are supported by the processor cores. These instructions provide a new interface that allows a software thread to specify that a given set of instructions is bound to a specific type of access to a certain memory range. Any access to this address space performed in the specified mode by any other thread (inside or outside the coherence domain) will be automatically notified to the software stack. The software stack is then responsible for taking the specific action (for example, restarting a copy of the monitored object).Examples of the PROTECT_RANGE and UNPROTECT_RANGE instructions are:PROTECT_RANGE base_address, granularity, mode, size, actionUNPROTECT_RANGEThe semantics of PROTECT_RANGE are as follows: the thread provides the base address, the granularity of the address space to be monitored, the monitoring mode, and the size. The granularity may be, for example, cache line, memory line, KB, MB, or GB (e.g., encoded as 0, 1, 2, 3, 4). The size specifies the extent to be monitored as a multiple of the granularity.
The mode specifies what type of violation is monitored: read (R) or write (W) (for example, W mode implies that the thread will be notified if a write operation accesses the address region). The action specifies what the home node that controls access to this range does upon a violation: (a) generate a notification to the requesting thread/node; (b) generate the notification in a specific context in the home node (e.g., one entity registers the protection monitor while another entity processes the violation); (c) generate a notification in the thread/node that has ownership of the violated address range; or (d) any combination of a, b, and c.To be able to propagate a page fault to one of the threads of the node that owns the protected range, a third instruction is executed in all the nodes of the data center. The specified thread is responsible for taking one or more specific actions when a page fault caused by a protection violation occurs in the local node. These different ways of propagating a protection violation give the software stack the flexibility to take distributed, and potentially independent, decisions when a protection violation is detected.The following example shows one way of binding the instructions within a transaction to the address space corresponding to [300000, 300000+4 MB] in write mode, specifying that only the violating requestor should receive the page fault: PROTECT_RANGE 300000, 2, 4, W, Requestor.Executing PROTECT_RANGE causes a PROTECT_RANGE message to be sent from the initiating (originating) core to its local caching agent for propagation to remote cores, etc., in order to set up the monitoring (e.g., protection table(s)). In some embodiments, a protection table of the executing core is also set (e.g., when the information of the protection table is replicated across the cores, agents, etc. of a node).Executing UNPROTECT_RANGE stops the monitor(s) and removes the associated protection table entries.Figure 3 illustrates an exemplary embodiment of the use of PROTECT_RANGE. At 301, a first core (the originating core) in a first node executes a PROTECT_RANGE instruction. For example, in Figure 1, core 0 105 executes a PROTECT_RANGE instruction.This execution causes a monitoring request (PROTECT_RANGE message) to be sent at 303 from the first core to its corresponding caching agent. The request includes the information from the instruction (base address, granularity, size, and mode). The caching agent that manages the base address sets up the requested monitor. Depending on the implementation, this caching agent may be separate from the home agent. For example, in Figure 1, core 105 sends the request to CHA 109 (a combined caching and home agent). In other words, the core alerts the caching agent, with the provided attributes and actions, to the address space (AS) [base_address to base_address + granularity*size] that the core wants monitored.At 305, in some embodiments, the caching agent identifies the home agent of the local domain responsible for managing the request; for example, the home agent responsible for the base address.
Note that the identified home agent may be combined with the caching agent in the same entity (CHA), as detailed above.At 307, the identified home agent identifies which node in the system is the home of the address space (which may be in the local coherence domain) that the core (thread) wants monitored.Once the home of the address region is identified, at 309 a request to set up a monitor in that home node is sent to the agent (fabric 111 or on-die interconnect 113 in the illustration of Figure 1). In other words, at 309, a proxy connection protection message (PROTECT_RANGE message) is sent from the identified home agent to the remote node that is the home of the address space. Note that the nodes may belong to different coherence domains and use the fabric, or they may be within the same coherence domain, in which case the agent is the on-die interconnect.At 311, a success or failure response regarding the registration of the protection monitor is received by the originating core from the proxy connection. Examples of possible causes of failure include, but are not limited to, overlapping address spaces, no free monitoring space, and hardware failures. If the monitor(s) are successful, the core's tag directory is updated. Further, in most embodiments, after the configuration of the monitor(s) is acknowledged, the protection tables are updated across all the agents in the socket.After the registration, in some embodiments, a notification of a violation occurring while transactions are performed on the monitored address space is received at 313; for example, the remote monitoring catches a write to a monitored address. This may be received by the core or by an agent. Figure 4 illustrates two embodiments for handling remote monitoring violations. At 401, the core receives the violation notification. At 403, this causes the core to generate a user interrupt.Alternatively, at 405, the violation notification is received by the core. At 409, this causes the core to notify the software stack. In either case, the software stack is responsible for taking the appropriate action to resolve the fault. In the case of action (a), a request violating the protected region makes forward progress in order to deallocate the structures in the CHA; however, the status returned to the requesting core (rather than any MESI or MESIF status) is a violation notification. The requestor core then generates a page fault, similar to the way current processors handle page faults with the protection key mechanism. If action (a) is not configured, the requesting node simply receives the data. The core propagates the violation to user space.In the successful case, in which all the different operations are performed without a violation being reported to the requesting core, an UNPROTECT_RANGE (deregistration) message is sent from the first core to the monitoring agents at 313. For example, the core executes the UNPROTECT_RANGE instruction to release the monitor and, once the release instruction is executed, the core notifies the remote agent (the HFI of socket B in the example) of the release. The agent propagates the release notification to the homes and cores of this monitor.Figure 5 shows an exemplary embodiment of the use of PROTECT_RANGE on the receiving side. At 501, an agent of the receiving node receives a request to initiate protection monitoring. In other words, a proxy connection protection message (PROTECT_RANGE message) is received at the remote node that is the home of the address space.
Note that the nodes may belong to different coherence domains and use the fabric, or they may be within the same coherence domain, in which case the agent is the on-die interconnect.At 503, the request is sent to the cores and agents.At 505, acknowledgments regarding the request, for example whether the protection monitor was successfully set up, are received from the cores and agents by the receiving agent (e.g., the fabric interface). These acknowledgments typically include an identifier of the responder.At 507, these acknowledgments are processed by the agent into a single acknowledgment that is sent to the originating core. The acknowledgment to the originating core includes an identifier of where the monitoring is taking place.Figure 6 shows an exemplary embodiment of the use of UNPROTECT_RANGE on the receiving side. At 601, an agent of the receiving node receives a request to release a monitor. In other words, a proxy connection monitoring message (UNPROTECT_RANGE message) is received at the remote node that is the home of the address space. Note that the nodes may belong to different coherence domains and use the fabric, or they may be within the same coherence domain, in which case the agent is the on-die interconnect.At 603, the request is sent to the cores and agents.At 605, acknowledgments regarding the request, for example whether the monitoring was stopped, are received from the cores and agents by the receiving agent (e.g., the fabric interface). These acknowledgments typically include an identifier of the responder.At 607, these acknowledgments are processed by the agent into a single acknowledgment that is sent to the originating core. The acknowledgment to the originating core includes an identifier of where the monitoring has ceased.Figure 7 shows an example of the initialization and finalization flow using PROTECT_RANGE and UNPROTECT_RANGE. The originating core 701 decodes and executes a PROTECT_RANGE instruction. For example, a thread on core 701 executes PROTECT_RANGE. The instruction provides the base address, the granularity of the space to be monitored (e.g., memory line, KB, MB, or GB), the size, and the mode.Core 701 in node A sends a request to the local caching agent (CHA: CA+HA) that manages the base address, to set up the monitor. Core 701 notifies the CHA of the address space (AS) that the core wants monitored, along with the action(s) to be performed, where AS = [@base to base+granularity*size].The CHA 703 identifies the home (such as a socket) to which the given memory region is mapped. In some embodiments, the instruction aborts if the region is owned by several homes. CHA 703 identifies the home agent in the local coherence domain that is responsible for managing the requested address (Base_Address). The home agent (CHA 703) identifies which node (socket) in the system is the home of the address space (which may be in the local coherence domain) that the thread wants monitored.CHA 703 sends a protection message to the proxy connection (fabric 707) for delivery to the remote node that is the home of the AS. On the target side, the agent generates a multicast message targeting the agents of the socket, such as any on-die interconnect agents 717 in the node and any fabric interconnect agents 709 in the node, as well as all the cores 711 and 715 in the home socket.All targeted destinations respond with an acknowledgment message regarding the success or failure of the monitoring registration.
Typically, the responses are collapsed by the agent of the home node that received the monitoring request (fabric 709 in this example). On failure, the agent propagates the notification to the requestor and deregisters the monitor with the remaining peers within the home node.If any proxy agent or core identifies a transaction to the address space that violates the requested monitoring of the AS, the agent sends a violation message to core 701, thereby notifying the core of the violation. Core 701 propagates the violation to user space.Whether nothing went wrong or a violation was detected, once an UNPROTECT_RANGE instruction is decoded and executed, core 701 sends a deregistration message, alerting the agents that the core no longer needs the AS monitored. Once the release instruction is executed, the core notifies the remote agent (fabric 709) of the release. Agent 709 propagates the release notification to the homes and cores of this monitor. Note that core 701 knows the agent identifiers from the acknowledgment received during the registration process detailed earlier.As suggested above, a given region of space may at some point be detected as accidentally accessed/modified by other threads in the system. Three different scenarios can occur: 1) the address space is accessed by another agent in a way that violates the protection mode, implying that the transaction has been violated and the agent must take the appropriate action specified for the protected range; 2) no access is performed by any other thread in the system, implying that no faulty behavior occurred and the protection resources can be released; or 3) any other type of failure occurs, for example, the remote node fails or a timeout occurs. When fault (1) or (3) occurs, similarly to a registration failure, the agent that caught the violation takes one of the specific actions specified earlier.Figure 8 illustrates an embodiment of a method for handling a conflict by a core that did not request the monitoring (a non-originating core). At 801, the non-originating core writes or reads (accesses) a memory or cache address.At 803, the access is determined to be to a protected address and to be of the access type being monitored; for example, a write to an address in an address space monitored for writes. Of course, an access to an unmonitored address, or an access to a monitored address that is not of the monitored type, does not cause a conflict.At 805, the non-originating core sends a violation notification to the core that requested the monitoring. Depending on how the cores are arranged, this notification may go through the fabric or through the interconnect interface.Figure 9 shows an example of conflict resolution. In this example, the setup (PROTECT_RANGE) has already occurred. The protection specifies that only the requestor that violates the protection must incur a page fault. Next, a set of read operations is issued by the same thread (for example, this could be a database transaction against part of an index server). Then, node C performs an errant write operation into the region protected by node A. The proxy HFI in node B identifies that the given address is protected by node A. Following the action specified in the protection entry, the HFI returns a completion to the requestor in node C indicating that a violation has occurred. The CHA propagates the violation to the requesting core. The context whose access caused the violation generates a page fault.
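As one way to picture the conflict check of Figure 8, the sketch below models a monitoring agent consulting its protection table on each access; the ProtectionEntry fields and the notify callback are hypothetical illustrations, not the actual hardware structures:

from dataclasses import dataclass

@dataclass
class ProtectionEntry:
    base: int          # base_address from the PROTECT_RANGE request
    length: int        # granularity * size, in bytes
    mode: str          # "R" or "W": the access type being monitored
    originator: int    # identifier of the core that requested monitoring

def check_access(entries, address, access_type, notify):
    """Return True if the access may proceed normally; on a monitored
    hit, alert the originating core (as in steps 803 and 805)."""
    for entry in entries:
        in_range = entry.base <= address < entry.base + entry.length
        if in_range and access_type == entry.mode:
            notify(entry.originator, address)   # violation notification
            return False                        # report violation status
    return True                                 # unmonitored access

In hardware, the equivalent comparison would be performed by the monitoring circuitry against the fixed set of protection table entries, with the notification routed over the fabric or on-die interconnect as described above.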
Note that, because the action of the protected range is notification, node A continues to perform normal operations despite the violation having occurred.The figures that follow detail exemplary architectures and systems for implementing the embodiments described above. In some embodiments, one or more of the hardware components and/or instructions described above are emulated, or implemented as software modules, as detailed below.Exemplary register architectureFigure 12 is a block diagram of a register architecture 1200 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 1210 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-16. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15.Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.Write mask registers 1215 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 1215 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.General purpose registers 1225 - in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.Scalar floating point stack register file (x87 stack) 1245, on which is aliased the MMX packed integer flat register file 1250 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating point operations on 32/64/80-bit floating point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.Exemplary core architectures, processors, and computer architecturesProcessor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
These different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as a special purpose core); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.Exemplary core architecturesIn-order and out-of-order core block diagramFigure 13A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 13B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 13A-13B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.In Figure 13A, a processor pipeline 1300 includes a fetch stage 1302, a length decode stage 1304, a decode stage 1306, an allocation stage 1308, a renaming stage 1310, a scheduling (also known as dispatch or issue) stage 1312, a register read/memory read stage 1314, an execute stage 1316, a write back/memory write stage 1318, an exception handling stage 1322, and a commit stage 1324.Figure 13B shows processor core 1390 including a front end unit 1330 coupled to an execution engine unit 1350, with both the front end unit 1330 and the execution engine unit 1350 coupled to a memory unit 1370. The core 1390 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1390 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.The front end unit 1330 includes a branch prediction unit 1332 coupled to an instruction cache unit 1334, which is coupled to an instruction translation lookaside buffer (TLB) 1336, which is coupled to an instruction fetch unit 1338, which is coupled to a decode unit 1340. The decode unit 1340 (or decoder) may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals. The decode unit 1340 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1390 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1340 or otherwise within the front end unit 1330).
The decode unit 1340 is coupled to a rename/allocator unit 1352 in the execution engine unit 1350.The execution engine unit 1350 includes the rename/allocator unit 1352 coupled to a retirement unit 1354 and a set of one or more scheduler unit(s) 1356. The scheduler unit(s) 1356 represents any number of different schedulers, including reservations stations, central instruction window, etc. The scheduler unit(s) 1356 is coupled to the physical register file(s) unit(s) 1358. Each of the physical register file(s) units 1358 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1358 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1358 is overlapped by the retirement unit 1354 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1354 and the physical register file(s) unit(s) 1358 are coupled to the execution cluster(s) 1360. The execution cluster(s) 1360 includes a set of one or more execution units 1362 and a set of one or more memory access units 1364. The execution units 1362 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1356, physical register file(s) unit(s) 1358, and execution cluster(s) 1360 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1364). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.The set of memory access units 1364 is coupled to the memory unit 1370, which includes a data TLB unit 1372 coupled to a data cache unit 1374 coupled to a level 2 (L2) cache unit 1376. In one exemplary embodiment, the memory access units 1364 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1372 in the memory unit 1370. The instruction cache unit 1334 is further coupled to the level 2 (L2) cache unit 1376 in the memory unit 1370.
As an example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1300 as follows: 1) the instruction fetch unit 1338 performs the fetch and length decode stages 1302 and 1304; 2) the decode unit 1340 performs the decode stage 1306; 3) the rename/allocator unit 1352 performs the allocation stage 1308 and renaming stage 1310; 4) the scheduler unit(s) 1356 performs the scheduling stage 1312; 5) the physical register file unit(s) 1358 and the memory unit 1370 perform the register read/memory read stage 1314, and the execution cluster 1360 performs the execute stage 1316; 6) the memory unit 1370 and the physical register file unit(s) 1358 perform the write back/memory write stage 1318; 7) various units may be involved in the exception handling stage 1322; and 8) the retirement unit 1354 and the physical register file unit(s) 1358 perform the commit stage 1324.

The core 1390 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, Calif.; and the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings, Inc. of Sunnyvale, Calif.), including the instruction(s) described herein. In one embodiment, the core 1390 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1334/1374 and a shared L2 cache unit 1376, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.
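Before turning to a more specific core, the stage flow of pipeline 1300 can be summarized with a small software model. The C sketch below is purely illustrative: the stage names mirror Figure 13A and the mapping paragraph above, but the single-instruction, one-stage-per-cycle walk is an expository assumption, not a description of the hardware (a real out-of-order core overlaps many instructions and may complete the middle stages out of program order while commit remains in order).

```c
#include <stdio.h>

/* Illustrative model of the stages of pipeline 1300 (Figure 13A). */
enum stage {
    FETCH,          /* 1302: instruction fetch unit 1338 */
    LENGTH_DECODE,  /* 1304: instruction fetch unit 1338 */
    DECODE,         /* 1306: decode unit 1340 */
    ALLOCATE,       /* 1308: rename/allocator unit 1352 */
    RENAME,         /* 1310: rename/allocator unit 1352 */
    SCHEDULE,       /* 1312: scheduler unit(s) 1356 (dispatch or issue) */
    REG_READ,       /* 1314: register files 1358 and memory unit 1370 */
    EXECUTE,        /* 1316: execution cluster 1360 */
    WRITE_BACK,     /* 1318: memory unit 1370 and register files 1358 */
    COMMIT,         /* 1324: retirement unit 1354 (exceptions: 1322) */
    NUM_STAGES
};

static const char *stage_name[NUM_STAGES] = {
    "fetch", "length decode", "decode", "allocate", "rename",
    "schedule", "register read/memory read", "execute",
    "write back/memory write", "commit"
};

int main(void) {
    /* Walk one instruction through every stage in program order. */
    for (enum stage s = FETCH; s < NUM_STAGES; s++)
        printf("cycle %d: %s\n", (int)s, stage_name[s]);
    return 0;
}
```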
Specific exemplary in-order core architecture

Figures 14A-14B show a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

Figure 14A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1402 and its local subset of the level 2 (L2) cache 1404, in accordance with an embodiment of the invention. In one embodiment, an instruction decoder 1400 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1406 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1408 and a vector unit 1410 use separate register sets (respectively, scalar registers 1412 and vector registers 1414) and data transferred between them is written to memory and then read back in from the level 1 (L1) cache 1406, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1404 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 1404 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 1404 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1404 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bidirectional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction.

Figure 14B is an expanded view of part of the processor core in Figure 14A, in accordance with an embodiment of the present invention. Figure 14B includes an L1 data cache 1406A, part of the L1 cache 1406, as well as more detail regarding the vector unit 1410 and the vector registers 1414. Specifically, the vector unit 1410 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1428) that executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports mixing of the register inputs with a mixing unit 1420, numeric conversion with numeric conversion units 1422A-B, and replication of the memory input with a replication unit 1424. Write mask registers 1426 allow predicating resulting vector writes.
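The behavior of the per-core local L2 subsets just described can be sketched in a few lines of C. The sketch is an assumption-laden model, not the hardware: the core count, subset size, and direct-mapped indexing are invented for illustration, and the ring network's coherency messages are reduced to a simple loop over the other subsets.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_CORES    4      /* assumed core count, illustration only */
#define SUBSET_LINES 256    /* assumed lines per local L2 subset */
#define LINE_SHIFT   6      /* assumed 64-byte cache lines */

/* One direct-mapped local L2 subset per core (like subset 1404). */
static uint64_t tag[NUM_CORES][SUBSET_LINES];
static bool     valid[NUM_CORES][SUBSET_LINES];

/* A core's reads fill its own local subset, so later accesses by that
 * core proceed in parallel with other cores using their own subsets. */
void l2_read(unsigned core, uint64_t paddr) {
    uint64_t line = paddr >> LINE_SHIFT;
    unsigned idx  = (unsigned)(line % SUBSET_LINES);
    tag[core][idx] = line;
    valid[core][idx] = true;
}

/* A core's writes fill its own subset and, per the description above,
 * flush the line from other subsets; in hardware the bidirectional
 * ring network would carry these coherency messages. */
void l2_write(unsigned core, uint64_t paddr) {
    uint64_t line = paddr >> LINE_SHIFT;
    unsigned idx  = (unsigned)(line % SUBSET_LINES);
    for (unsigned c = 0; c < NUM_CORES; c++)
        if (c != core && valid[c][idx] && tag[c][idx] == line)
            valid[c][idx] = false;            /* flush remote copies */
    tag[core][idx] = line;
    valid[core][idx] = true;
}
```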
Figure 15 is a block diagram of a processor 1500 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, in accordance with embodiments of the invention. The solid boxes in Figure 15 illustrate a processor 1500 with a single core 1502A, a system agent 1510, and a set of one or more bus controller units 1516, while the optional addition of the dashed boxes illustrates an alternative processor 1500 with multiple cores 1502A-N, a set of one or more integrated memory controller unit(s) 1514 in the system agent unit 1510, and special purpose logic 1508.

Accordingly, different implementations of the processor 1500 may include: 1) a CPU with the special purpose logic 1508 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1502A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1502A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) work; and 3) a coprocessor with the cores 1502A-N being a large number of general purpose in-order cores.

Thus, the processor 1500 may be a general-purpose processor, a coprocessor, or a special-purpose processor such as, for example, a network or communications processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1500 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1506, and external memory (not shown) coupled to the set of integrated memory controller units 1514. The set of shared cache units 1506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1512 interconnects the integrated graphics logic 1508, the set of shared cache units 1506, and the system agent unit 1510/integrated memory controller unit(s) 1514, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1506 and the cores 1502A-N.

In some embodiments, one or more of the cores 1502A-N are capable of multithreading. The system agent 1510 includes those components coordinating and operating the cores 1502A-N. The system agent unit 1510 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or may include the logic and components needed for regulating the power state of the cores 1502A-N and the integrated graphics logic 1508. The display unit is for driving one or more externally connected displays.

The cores 1502A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1502A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Example computer architectures

Figures 16-19 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to Figure 16, shown is a block diagram of a system 1600 in accordance with one embodiment of the present invention. The system 1600 may include one or more processors 1610, 1615, which are coupled to a controller hub 1620.
In one embodiment, the controller hub 1620 includes a graphics memory controller hub (GMCH) 1690 and an Input/Output Hub (IOH) 1650 (which may be on separate chips); the GMCH 1690 includes memory and graphics controllers to which are coupled a memory 1640 and a coprocessor 1645; and the IOH 1650 couples input/output (I/O) devices 1660 to the GMCH 1690. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1640 and the coprocessor 1645 are coupled directly to the processor 1610, and the controller hub 1620 is in a single chip with the IOH 1650.

The optional nature of the additional processors 1615 is denoted in Figure 16 with dashed lines. Each processor 1610, 1615 may include one or more of the processing cores described herein and may be some version of the processor 1500.

The memory 1640 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1620 communicates with the processor(s) 1610, 1615 via a multi-drop bus such as a front-side bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1695.

In one embodiment, the coprocessor 1645 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communications processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1620 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1610, 1615 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, and power consumption characteristics, and the like.

In one embodiment, the processor 1610 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1610 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1645. Accordingly, the processor 1610 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1645. The coprocessor(s) 1645 accepts and executes the received coprocessor instructions.

Referring now to Figure 17, shown is a block diagram of a first more specific exemplary system 1700 in accordance with an embodiment of the present invention. As shown in Figure 17, the multiprocessor system 1700 is a point-to-point interconnect system and includes a first processor 1770 and a second processor 1780 coupled via a point-to-point interconnect 1750. Each of the processors 1770 and 1780 may be some version of the processor 1500. In one embodiment of the invention, the processors 1770 and 1780 are respectively the processors 1610 and 1615, while the coprocessor 1738 is the coprocessor 1645. In another embodiment, the processors 1770 and 1780 are respectively the processor 1610 and the coprocessor 1645.

The processors 1770 and 1780 are shown including integrated memory controller (IMC) units 1772 and 1782, respectively. The processor 1770 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1776 and 1778; similarly, the second processor 1780 includes P-P interfaces 1786 and 1788. The processors 1770, 1780 may exchange information via a point-to-point (P-P) interface 1750 using P-P interface circuits 1778, 1788.
As shown in Figure 17, the IMCs 1772 and 1782 couple the processors to respective memories, namely a memory 1732 and a memory 1734, which may be portions of main memory locally attached to the respective processors.

The processors 1770, 1780 may each exchange information with a chipset 1790 via individual P-P interfaces 1752, 1754 using point-to-point interface circuits 1776, 1794, 1786, 1798. The chipset 1790 may optionally exchange information with the coprocessor 1738 via a high-performance interface 1739. In one embodiment, the coprocessor 1738 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communications processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 1790 may be coupled to a first bus 1716 via an interface 1796. In one embodiment, the first bus 1716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 17, various I/O devices 1714 may be coupled to the first bus 1716, along with a bus bridge 1718 that couples the first bus 1716 to a second bus 1720. In one embodiment, one or more additional processors 1715, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 1716. In one embodiment, the second bus 1720 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1720 including, for example, a keyboard and/or mouse 1722, communication devices 1727, and a storage unit 1728 such as a disk drive or other mass storage device which may include instructions/code and data 1730, in one embodiment. Further, an audio I/O 1724 may be coupled to the second bus 1720. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 17, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 18, shown is a block diagram of a second more specific exemplary system 1800 in accordance with an embodiment of the present invention. Like elements in Figures 17 and 18 bear like reference numerals, and certain aspects of Figure 17 have been omitted from Figure 18 in order to avoid obscuring other aspects of Figure 18.

Figure 18 illustrates that the processors 1770, 1780 may include integrated memory and I/O control logic ("CL") 1772 and 1782, respectively. Thus, the CL 1772, 1782 include integrated memory controller units and include I/O control logic. Figure 18 illustrates that not only are the memories 1732, 1734 coupled to the CL 1772, 1782, but that I/O devices 1814 are also coupled to the control logic 1772, 1782. Legacy I/O devices 1815 are coupled to the chipset 1790.

Referring now to Figure 19, shown is a block diagram of a SoC 1900 in accordance with an embodiment of the present invention. Like elements in Figure 15 bear like reference numerals. Also, dashed boxes are optional features on more advanced SoCs.
In Figure 19, the interconnect unit(s) 1902 is coupled to: an application processor 1910 that includes a set of one or more cores 1502A-N and shared cache unit(s) 1506; a system agent unit 1510; bus controller unit(s) 1516; integrated memory controller unit(s) 1514; a set of one or more coprocessors 1920, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1930; a direct memory access (DMA) unit 1932; and a display unit 1940 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1920 includes a special-purpose processor, such as, for example, a network or communications processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1730 illustrated in Figure 17, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 20 is a block diagram illustrating the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set, in accordance with an embodiment of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 20 shows that a program in a high-level language 2002 may be compiled using an x86 compiler 2004 to generate x86 binary code 2006 that may be natively executed by a processor 2016 with at least one x86 instruction set core. The processor with at least one x86 instruction set core 2016 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core, or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
The x86 compiler 2004 represents a compiler that is operable to generate x86 binary code 2006 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 2016 with at least one x86 instruction set core. Similarly, Figure 20 shows that the program in the high-level language 2002 may be compiled using an alternative instruction set compiler 2008 to generate alternative instruction set binary code 2010 that may be natively executed by a processor 2014 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, Calif. and/or that execute the ARM instruction set of ARM Holdings, Inc. of Sunnyvale, Calif.). An instruction converter 2012 is used to convert the x86 binary code 2006 into code that may be natively executed by the processor 2014 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 2010, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 2012 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 2006.
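A software instruction converter of the kind Figure 20 describes is, at its core, a loop that maps source encodings to target encodings. The C sketch below is a toy, assuming a hypothetical two-instruction source ISA and an equally hypothetical target ISA; it is not x86, MIPS, or ARM, and it makes no claim about how the converter 2012 is actually built.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical one-byte source opcodes. */
enum { SRC_ADD = 0x01, SRC_SUB = 0x02 };
/* Hypothetical target opcodes for the same operations. */
enum { TGT_ADD = 0xA0, TGT_SUB = 0xA1 };

/* Statically translate a source binary into a target binary.
 * Returns the number of target bytes emitted, or -1 on an opcode
 * this converter does not recognize. */
ptrdiff_t convert(const unsigned char *src, size_t n, unsigned char *dst) {
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        switch (src[i]) {
        case SRC_ADD: dst[out++] = TGT_ADD; break;
        case SRC_SUB: dst[out++] = TGT_SUB; break;
        default: return -1;  /* a real system might fall back to emulation */
        }
    }
    return (ptrdiff_t)out;
}

int main(void) {
    unsigned char src[] = { SRC_ADD, SRC_SUB, SRC_ADD };
    unsigned char dst[sizeof src];
    ptrdiff_t n = convert(src, sizeof src, dst);
    for (ptrdiff_t i = 0; i < n; i++)
        printf("%02X ", (unsigned)dst[i]);
    printf("\n");
    return 0;
}
```

A dynamic binary translator would perform the same mapping at run time, caching translated blocks, while an emulator would instead interpret each source instruction directly.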
A method and apparatus for enhanced packet traffic arbitration comprising conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and commencing a slot timing for use by the user device. |
CLAIMS: 1. A method for enhanced packet traffic arbitration comprising: conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and commencing a slot timing for use by the user device. 2. The method of claim 1 wherein the at least one scrambling code is for one or more of the following: device identification, initial synchronization, data whitening, collision detection or collision avoidance. 3. The method of claim 1 further comprising avoiding collision by assigning device priority. 4. The method of claim 3 wherein the single line shared multi-drop data bus is pulled high at all times and driven low during transmission. 5. The method of claim 4 wherein the transmission starts with a sync byte and terminates with a stop byte. 6. The method of claim 5 wherein the single line shared multi-drop data bus is oversampled. 7. The method of claim 6 wherein the single line shared multi-drop data bus is an ePTA bus. 8. The method of claim 1 wherein the user device is a Bluetooth device. 9. The method of claim 8 wherein the Bluetooth device is activated by asserting a BT ACTIVE signal HIGH. 10. The method of claim 1 wherein the user device is operationally compatible with one or more of the following: a WLAN system, a WiFi system, an IEEE 802.11 wireless system, a LTE system or a WiMax system. 11. A user device comprising a processor and a memory, the memory containing program code executable by the processor for performing the following: conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and commencing a slot timing for use by the user device. 12. The user device of claim 11 wherein the at least one scrambling code is for one or more of the following: device identification, initial synchronization, data whitening, collision detection or collision avoidance. 13. The user device of claim 11 wherein the memory further comprises program code for avoiding collision by assigning device priority. 14. The user device of claim 13 wherein the single line shared multi-drop data bus is pulled high at all times and driven low during transmission. 15. The user device of claim 14 wherein the transmission starts with a sync byte and terminates with a stop byte. 16. The user device of claim 15 wherein the single line shared multi-drop data bus is oversampled. 17. The user device of claim 16 wherein the single line shared multi-drop data bus is an ePTA bus. 18. The user device of claim 11 wherein the user device is a Bluetooth device. 19. The user device of claim 18 wherein the Bluetooth device is activated by asserting a BT ACTIVE signal HIGH. 20. The user device of claim 11 wherein the user device is operationally compatible with one or more of the following: a WLAN system, a WiFi system, an IEEE 802.11 wireless system, a LTE system or a WiMax system. 21. 
An apparatus for enhanced packet traffic arbitration comprising: means for conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and means for commencing a slot timing for use by the user device. 22. The apparatus of claim 21 wherein the at least one scrambling code is for one or more of the following: device identification, initial synchronization, data whitening, collision detection or collision avoidance. 23. The apparatus of claim 21 further comprising means for avoiding collision by assigning device priority. 24. The apparatus of claim 23 wherein the single line shared multi-drop data bus is pulled high at all times and driven low during transmission. 25. The apparatus of claim 24 wherein the transmission starts with a sync byte and terminates with a stop byte. 26. The apparatus of claim 25 wherein the single line shared multi-drop data bus is oversampled. 27. The apparatus of claim 26 wherein the single line shared multi-drop data bus is an ePTA bus. 28. The apparatus of claim 21 wherein the apparatus is operationally compatible with one or more of the following: a WLAN system, a WiFi system, an IEEE 802.11 wireless system, a LTE system or a WiMax system. 29. A computer-readable medium storing a computer program, wherein execution of the computer program is for: conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and commencing a slot timing for use by the user device. 30. The computer-readable medium of claim 29 wherein the at least one scrambling code is for one or more of the following: device identification, initial synchronization, data whitening, collision detection or collision avoidance. 31. The computer-readable medium of claim 29 wherein execution of the computer program is also for avoiding collision by assigning device priority. 32. The computer-readable medium of claim 31 wherein the single line shared multi-drop data bus is pulled high at all times and driven low during transmission. 33. The computer-readable medium of claim 32 wherein the transmission starts with a sync byte and terminates with a stop byte. 34. The computer-readable medium of claim 33 wherein the single line shared multi-drop data bus is oversampled. 35. The computer-readable medium of claim 34 wherein the single line shared multi-drop data bus is an ePTA bus. 36. The computer-readable medium of claim 29 wherein execution of the computer program is operationally compatible with one or more of the following: a WLAN system, a WiFi system, an IEEE 802.11 wireless system, a LTE system or a WiMax system.
METHOD AND APPARATUS FOR ENHANCED PACKET TRAFFIC ARBITRATION

CLAIM OF PRIORITY

The present Application for Patent claims priority to Provisional Application No. 61/238,585 entitled Method and Apparatus for Enhanced Packet Traffic Arbitration filed August 31, 2009, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.

FIELD

[0001] This disclosure relates generally to wireless communications. More particularly, the present disclosure relates to an enhanced packet traffic arbitration scheme among wireless communications systems such as, but not limited to, WiFi and Bluetooth.

BACKGROUND

[0002] In many communication systems, communications networks are used to exchange messages among several interacting nodes which are separated in space. There are many types of networks, which may be classified in different aspects. In one example, the geographic scope of the network could be over a wide area, a metropolitan area, a local area, or a personal area, and the corresponding networks are designated as wide area network (WAN), metropolitan area network (MAN), local area network (LAN), or personal area network (PAN). Networks may also differ in the switching/routing technique used to interconnect the various network nodes and devices (e.g. circuit switching, packet switching, etc.), in the type of physical media employed for waveform propagation (e.g. wired vs. wireless), or in the set of communication protocols used (e.g. Internet protocol suite, SONET (Synchronous Optical Networking), Ethernet, wireless LAN protocols, etc.).

SUMMARY

[0003] Disclosed is a method and apparatus for enhanced packet traffic arbitration. According to one aspect, a method for enhanced packet traffic arbitration comprising conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and commencing a slot timing for use by the user device.

[0004] According to another aspect, a user device comprising a processor and a memory, the memory containing program code executable by the processor for performing the following: conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and commencing a slot timing for use by the user device.

[0005] According to another aspect, an apparatus for enhanced packet traffic arbitration comprising means for conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and means for commencing a slot timing for use by the user device.
[0006] According to another aspect, a computer-readable medium storing a computer program, wherein execution of the computer program is for conveying one or more of the following: a priority status, an operational status or a frequency status relating to a user device, wherein the conveying uses at least one scrambling code with good autocorrelation and cross-correlation properties and shares a single line shared multi-drop data bus; and commencing a slot timing for use by the user device.

[0007] A potential advantage of the present disclosure includes improved wireless network response to packet traffic.

[0008] It is understood that other aspects will become readily apparent to those skilled in the art from the following detailed description, wherein various aspects are shown and described by way of illustration. The drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] Figure 1 is a block diagram illustrating an example of an access node/UE system.
[0010] Figure 2 illustrates an example of a wireless communications system that supports a plurality of user devices.
[0011] Figure 3 illustrates an example of an ePTA system.
[0012] Figure 4 illustrates an example of a physical interface for the ePTA system.
[0013] Figure 5 illustrates an example of a signal timeline for two user devices.
[0014] Figure 6 illustrates an example of a signal timeline for three user devices.
[0015] Figure 7 illustrates an example of usage of an ePTA protocol as a replacement for the PTA protocol for packet traffic arbitration (PTA).
[0016] Figure 8 illustrates an example flow diagram for enhanced packet traffic arbitration.
[0017] Figure 9 illustrates an example of a device comprising a processor in communication with a memory for executing the processes for enhanced packet traffic arbitration.
[0018] Figure 10 illustrates an example of a device suitable for enhanced packet traffic arbitration.

DETAILED DESCRIPTION

[0019] The detailed description set forth below in connection with the appended drawings is intended as a description of various aspects of the present disclosure and is not intended to represent the only aspects in which the present disclosure may be practiced. Each aspect described in this disclosure is provided merely as an example or illustration of the present disclosure, and should not necessarily be construed as preferred or advantageous over other aspects. The detailed description includes specific details for the purpose of providing a thorough understanding of the present disclosure. However, it will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the present disclosure. Acronyms and other descriptive terminology may be used merely for convenience and clarity and are not intended to limit the scope of the present disclosure.

[0020] While for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more aspects, occur in different orders and/or concurrently with other acts from that shown and described herein.
For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with one or more aspects.

[0021] The techniques described herein may be used for various wireless communication networks such as Code Division Multiple Access (CDMA) networks, Time Division Multiple Access (TDMA) networks, Frequency Division Multiple Access (FDMA) networks, Orthogonal FDMA (OFDMA) networks, Single-Carrier FDMA (SC-FDMA) networks, etc. The terms "networks" and "systems" are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband-CDMA (W-CDMA) and Low Chip Rate (LCR). cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as Evolved UTRA (E-UTRA), IEEE 802.11, IEEE 802.16, IEEE 802.20, Flash-OFDM®, etc. UTRA, E-UTRA, and GSM are part of Universal Mobile Telecommunication System (UMTS). Long Term Evolution (LTE) is an upcoming release of UMTS that uses E-UTRA. UTRA, E-UTRA, GSM, UMTS and LTE are described in documents from an organization named "3rd Generation Partnership Project" (3GPP). cdma2000 is described in documents from an organization named "3rd Generation Partnership Project 2" (3GPP2). These various radio technologies and standards are known in the art.

[0022] Figure 1 is a block diagram illustrating an example of an access node/UE system 100. One skilled in the art would understand that the example access node/UE system 100 illustrated in Figure 1 may be implemented in an FDMA environment, an OFDMA environment, a CDMA environment, a WCDMA environment, a TDMA environment, a SDMA environment or any other suitable wireless environment.

[0023] The access node/UE system 100 includes an access node 101 (e.g., base station) and a user equipment or UE 201 (e.g., wireless communication device). In the downlink leg, the access node 101 (e.g., base station) includes a transmit (TX) data processor A 110 that accepts, formats, codes, interleaves and modulates (or symbol maps) traffic data and provides modulation symbols (e.g., data symbols). The TX data processor A 110 is in communication with a symbol modulator A 120. The symbol modulator A 120 accepts and processes the data symbols and downlink pilot symbols and provides a stream of symbols. In one aspect, it is the symbol modulator A 120 that modulates (or symbol maps) traffic data and provides modulation symbols (e.g., data symbols). In one aspect, symbol modulator A 120 is in communication with processor A 180 which provides configuration information. Symbol modulator A 120 is in communication with a transmitter unit (TMTR) A 130. The symbol modulator A 120 multiplexes the data symbols and downlink pilot symbols and provides them to the transmitter unit A 130.

[0024] Each symbol to be transmitted may be a data symbol, a downlink pilot symbol or a signal value of zero. The downlink pilot symbols may be sent continuously in each symbol period. In one aspect, the downlink pilot symbols are frequency division multiplexed (FDM). In another aspect, the downlink pilot symbols are orthogonal frequency division multiplexed (OFDM). In yet another aspect, the downlink pilot symbols are code division multiplexed (CDM).
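The frequency-division multiplexing of pilot and data symbols performed by a symbol modulator such as symbol modulator A 120 can be sketched in a few lines. This C fragment is illustrative only: the subcarrier count and the every-fourth-subcarrier comb pilot pattern are assumptions made for exposition, not parameters taken from the disclosure.

```c
#include <complex.h>
#include <stddef.h>

#define NUM_SUBCARRIERS 64   /* assumed OFDM symbol width */
#define PILOT_SPACING    4   /* assumed comb-type pilot pattern */

/* Build one downlink symbol: every PILOT_SPACING-th subcarrier carries
 * a known pilot symbol, the remaining subcarriers carry data symbols.
 * 'data' must supply at least NUM_SUBCARRIERS - NUM_SUBCARRIERS /
 * PILOT_SPACING entries. */
void build_symbol(const double complex *data, double complex pilot,
                  double complex *out) {
    size_t d = 0;
    for (size_t k = 0; k < NUM_SUBCARRIERS; k++)
        out[k] = (k % PILOT_SPACING == 0) ? pilot : data[d++];
}
```

A receiver that knows the pilot value and its spacing can estimate the channel's frequency response at the pilot subcarriers and interpolate between them, which is the channel estimation role described for processor B 240 below.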
In one aspect, the transmitter unit A 130 receives and converts the stream of symbols into one or more analog signals and further conditions, for example, amplifies, filters and/or frequency upconverts the analog signals, to generate an analog downlink signal suitable for wireless transmission. The analog downlink signal is then transmitted through antenna 140.

[0025] In the downlink leg, the UE 201 includes antenna 210 for receiving the analog downlink signal and inputting the analog downlink signal to a receiver unit (RCVR) B 220. In one aspect, the receiver unit B 220 conditions, for example, filters, amplifies, and frequency downconverts the analog downlink signal to a first "conditioned" signal. The first "conditioned" signal is then sampled. The receiver unit B 220 is in communication with a symbol demodulator B 230. The symbol demodulator B 230 demodulates the first "conditioned" and "sampled" signal (e.g., data symbols) outputted from the receiver unit B 220. One skilled in the art would understand that an alternative is to implement the sampling process in the symbol demodulator B 230. The symbol demodulator B 230 is in communication with a processor B 240. Processor B 240 receives downlink pilot symbols from symbol demodulator B 230 and performs channel estimation on the downlink pilot symbols. In one aspect, the channel estimation is the process of characterizing the current propagation environment. The symbol demodulator B 230 receives a frequency response estimate for the downlink leg from processor B 240. The symbol demodulator B 230 performs data demodulation on the data symbols to obtain data symbol estimates on the downlink path. The data symbol estimates on the downlink path are estimates of the data symbols that were transmitted. The symbol demodulator B 230 is also in communication with a RX data processor B 250.

[0026] The RX data processor B 250 receives the data symbol estimates on the downlink path from the symbol demodulator B 230 and, for example, demodulates (i.e., symbol demaps), deinterleaves and/or decodes the data symbol estimates on the downlink path to recover the traffic data. In one aspect, the processing by the symbol demodulator B 230 and the RX data processor B 250 is complementary to the processing by the symbol modulator A 120 and TX data processor A 110, respectively.

[0027] In the uplink leg, the UE 201 includes a TX data processor B 260. The TX data processor B 260 accepts and processes traffic data to output data symbols. The TX data processor B 260 is in communication with a symbol modulator D 270. The symbol modulator D 270 accepts and multiplexes the data symbols with uplink pilot symbols, performs modulation and provides a stream of symbols. In one aspect, symbol modulator D 270 is in communication with processor B 240 which provides configuration information. The symbol modulator D 270 is in communication with a transmitter unit B 280.

[0028] Each symbol to be transmitted may be a data symbol, an uplink pilot symbol or a signal value of zero. The uplink pilot symbols may be sent continuously in each symbol period. In one aspect, the uplink pilot symbols are frequency division multiplexed (FDM). In another aspect, the uplink pilot symbols are orthogonal frequency division multiplexed (OFDM). In yet another aspect, the uplink pilot symbols are code division multiplexed (CDM).
In one aspect, the transmitter unit B 280 receives and converts the stream of symbols into one or more analog signals and further conditions, for example, amplifies, filters and/or frequency upconverts the analog signals, to generate an analog uplink signal suitable for wireless transmission. The analog uplink signal is then transmitted through antenna 210.

[0029] The analog uplink signal from UE 201 is received by antenna 140 and processed by a receiver unit A 150 to obtain samples. In one aspect, the receiver unit A 150 conditions, for example, filters, amplifies and frequency downconverts the analog uplink signal to a second "conditioned" signal. The second "conditioned" signal is then sampled. The receiver unit A 150 is in communication with a symbol demodulator C 160. One skilled in the art would understand that an alternative is to implement the sampling process in the symbol demodulator C 160. The symbol demodulator C 160 performs data demodulation on the data symbols to obtain data symbol estimates on the uplink path and then provides the uplink pilot symbols and the data symbol estimates on the uplink path to the RX data processor A 170. The data symbol estimates on the uplink path are estimates of the data symbols that were transmitted. The RX data processor A 170 processes the data symbol estimates on the uplink path to recover the traffic data transmitted by the wireless communication device 201. The symbol demodulator C 160 is also in communication with processor A 180. Processor A 180 performs channel estimation for each active terminal transmitting on the uplink leg. In one aspect, multiple terminals may transmit pilot symbols concurrently on the uplink leg on their respective assigned sets of pilot subbands, where the pilot subband sets may be interlaced.

[0030] Processor A 180 and processor B 240 direct (i.e., control, coordinate or manage, etc.) operation at the access node 101 (e.g., base station) and at the UE 201, respectively. In one aspect, either or both of processor A 180 and processor B 240 are associated with one or more memory units (not shown) for storing of program codes and/or data. In one aspect, either or both of processor A 180 and processor B 240 perform computations to derive frequency and impulse response estimates for the uplink leg and downlink leg, respectively.

[0031] In one aspect, the access node/UE system 100 is a multiple-access system. For a multiple-access system (e.g., frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), code division multiple access (CDMA), time division multiple access (TDMA), space division multiple access (SDMA), etc.), multiple terminals transmit concurrently on the uplink leg, allowing access to a plurality of UEs. In one aspect, for the multiple-access system, the pilot subbands may be shared among different terminals. Channel estimation techniques are used in cases where the pilot subbands for each terminal span the entire operating band (possibly except for the band edges). Such a pilot subband structure is desirable to obtain frequency diversity for each terminal.

[0032] Figure 2 illustrates an example of a wireless communications system 290 that supports a plurality of user devices. In Figure 2, reference numerals 292A to 292G refer to cells, reference numerals 298A to 298G refer to base stations (BS) or node Bs and reference numerals 296A to 296J refer to access user devices (a.k.a. user equipments (UE)). Cell size may vary.
Any of a variety of algorithms and methods may be used to schedule transmissions in system 290. System 290 provides communication for a number of cells 292A through 292G, each of which is serviced by a corresponding base station 298A through 298G, respectively.

[0033] One important characteristic of communications networks is the choice of wired or wireless media for the transmission of electrical signals among the network nodes. In the case of wired networks, tangible physical media such as copper wire, coaxial cable, fiber optic cable, etc. are employed to propagate guided electromagnetic waveforms which carry message traffic over a distance. Wired networks are a traditional form of communications networks and may be favored for interconnection of fixed network elements or for bulk data transfer. For example, fiber optic cables are often the preferred transmission media for very high throughput transport applications over long distances between large network hubs, for example, bulk data transport across or between continents over the Earth's surface.

[0034] On the other hand, in many cases, wireless networks are preferred when the network elements are mobile with dynamic connectivity or if the network architecture is formed in an ad hoc, rather than fixed, topology. Wireless networks employ intangible physical media in an unguided propagation mode using electromagnetic waves in the radio, microwave, infrared, optical, etc. frequency bands. Wireless networks have the distinct advantage of facilitating user mobility and rapid field deployment compared to fixed wired networks. However, usage of wireless propagation requires significant active resource management among the network users and high levels of mutual coordination and cooperation for compatible spectrum utilization.

[0035] For example, popular wireless network technologies include Bluetooth (BT) and wireless local area networks (WLAN). Bluetooth is a widely used wireless communications protocol to implement a personal area network (PAN) over very short distances, typically for a coverage area of a few meters radius, as an alternative to wired interconnection among local components. In one example, Bluetooth may be used to connect personal computers, personal digital assistants (PDA), mobile phones, wireless headsets, etc. Alternatively, a WLAN may be used to interconnect nearby devices together for both business and consumer applications, employing widely used networking protocols such as WiFi or, more generally, the IEEE 802.11 wireless protocol family, and connections between phones and laptops.

[0036] A consideration with wireless network technologies is that users often share the same radio frequency band in the same geographic area for transmission. Thus, co-channel interference is a problem that must be actively managed. For example, both Bluetooth and WLAN systems may use the same unlicensed Industrial, Scientific, and Medical (ISM) radio band centered near a frequency of 2.4 GHz. In one example, to save costs, a mobile device may share a common antenna which accesses both wireless technologies. To support user scenarios with simultaneous BT and WLAN operation, coexistence algorithms are required. Thus, a coexistence algorithm is needed to arbitrate usage between Bluetooth and WLAN access technologies for co-located wireless devices.

[0037] To mediate among co-existing wireless protocols, in one example, a coexistence mechanism is used.
Coexistence mechanisms may be either collaborative, where information is shared between the communicating parties, or non-collaborative, where information is not shared. One collaborative coexistence mechanism is known as packet traffic arbitration (PTA). PTA typically uses the media access control (MAC) layer for traffic control.

[0038] In current wireless practice, the packet traffic arbitration (PTA) protocol is used to implement coexistence among different access technologies. In one example, the PTA may be implemented through 2, 3, or 4 wire interfaces between BT and WLAN electronic chips in a wireless device. Each access technology makes channel requests for individual packets with an optional priority indication for that request. In one example, the arbitrator between the BT access technology and the WLAN technology operates at the medium access control (MAC) layer.

[0039] The PTA makes decisions on who gets access when both access technologies contend for a channel request simultaneously. This mechanism may prevent some collisions between the technologies for transmit traffic but does not prevent collisions between receive traffic. A collision is a conflict when two or more data sources attempt to transmit over the same medium at the same time.

[0040] PTA is typically implemented as a hardware interface and arbitration protocol within user devices between a WLAN module and a Bluetooth module, for example. In one example, the WLAN module and Bluetooth module are comprised of integrated circuits (ICs). The PTA typically comprises at least three signals:
• BT ACTIVE: asserted for the transmission duration
• BT STATUS: indicates priority and direction of the transmission
• TX CONFIRM: indicates whether transmission can proceed in the next slot
• FREQ (optional): indicates the frequency status

[0041] The PTA assumes in general that the WLAN module (e.g. WLAN IC) is the arbiter. Although PTA is a viable coexistence protocol, it has several limitations. For example, PTA requires three signals, it is not easily extendable, and it does not cover other wireless standards such as LTE or WiMax. PTA is a point-to-point protocol that does not scale up to support communication among more than two user devices in the system.

[0042] The present disclosure includes improvements to the PTA protocol. This improved protocol is called Enhanced PTA (ePTA). There are several requirements for ePTA to provide higher user satisfaction. In one aspect, a minimal (preferably one) off-chip input/output (I/O) interface is preferred. For example, a parallel digital interface for on-chip ePTA may be used for BT-WLAN integration. In another aspect, a larger information transfer is desired. For example, more bits should be allocated for radio activity, priority level, transmit/receive status, frame synchronization and frame utilization status, etc. This does not include RF sharing control bits (e.g. LNA gain, power amplifier setting, TX/RX switch control) since the power on reset (PoR) control is maintained by the BT/WLAN modules. In another aspect, the timing should be low latency (e.g. below 10 microseconds) for request or priority signals and have flexible response time. In one aspect, ePTA could replace the existing PTA protocol without loss of functionality or stricter timing requirements. Other desired attributes include multi-drop capability (e.g. BT, WLAN, and LTE/WiMax), functionality when one or more user devices are asleep, avoiding waking up devices from sleep, and low power utilization.
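The legacy PTA interface of paragraphs [0040] and [0041] can be pictured as a handful of binary signals and one arbitration decision. The C sketch below uses the signal names from the list above; the decision rule itself (grant a Bluetooth slot unless WLAN is busy on an overlapping frequency) is an assumption made for illustration, since the disclosure leaves the arbitration policy to the implementation.

```c
#include <stdbool.h>

/* Signals of the legacy PTA interface described above. */
struct pta_signals {
    bool bt_active;    /* BT ACTIVE: asserted for the transmission duration */
    bool bt_priority;  /* from BT STATUS: priority of the transmission */
    bool bt_transmit;  /* from BT STATUS: direction (true = transmit) */
    bool freq_overlap; /* FREQ (optional): frequency status */
};

/* The WLAN module acts as the arbiter: it drives TX CONFIRM to tell
 * Bluetooth whether its transmission may proceed in the next slot.
 * This particular policy is illustrative only. */
bool tx_confirm(const struct pta_signals *s, bool wlan_busy) {
    if (!s->bt_active)
        return false;                        /* nothing requested */
    if (s->bt_priority)
        return true;                         /* high-priority BT wins */
    return !(wlan_busy && s->freq_overlap);  /* deny only on a real conflict */
}
```

Note how the sketch makes the limitations listed in paragraph [0041] visible: the interface is strictly point to point (one Bluetooth requester, one WLAN arbiter), and extending it to a third radio would require more wires rather than more message bits.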
[0043] Figure 3 illustrates an example of an ePTA system. Shown are three example devices, each with its own reference clock. In one aspect, reference clocks are used for synchronizing communication among the devices. The three devices are interconnected through an ePTA bus which is a single line shared multi-drop data bus. The ePTA bus is pulled high by all devices at all times, but driven low for transmission. The reference clocks are required to be active for the actively communicating devices. In one aspect, the reference clocks do not need to be balanced or the same frequency since the ePTA bus is oversampled. In one example, a single wire serial bus interface (SSBI) is not used since it is a point-to-point, synchronous protocol.

[0044] Figure 4 illustrates an example of a physical interface for the ePTA system. In one example, scrambling codes are selected for good autocorrelation and cross-correlation properties. In one aspect, scrambling codes are used for device identification, initial synchronization, data whitening and collision detection. In one example, transmission starts with a sync byte (scrambled, in hex format 0xFF) and terminates with a stop byte (unscrambled, hex format 0xFF). Collision avoidance may be obtained via assigned device priority (e.g. back off time). When two devices are on the same die, direct (e.g. back office) connections are used instead.

[0045] Figure 5 illustrates an example of a signal timeline for two user devices. In one example, the first device goes first after initial synchronization and the second device accepts the sync word on time.

[0046] Figure 6 illustrates an example of a signal timeline for three user devices. In this example, all devices attempt to communicate at the same time after synchronization and all devices fail to accept the other's sync words the first time.
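The framing rules of paragraph [0044] (a scrambled 0xFF sync byte, a scrambled payload, and an unscrambled 0xFF stop byte) can be sketched directly in C. The XOR whitening sequence below is an assumption made purely for illustration; the disclosure requires only scrambling codes with good autocorrelation and cross-correlation properties, without fixing a particular code.

```c
#include <stdint.h>
#include <stddef.h>

#define SYNC_BYTE 0xFF   /* transmitted scrambled */
#define STOP_BYTE 0xFF   /* transmitted unscrambled */

/* Hypothetical per-device scrambling sequence; a real ePTA device would
 * use a code chosen for its correlation properties, which also serves
 * for device identification and data whitening. */
static const uint8_t scramble_code[] = { 0xB8, 0x1D, 0x6E, 0xA3 };

static uint8_t scramble(uint8_t b, size_t pos) {
    return b ^ scramble_code[pos % sizeof scramble_code];
}

/* Frame a payload for the single-line shared multi-drop ePTA bus.
 * 'frame' must hold len + 2 bytes; returns the frame length. */
size_t epta_frame(const uint8_t *payload, size_t len, uint8_t *frame) {
    size_t n = 0;
    frame[n++] = scramble(SYNC_BYTE, 0);          /* scrambled sync byte */
    for (size_t i = 0; i < len; i++)
        frame[n++] = scramble(payload[i], i + 1); /* whitened payload */
    frame[n++] = STOP_BYTE;                       /* unscrambled stop byte */
    return n;
}
```

On the wire, the bus is pulled high by all devices and driven low to transmit, so a device that fails to see its own bits echoed back can detect a collision; per the timelines of Figures 5 and 6, it would then back off for a time derived from its assigned priority before retrying.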
Following block 810, in block 820, convey a priority status. In one example, the priority status is that of a user device. In one example, the priority status is conveyed to a second user device, a base station, a network controller, a resource manager, etc. One skilled in the art would understand that the list of recipients of the priority status is not comprehensive or exclusive and that other recipients may be included without affecting the scope or spirit of the present disclosure. [0050] In block 830, convey an operational status. In one example, the operational status is that of a user device. In one example, the operational status is a bilevel status indicating that the user device is in either a transmit status or a receive status. In block 840, convey a frequency status. In one example, the frequency status is that of a user device. [0051] In one aspect, one or more of the conveying steps in blocks 820, 830 or 840 uses at least one scrambling code with good autocorrelation and cross-correlation properties. In one example, the scrambling codes are used for device identification, initial synchronization, data whitening and collision detection. For example, transmission starts with a sync byte (scrambled, hex format 0xFF) and terminates with a stop byte (unscrambled, hex format 0xFF). Avoiding collision may be achieved by assigning device priority (e.g., back-off time). In one aspect, the conveying steps in blocks 820, 830 and 840 all share a single-line shared multi-drop data bus, for example, an ePTA bus. In one example, the single-line shared multi-drop data bus is pulled high at all times, but driven low for transmission. In one example, the single-line shared multi-drop data bus is oversampled. [0052] One skilled in the art would understand that although the steps in blocks 820, 830 and 840 are written in a sequential manner and in a particular order, the steps can be performed in a parallel manner and in any order relative to each other. Additionally, either the operational status or the frequency status may be conveyed to a second user device, a base station, a network controller, a resource manager, etc. One skilled in the art would understand that the list of recipients indicated herein is not comprehensive or exclusive and that other recipients may be included without affecting the scope or spirit of the present disclosure. [0053] In block 850, commence a slot timing for use by the user device. In one example, the slot timing is a Bluetooth slot timing which operates between a HIGH state and a LOW state. [0054] One skilled in the art would understand that the steps disclosed in the example flow diagram in Figure 8 can be interchanged in their order without departing from the scope and spirit of the present disclosure. Also, one skilled in the art would understand that the steps illustrated in the flow diagram are not exclusive and other steps may be included, or one or more of the steps in the example flow diagram may be deleted, without affecting the scope and spirit of the present disclosure. [0055] Those of skill would further appreciate that the various illustrative components, logical blocks, modules, circuits, and/or algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, firmware, computer software, or combinations thereof.
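A compact way to see the flow of blocks 810 through 850 is as a sequence of writes to the shared bus. The sketch below is hypothetical: the Bus class and its field names are invented for illustration, and blocks 820 through 840 may just as well run in parallel or in another order, as noted in paragraph [0052].

    # Sketch of the Figure 8 flow; Bus and its field names are assumptions.
    class Bus:
        """Stand-in for the single-line shared multi-drop (ePTA) bus."""
        def send(self, field, value):
            print(f"{field} = {value}")

    def epta_announce(bus, priority: int, transmitting: bool, freq: int):
        bus.send("BT_ACTIVE", 1)                     # block 810: activate
        bus.send("PRIORITY", priority & 0b111)       # block 820: 3-bit priority
        bus.send("TX_RX", 1 if transmitting else 0)  # block 830: bilevel status
        bus.send("FREQ", freq)                       # block 840: frequency status
        bus.send("SLOT_TIMING", "start")             # block 850: commence slots

    epta_announce(Bus(), priority=3, transmitting=True, freq=2441)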
To clearly illustrate this interchangeability of hardware, firmware and software, various illustrative components, blocks, modules, circuits, and/or algorithm steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware, firmware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope or spirit of the present disclosure. [0056] For example, for a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof. With software, the implementation may be through modules (e.g., procedures, functions, etc.) that perform the functions described herein. The software codes may be stored in memory units and executed by a processor unit. Additionally, the various illustrative flow diagrams, logical blocks, modules and/or algorithm steps described herein may also be coded as computer-readable instructions carried on any computer-readable medium known in the art or implemented in any computer program product known in the art. [0057] In one or more examples, the steps or functions described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [0058] In one example, the illustrative components, flow diagrams, logical blocks, modules and/or algorithm steps described herein are implemented or performed with one or more processors.
In one aspect, a processor is coupled with a memory which stores data, metadata, program instructions, etc. to be executed by the processor for implementing or performing the various flow diagrams, logical blocks and/or modules described herein. Figure 9 illustrates an example of a device 900 comprising a processor 910 in communication with a memory 920 for executing the processes for enhanced packet traffic arbitration. In one example, the device 900 is used to implement the algorithm illustrated in Figure 8. In one aspect, the memory 920 is located within the processor 910. In another aspect, the memory 920 is external to the processor 910. In one aspect, the processor includes circuitry for implementing or performing the various flow diagrams, logical blocks and/or modules described herein. [0059] Figure 10 illustrates an example of a device 1000 suitable for enhanced packet traffic arbitration. In one aspect, the device 1000 is implemented by at least one processor comprising one or more modules configured to provide different aspects of enhanced packet traffic arbitration as described herein in blocks 1010, 1020, 1030, 1040 and 1050. For example, each module comprises hardware, firmware, software, or any combination thereof. In one aspect, the device 1000 is also implemented by at least one memory in communication with the at least one processor. [0060] The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the spirit or scope of the disclosure. |
Methods, systems, and devices for memory operations that support configuring a channel, such as a command/address (C/A) channel, are described. A configuration of a C/A channel may be dynamically adapted based on power saving considerations, control information execution latency, or both. Configuring a C/A channel may include determining a quantity of pins, a quantity of cycles, or both, for communicating control information over the C/A channel. The quantity of pins may be determined based on previous control information transmissions, characteristics of a memory device, or predicted control information transmissions, or any combination thereof in some cases. The determined quantity of pins, quantity of cycles, or both may be explicitly or implicitly indicated to other devices (e.g., that use the C/A channel). |
CLAIMS
What is claimed is:
1. A method, comprising:
determining a first quantity of pins of a channel for receiving one or more commands from a host device and a second quantity of cycles for receiving the one or more commands from the host device;
configuring a component coupled with the channel based at least in part on the first quantity of pins and the second quantity of cycles, the component comprising a receiver or a decoder or both; and
receiving a command over the channel based at least in part on configuring the component.
2. The method of claim 1, further comprising:
receiving, from the host device, an indication of the first quantity of pins for receiving the one or more commands, or the second quantity of cycles for the one or more commands, or both, wherein determining the first quantity of pins and the second quantity of cycles is based at least in part on the indication.
3. The method of claim 2, wherein receiving the indication comprises receiving, from the host device over the channel, a second command that includes the indication.
4. The method of claim 3, wherein the indication is communicated over one or more pins of the channel that are different than one or more pins over which the second command is communicated.
5. The method of claim 2, wherein receiving the indication comprises receiving, from the host device over the channel, the indication within a memory array access command.
6. The method of claim 2, further comprising:
determining that a duration since receiving the indication satisfies a lower-bound timing threshold, wherein receiving the command occurs after the lower-bound timing threshold is satisfied.
7. The method of claim 1, further comprising:
configuring a second component coupled with a second channel for receiving the one or more commands from the host device based at least in part on a third quantity of pins of the second channel and a fourth quantity of cycles for the one or more commands of the second channel, the second component comprising a second receiver or a second decoder or both; and
receiving a second command over the second channel based at least in part on configuring the second component.
8. The method of claim 1, further comprising:
identifying an initialization event of a memory device, wherein determining the first quantity of pins and determining the second quantity of cycles is based at least in part on identifying the initialization event.
9. The method of claim 1, further comprising:
identifying an operation parameter of a memory device, wherein determining the first quantity of pins or determining the second quantity of cycles is based at least in part on identifying the operation parameter of the memory device.
10. The method of claim 9, wherein the operation parameter comprises a power consumption parameter associated with the memory device, a third quantity of commands in a buffer of the memory device that satisfies a criteria, or both.
11. The method of claim 1, wherein determining the first quantity of pins for receiving the one or more commands is based at least in part on determining the second quantity of cycles for the one or more commands.
12. An apparatus, comprising:
a first receiver comprising a plurality of pins configured to receive one or more commands over a first channel;
a first decoder coupled with the first receiver and configured to decode one or more commands received over the first channel; and
a register coupled with the first receiver and the first decoder and programmable to configure a width of the first channel based at least in part on one or more commands received over the first channel.
13. The apparatus of claim 12, wherein the register is configured to determine a first quantity of pins of the first channel for receiving the one or more commands and is configured to determine a second quantity of cycles for the one or more commands.
14. The apparatus of claim 12, further comprising:
a second receiver comprising a plurality of pins configured to receive one or more commands over a second channel; and
a second decoder coupled with the second receiver and configured to decode one or more commands received over the second channel, wherein the register is programmable to configure a width of the second channel based at least in part on one or more commands received over the second channel, the register coupled with the second receiver and the second decoder.
15. The apparatus of claim 14, wherein the first channel is configured to communicate row commands and the second channel is configured to communicate column commands.
16. The apparatus of claim 14, wherein a first quantity of pins of the first channel is different than and is independently configurable from a second quantity of pins of the second channel.
17. A method, comprising:
determining a first quantity of pins of a channel configured to transmit one or more commands to a memory device and a second quantity of cycles for transmitting the one or more commands to the memory device;
configuring a component coupled with the channel based at least in part on the first quantity of pins of the channel and the second quantity of cycles for transmitting the one or more commands, the component comprising a driver or an encoder or both; and
transmitting, to the memory device, a command over the channel based at least in part on configuring the component.
18. The method of claim 17, further comprising:
transmitting, to the memory device, an indication of the first quantity of pins for transmitting the one or more commands, or the second quantity of cycles for the one or more commands, or both.
19. The method of claim 18, further comprising:
determining that a duration since transmitting the indication satisfies a lower-bound timing threshold, wherein transmitting the command occurs after the lower-bound timing threshold is satisfied based at least in part on the determination.
20. The method of claim 17, further comprising:
configuring a second component coupled with a second channel for transmitting the one or more commands to the memory device based at least in part on a third quantity of pins of the second channel and a fourth quantity of cycles for the one or more commands of the second channel, the second component comprising a second driver or a second encoder or both; and
transmitting a second command over the second channel based at least in part on configuring the second component.
21. The method of claim 17, further comprising:
identifying an operation parameter of the memory device, wherein determining the first quantity of pins or determining the second quantity of cycles is based at least in part on identifying the operation parameter of the memory device.
22. The method of claim 21, wherein the operation parameter is a power consumption parameter associated with the memory device, a third quantity of commands in a buffer of the memory device that satisfies a criteria, or both.
23. The method of claim 17, further comprising:
identifying a start-up event of the memory device, wherein determining the first quantity of pins and determining the second quantity of cycles is based at least in part on identifying the start-up event.
24. An apparatus, comprising:
a first driver comprising a plurality of pins configured to transmit one or more commands over a first channel;
a first encoder coupled with the first driver and configured to encode the one or more commands transmitted over the first channel; and
a register coupled with the first driver and the first encoder and programmable to configure a width of the first channel based at least in part on the one or more commands transmitted over the first channel.
25. The apparatus of claim 24, wherein the register is configured to determine a first quantity of pins of the first channel for transmitting the one or more commands and is configured to determine a second quantity of cycles for the one or more commands.
26. The apparatus of claim 24, further comprising:
a second driver comprising a plurality of pins configured to transmit the one or more commands over a second channel; and
a second encoder coupled with the second driver and configured to encode the one or more commands transmitted over the second channel, wherein the register is programmable to configure a width of the second channel based at least in part on the one or more commands transmitted over the second channel, the register coupled with the second driver and the second encoder. |
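The apparatus claims above describe hardware: a receiver or driver with a plurality of pins, a decoder or encoder, and a register programmable to set the channel width. As a software-only sketch under invented interfaces (the claims specify hardware, not an API), the relationship among the three parts might be modeled as follows:

    # Toy model of a register-configured channel width (interfaces are
    # assumptions made for illustration, not the claimed hardware design).
    class ModeRegister:
        def __init__(self, pins: int = 8, cycles: int = 1):
            self.pins, self.cycles = pins, cycles

    class Receiver:
        def __init__(self, register: ModeRegister):
            self.register = register
        def capture(self, samples):
            # keep only the samples on currently enabled pins
            return [s[: self.register.pins] for s in samples]

    class Decoder:
        def __init__(self, register: ModeRegister):
            self.register = register
        def decode(self, per_cycle_bits):
            # reassemble a command spread across the configured cycles
            return "".join(per_cycle_bits[: self.register.cycles])

    reg = ModeRegister(pins=4, cycles=2)
    rx, dec = Receiver(reg), Decoder(reg)
    print(dec.decode(rx.capture(["1011xxxx", "0010xxxx"])))  # -> "10110010"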
CONFIGURING COMMAND/ADDRESS CHANNEL FOR MEMORY
CROSS REFERENCE
[0001] The present Application for Patent claims priority to U.S. Patent Application No. 16/674,987 by Richter et al., entitled “CONFIGURING COMMAND/ADDRESS CHANNEL FOR MEMORY,” filed November 5, 2019, and U.S. Provisional Patent Application No. 62/771,420 by Richter et al., entitled “CONFIGURING COMMAND/ADDRESS CHANNEL FOR MEMORY,” filed November 26, 2018, each of which is assigned to the assignee hereof.
BACKGROUND
[0002] The following relates generally to operating a memory device and more specifically to configuring a command/address (C/A) channel.
[0003] Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, digital displays, and the like. Information is stored by programming different states of a memory device. For example, binary devices have two states, often denoted by a logic “1” or a logic “0.” In other systems, more than two states may be stored. To access the stored information, a component of the electronic device may read, or sense, the stored state in the memory device. To store information, a component of the electronic device may write, or program, the state in the memory device.
[0004] Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. Non-volatile memory (e.g., FeRAM, PCM, RRAM) may maintain their stored logic state for extended periods of time even in the absence of an external power source. Volatile memory devices (e.g., DRAM) may lose their stored state over time unless they are periodically refreshed by an external power source.
[0005] Control information for operating/accessing a memory device may be communicated between an external controller and a memory device. In some cases, channels between a host device and a memory device may use one or more pins for communication.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 illustrates aspects of an exemplary system that supports configuring a command/address (C/A) channel as disclosed herein.
[0007] FIG. 2 illustrates aspects of an exemplary memory device that supports configuring a C/A channel as disclosed herein.
[0008] FIG. 3 illustrates aspects of an exemplary system that supports configuring a C/A channel as disclosed herein.
[0009] FIGs. 4A and 4B illustrate aspects of exemplary host devices that support configuring a C/A channel as disclosed herein.
[0010] FIGs. 5A and 5B illustrate aspects of exemplary device controllers that support configuring a C/A channel as disclosed herein.
[0011] FIGs. 6A through 6C illustrate exemplary timing diagrams for configuring a C/A channel as disclosed herein.
[0012] FIG. 7 illustrates a process flow for configuring a C/A channel as disclosed herein.
[0013] FIG. 8 illustrates a block diagram representing aspects of a controller that supports configuring a C/A channel as disclosed herein.
[0014] FIGs. 9 and 10 illustrate flowcharts of a method or methods for configuring a C/A channel as disclosed herein.
DETAILED DESCRIPTION
[0015] Data and control information may be communicated in a system that supports the processing and storage of data. In some cases, data may include information for operating a user application, such as data for a word processing application.
For memory access operations, control information may be used to enable the storing and reading of data in a memory array. [0016] In some cases, control information is generated at an external controller (or “host device”) to access (e.g., to read from or write to) a memory array— e.g., in response to receiving a request from a user application to access or to store data in the memory array. In some cases, the generated control information includes a command for accessing the memory array. Some commands for accessing a memory array include an activation (“ACT”) command, a read (“RD”) command, a write (“WR”) command, and a precharge (“PRE”) command. The generated control information may also include a memory address that indicates a memory cell or set of memory cells that are subject to a corresponding command. A memory address may include a memory bank address, a row address, and/or a column address. Commands that are associated with a row address may be referred to as “row commands” and commands that are associated with a column address may be referred to as “column commands.” In some cases, a size of certain command/address (C/A) combinations is larger than a size of other C/A combinations— e.g., a row ACT command may be larger than a column read command. [0017] After the control information is generated, an information signal representing the control information may be produced at the external controller. To deliver the signal to the memory array, the external controller may apply the information signal to pins (or “nodes”) located at the external controller. The pins may provide an interface between the interior of the external controller and a transmission path that connects the external controller to other devices in a system, such as the memory device. That is, pins may be used to distribute internally generated signals from one device to another device. In some examples, multiple pins may be used to communicate information signals that simultaneously convey multiple bits of information. In some cases, as the quantity of pins used to communicate an information signal increases, the amount of information bits that may be conveyed in a particular period of time also increases. However, a footprint (or physical size) of a device often increases as the quantity of pins at the device increases. [0018] In some examples, to avoid increasing a footprint of a device, a device may be configured with a decreased quantity of pins for transmitting an information signal. Additionally, power consumption at a device may be decreased by using fewer pins. But as suggested above, decreasing the quantity of pins may decrease the amount of information bits that can be conveyed in a particular period of time. Thus, larger pieces of information— e.g., a row ACT command— may be communicated over multiple time periods (or “cycles”)— e.g., because all of the bits included in the row ACT command cannot be sent in a single cycle over the decreased quantity of pins. And increasing the duration of information transmissions may introduce latency into a memory system— e.g., by delaying the execution of other commands— and/or decrease the throughput of a memory system. [0019] In some cases, control information is signaled serially over a C/A channel— e.g., a row ACT command may be sent, then a PRE command, and so on. In such cases, an increase in transmission time for one command may delay the transmission of one or more subsequent commands.
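The pin/cycle trade-off above reduces to simple arithmetic: the number of cycles needed is the command size divided by the bits the configured pins can carry per cycle, rounded up. The command sizes in the sketch below are hypothetical placeholders, not values from this disclosure.

    # Cycles needed to move a command across a configured set of pins.
    import math

    def cycles_needed(command_bits: int, active_pins: int,
                      bits_per_pin_per_cycle: int = 1) -> int:
        return math.ceil(command_bits / (active_pins * bits_per_pin_per_cycle))

    # Hypothetical sizes: a 24-bit row ACT command vs. a 14-bit column read.
    for bits in (24, 14):
        print(bits, "bits:", cycles_needed(bits, 4), "cycles on 4 pins,",
              cycles_needed(bits, 8), "cycle(s) on 8 pins")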
In some cases, serially transmitted commands may otherwise be processed in parallel (e.g., concurrently) at a memory device; thus, delaying the transmission of subsequent commands may introduce additional latency into a memory system. In some examples, the transmission delay for commands may be decreased by concurrently transmitting row commands over a row C/A channel and column commands over a column C/A channel. That said, some delay in transmitting commands may remain— e.g., between consecutive row and/or column commands. [0020] Also, when larger commands, like row ACT commands, are sent at a high rate, the latency introduced into the memory system by increasing command transmission time may be compounded. The rate at which row ACT commands are issued is related to the “page hit rate”— a high page hit rate is associated with a low rate of ACT commands. Thus, decreasing the pins used at a device may save power at the device but introduce latency into information transmissions from the device, while increasing the pins used at a device may reduce latency in the information transmissions but increase a footprint and power consumption of the device. [0021] To reduce power consumption at a device while mitigating latency in the transmission and execution of consecutive control information, the quantity of pins and/or cycles used to communicate control information may be dynamically configured. [0022] In some cases, the quantity of pins and/or cycles used to communicate control information may be configured based on previously observed information. For example, an external memory controller may activate additional pins after identifying that a quantity of control information waiting to be sent has exceeded a threshold or by counting a quantity of unused command slots over a past period of time. After or concurrently with activating the additional pins, the external memory controller may also reduce a quantity of cycles used to transmit control information. By reducing the amount of time for transmitting individual pieces of control information, a backlog of control information in a queue may be reduced. Similarly, the external memory controller may deactivate certain pins and increase a quantity of cycles used to transmit control information after identifying that a quantity of control information waiting to be sent is below a threshold. [0023] In some cases, the quantity of pins and/or cycles used to communicate control information may be adapted based on currently observed information. For example, an external memory controller may deactivate a quantity of pins after identifying that a temperature of the memory device is below a threshold— e.g., because fewer refresh commands may be sent at lower temperatures. [0024] In some cases, the quantity of pins and/or cycles used to communicate control information is adapted based on predicted information. For example, an external memory controller may deactivate a quantity of pins after identifying that a page hit rate for the memory device exceeds a threshold value. A page hit rate may be associated with a quantity of times a row of memory cells in a memory bank is accessed before another row of memory cells in the same memory bank is accessed. [0025] In any event, the external memory controller may indicate to a memory device how many and/or which pins are activated at the memory controller, and the memory device may similarly activate those pins to receive control information from the memory controller.
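The backlog-driven policy in paragraph [0022] amounts to a watermark scheme. In the sketch below, the watermark values, pin counts, and queue model are all assumptions chosen for illustration.

    # Watermark-style width selection; all constants are hypothetical.
    HIGH_WATER = 8   # pending commands above this: widen the channel
    LOW_WATER = 2    # pending commands below this: narrow it to save power

    def choose_width(pending_commands: int, current_pins: int,
                     narrow: int = 4, wide: int = 8) -> int:
        if pending_commands > HIGH_WATER:
            return wide       # more pins, fewer cycles per command
        if pending_commands < LOW_WATER:
            return narrow     # fewer pins, more cycles, lower power
        return current_pins   # hysteresis: keep the current configuration

    print(choose_width(pending_commands=12, current_pins=4))  # -> 8
    print(choose_width(pending_commands=1, current_pins=8))   # -> 4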
In some cases, the external memory controller or a controller on the memory die may determine the quantity of pins to use and/or the cycles for transmitting commands based on previously observed information, currently observed information, or predicted information, or a combination thereof. [0026] In some cases, a specialized component is used to support the dynamic adaptation of the quantity of pins and/or cycles (which may also be referred to as the “C/A channel configuration”) used to transmit control information. For example, a bus width configuration component may be included in an external memory controller. The bus width configuration component may be configured to determine a quantity of pins and/or cycles to use for subsequent transmissions of control information to a memory device— e.g., based on backward- and/or forward-looking information. The bus width configuration component may be further configured to indicate to an encoder at the external memory controller the determined quantity of pins and/or cycles. For example, the bus width configuration component may be configured to signal a value corresponding to a particular quantity of pins to the encoder, and the encoder may generate a command based on the received value. The bus width configuration component may also indicate to a transmitter at the external memory controller the determined quantity of pins and/or cycles, and the transmitter may activate/deactivate particular drivers corresponding to particular pins at the external memory controller. [0027] Similarly, a second bus width configuration component may be included in a memory device. The second bus width configuration component may be configured to store a quantity of pins and/or cycles to use for subsequent transmissions of control information to a memory device— e.g., based on backward- and/or forward-looking information, a received configuration message, or receiving an indication of what pins have been disabled after a reset. The second bus width configuration component may be further configured to indicate to a decoder at the memory device the determined quantity of pins and/or cycles. For example, the second bus width configuration component may be configured to signal a value corresponding to a particular quantity of pins to the decoder, and the decoder may decode a received signal based on the received value. The second bus width configuration component may also indicate to a receiver at the memory device the determined quantity of pins and/or cycles, and the receiver may activate/deactivate particular drivers coupled with the corresponding pins at the memory device. [0028] Features of the disclosure are described below in the context of a memory system in FIGs. 1 through 3. Features of the disclosure are described in the context of circuit diagrams, timing diagrams, and a process flow in FIGs. 4A through 7. These and other features of the disclosure are further illustrated by and described with reference to FIGs. 8 through 10, which include an apparatus diagram and flowcharts that relate to configuring a C/A channel. [0029] FIG. 1 illustrates aspects of an exemplary system that utilizes one or more memory devices that support configuring a C/A channel as disclosed herein. [0030] The system 100 may include an external memory controller 105, a memory device 110, and a plurality of channels 115 coupling the external memory controller 105 with the memory device 110.
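One way to picture the controller-side wiring in paragraph [0026] is as a small object graph: the bus width configuration component pushes one width value to both the encoder and the transmitter. The classes and method names below are invented for this sketch; the disclosure describes hardware blocks, not a software API.

    # Toy model of the controller-side bus width configuration component.
    class Encoder:
        def __init__(self):
            self.pins = 8
        def set_width(self, pins: int):
            self.pins = pins  # encoder packs command bits across `pins`

    class Transmitter:
        def __init__(self):
            self.enabled_drivers = set(range(8))
        def set_width(self, pins: int):
            # activate/deactivate the drivers corresponding to each pin
            self.enabled_drivers = set(range(pins))

    class BusWidthConfig:
        def __init__(self, encoder: Encoder, transmitter: Transmitter):
            self.encoder, self.transmitter = encoder, transmitter
        def apply(self, pins: int):
            self.encoder.set_width(pins)
            self.transmitter.set_width(pins)

    cfg = BusWidthConfig(Encoder(), Transmitter())
    cfg.apply(4)  # drop to a four-pin configuration on both blocks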
The system 100 may include one or more memory devices, but for ease of description the one or more memory devices may be described as a single memory device 110. [0031] The system 100 may include aspects of an electronic device, such as a computing device, a mobile computing device, a wireless device, or a graphics processing device. The system 100 may be an example of a portable electronic device. The system 100 may be an example of a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, or the like. The memory device 110 may be a component of the system configured to store data for one or more other components of the system 100. In some examples, the system 100 is configured for bi-directional wireless communication with other systems or devices using a base station or access point. In some examples, the system 100 is capable of machine-type communication (MTC), machine-to-machine (M2M) communication, or device-to-device (D2D) communication. [0032] At least portions of the system 100 may be examples of a host device. Such a host device may be an example of a device that uses memory to execute processes, such as a computing device, a mobile computing device, a wireless device, a graphics processing device, a graphics processing unit (GPU), a computer, a laptop computer, a tablet computer, a smartphone, a cellular phone, a wearable device, an internet-connected device, some other stationary or portable electronic device, or the like. In some cases, the host device may refer to the hardware, firmware, software, or a combination thereof that implements the functions of the external memory controller 105. In some cases, the external memory controller 105 may be referred to as a host or a host device. In some examples, system 100 may be a graphics card. The host device may include a plurality of drivers and a plurality of channels linking the host device with the memory device. [0033] In some cases, a memory device 110 may be an independent device or component that is configured to be in communication with other components of the system 100 and provide physical memory addresses/space to potentially be used or referenced by the system 100. In some examples, a memory device 110 may be configurable to work with at least one or a plurality of different types of systems 100. Signaling between the components of the system 100 and the memory device 110 may be operable to support modulation schemes to modulate the signals, different pin designs for communicating the signals, distinct packaging of the system 100 and the memory device 110, clock signaling and synchronization between the system 100 and the memory device 110, timing conventions, and/or other factors. [0034] The memory device 110 may be configured to store data for the components of the system 100. In some cases, the memory device 110 may act as a slave-type device to the system 100 (e.g., responding to and executing commands provided by the system 100 through the external memory controller 105). Such commands may include an access command for an access operation, such as a write command for a write operation, a read command for a read operation, a refresh command for a refresh operation, or other commands. An access command may, in some cases, be a command that prompts a memory device to store or read data from one or more memory cells.
The memory device 110 may include two or more memory dice 160 (e.g., memory chips) to support a desired or specified capacity for data storage. The memory device 110 including two or more memory dice may be referred to as a multi-die memory or package (also referred to as multi-chip memory or package). [0035] The system 100 may further include a processor 120, a basic input/output system (BIOS) component 125, one or more peripheral components 130, and an input/output (I/O) controller 135. The components of system 100 may be coupled with or in electronic communication with one another using a bus 140. [0036] The processor 120 may be configured to control at least portions of the system 100. The processor 120 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or it may be a combination of these types of components. In such cases, the processor 120 may be an example of a central processing unit (CPU), a GPU, a general purpose graphic processing unit (GPGPU), or a system on a chip (SoC), among other examples. [0037] In some cases, the processor 120 may be incorporated into or part of the external memory controller 105. In some cases, the processor 120 may be a GPU. The processor 120 may perform aspects of configuring bus transmission lines (e.g., data bus transmission lines) as described herein. For example, the processor 120 may divide a data bus into two sets of transmission lines: a first set to transfer control signals, and a second set to transfer data signals. If the quantity of data and control signals to be transferred changes, the processor 120 may reassign or reconfigure transmission lines from one set to another set to increase the efficiency and use of the bus. [0038] The BIOS component 125 may be a software component that includes a BIOS operated as firmware, which may initialize and run various hardware components of the system 100. The BIOS component 125 may also manage data flow between the processor 120 and the various components of the system 100, e.g., the peripheral components 130, the I/O controller 135, etc. The BIOS component 125 may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory. [0039] The peripheral component(s) 130 may be any input device or output device, or an interface for such devices, that may be integrated into or with the system 100. Examples may include disk controllers, sound controller, graphics controller, Ethernet controller, modem, universal serial bus (USB) controller, a serial or parallel port, or peripheral card slots, such as peripheral component interconnect (PCI) or specialized graphics ports. The peripheral component(s) 130 may be other components as would be understood by persons of ordinary skill in the art as peripherals. [0040] The I/O controller 135 may manage data communication between the processor 120 and the peripheral component(s) 130, input devices 145, or output devices 150. The I/O controller 135 may manage peripherals that are not integrated into or with the system 100. In some cases, the I/O controller 135 may represent a physical connection or port to external peripheral components. [0041] The input device 145 may represent a device or signal external to the system 100 that may provide information, signals, or data to the system 100 or its components.
This may include a user interface or interface with or between other devices. In some cases, the input device 145 may be a peripheral that interfaces with system 100 via one or more peripheral components 130 or may be managed by the I/O controller 135. [0042] The output device 150 may represent a device or signal external to the system 100 configured to receive an output from the system 100 or any of its components. Examples of the output device 150 may include a display, audio speakers, a printing device, or another processor on a printed circuit board, etc. In some cases, the output device 150 may be a peripheral that interfaces with the system 100 via one or more peripheral components 130 or may be managed by the I/O controller 135. [0043] The components of system 100 may be made up of general-purpose or special purpose circuitry designed to carry out their functions. This may include output driver circuitry and various other circuit elements, for example, conductive lines, transistors, capacitors, inductors, resistors, amplifiers, or other active or passive elements, configured to carry out the functions described herein. [0044] The memory device 110 may include a device memory controller 155 and one or more memory dice 160. Each memory die 160 may include a local memory controller 165 (e.g., local memory controller 165-a, local memory controller 165-b, and/or local memory controller 165-N) and a memory array 170 (e.g., memory array 170-a, memory array 170-b, and/or memory array 170-N). A memory array 170 may be a collection (e.g., a grid) of memory cells, with each memory cell being configured to store at least one bit of digital data. Features of memory arrays 170 and/or memory cells are further described with reference to FIG. 2. [0045] The memory arrays 170 may be examples of two-dimensional (2D) arrays of memory cells or may be examples of three-dimensional (3D) arrays of memory cells. For example, a 2D memory device may include a single memory die 160. A 3D memory device may include two or more memory dice 160 (e.g., memory die 160-a, memory die 160-b, and/or any quantity of memory dice 160-N). In a 3D memory device, multiple memory dice 160-N may be stacked on top of one another. In some cases, memory dice 160-N in a 3D memory device may be referred to as decks, levels, layers, or dies. A 3D memory device may include any quantity of stacked memory dice 160-N (e.g., two high, three high, four high, five high, six high, seven high, eight high). This may increase the quantity of memory cells that may be positioned on a substrate as compared with a single 2D memory device, which in turn may reduce production costs, increase the performance of the memory array, or both. In some 3D memory devices, different decks may share at least one common access line such that some decks may share at least one of a word line, a digit line, and/or a plate line. [0046] The device memory controller 155 may include circuits or components configured to control operation of the memory device 110. As such, the device memory controller 155 may include the hardware, firmware, and software that enables the memory device 110 to perform commands and may be configured to receive, transmit, or execute commands, data, or control information related to the memory device 110. The device memory controller 155 may perform, or facilitate, aspects of configuring data bus transmission lines as described herein.
For example, the device memory controller 155 may receive control and data signals over multiple different sets of transmission lines that may be part of a data bus. When the two sets of transmission lines are reconfigured, the device memory controller 155 may receive control and/or data signals over the reconfigured transmission lines. [0047] The device memory controller 155 may be configured to communicate with the external memory controller 105, the one or more memory dice 160, or the processor 120. In some cases, the memory device 110 may receive data and/or control signals (e.g., commands and addresses) from the external memory controller 105. For example, the memory device 110 may receive a write command indicating that the memory device 110 is to store certain data on behalf of a component of the system 100 (e.g., the processor 120) or a read command indicating that the memory device 110 is to provide certain data stored in a memory die 160 to a component of the system 100 (e.g., the processor 120). In some cases, the device memory controller 155 may control operation of the memory device 110 described herein in conjunction with the local memory controller 165 of the memory die 160. Examples of the components included in the device memory controller 155 and/or the local memory controllers 165 may include receivers for demodulating signals received from the external memory controller 105, encoders for modulating and transmitting signals to the external memory controller 105, logic, decoders, amplifiers, filters, or the like. [0048] The local memory controller 165 (e.g., local to a memory die 160) may be configured to control operations of the memory die 160. Also, the local memory controller 165 may be configured to communicate (e.g., receive and transmit data and/or commands) with the device memory controller 155. The local memory controller 165 may support the device memory controller 155 to control operation of the memory device 110 described herein. In some cases, the memory device 110 does not include the device memory controller 155, and the local memory controller 165 or the external memory controller 105 may perform the various functions described herein. As such, the local memory controller 165 may be configured to communicate with the device memory controller 155, with other local memory controllers 165, or directly with the external memory controller 105 or the processor 120. [0049] The external memory controller 105 may be configured to enable communication of information, data, commands, and/or addresses between components of the system 100 (e.g., the processor 120) and the memory device 110. The external memory controller 105 may act as a liaison between the components of the system 100 and the memory device 110 so that the components of the system 100 may not need to know the details of the memory device’s operation. The components of the system 100 may present requests to the external memory controller 105 (e.g., read commands or write commands) that the external memory controller 105 satisfies. The external memory controller 105 may convert or translate communications exchanged between the components of the system 100 and the memory device 110. In some cases, the external memory controller 105 may include a system clock that generates a common (source) system clock signal. In some cases, the external memory controller 105 may include a common data clock that generates a common (source) data clock signal.
The data clock signal may provide timing for multi-level signals sent over channels 115. For example, the data clock may provide timing information for determining the duration of symbol periods of a multi-level signal. [0050] In some cases, the external memory controller 105 or other components of the system 100, or its functions described herein, may be implemented by the processor 120. For example, the external memory controller 105 may be hardware, firmware, or software, or some combination thereof implemented by the processor 120 or other component of the system 100. Although the external memory controller 105 is depicted as being external to the memory device 110, in some cases, the external memory controller 105, or its functions described herein, may be implemented by a memory device 110. For example, the external memory controller 105 may be hardware, firmware, or software, or some combination thereof implemented by the device memory controller 155 or one or more local memory controllers 165. In some cases, the external memory controller 105 may be distributed across the processor 120 and the memory device 110 such that portions of the external memory controller 105 are implemented by the processor 120 and other portions are implemented by a device memory controller 155 or a local memory controller 165. Likewise, in some cases, one or more functions ascribed herein to the device memory controller 155 or local memory controller 165 may in some cases be performed by the external memory controller 105 (either separate from or as included in the processor 120). [0051] The components of the system 100 may exchange information with the memory device 110 using a plurality of channels 115. In some examples, the channels 115 may enable communications between the external memory controller 105 and the memory device 110. Each channel 115 may include one or more signal paths or transmission mediums (e.g., conductors) between terminals associated with the components of system 100. For example, a channel 115 may include a first terminal including one or more pins or pads at external memory controller 105 and one or more pins or pads at the memory device 110. A pin may be an example of a conductive input or output point of a device of the system 100, and a pin may be configured to act as part of a channel. In some cases, a pin or pad of a terminal may be part of a signal path of the channel 115. [0052] Additional signal paths may be coupled with a terminal of a channel for routing signals within a component of the system 100. For example, the memory device 110 may include signal paths (e.g., signal paths internal to the memory device 110 or its components, such as internal to a memory die 160) that route a signal from a terminal of a channel 115 to the various components of the memory device 110 (e.g., a device memory controller 155, memory dice 160, local memory controllers 165, memory arrays 170). A signal path may be implemented using one or more types of transmission lines, including differential transmission lines and single-ended transmission lines. [0053] Channels 115 (and associated signal paths and terminals) may be dedicated to communicating specific types of information. In some cases, a channel 115 may be an aggregated channel and thus may include multiple individual channels.
For example, a data channel 190 may be x4 (e.g., including four signal paths), x8 (e.g., including eight signal paths), x16 (e.g., including sixteen signal paths), etc. [0054] In some cases, the channels 115 may include one or more C/A channels 186. The C/A channels 186 may be configured to communicate commands between the external memory controller 105 and the memory device 110, including control information associated with the commands (e.g., address information). For example, the C/A channel 186 may include a read command with an address of the desired data. In some cases, the C/A channels 186 may be registered on a rising clock signal edge or a falling clock signal edge using a technique which may be referred to as single data rate (SDR) signaling, or on both rising and falling clock signal edges using a technique which may be referred to as double data rate (DDR) signaling. In some cases, a C/A channel 186 may include eight or nine signal paths. [0055] In some cases, the channels 115 may include a row C/A channel and a column C/A channel. The row C/A channel may be configured to communicate row commands between the external memory controller 105 and the memory device 110, such as activation and precharge commands. The column C/A channel may be configured to communicate column commands between the external memory controller 105 and the memory device 110, such as read and write commands. In some cases, the row C/A channel may be larger than the column C/A channel to accommodate larger row commands that convey more information than column commands. In such cases, the row C/A channel may be configured to include more pins and/or signal paths than the column C/A channel. [0056] In some cases, the channels 115 may include one or more clock signal (CK) channels 188. The CK channels 188 may be configured to communicate one or more common clock signals between the external memory controller 105 and the memory device 110. Each clock signal may be configured to adjust (e.g., oscillate) between a high state and a low state and coordinate the actions of the external memory controller 105 and the memory device 110. In some cases, the clock signal may be a differential output (e.g., a CK_t signal and a CK_c signal) and the signal paths of the CK channels 188 may be configured accordingly. In some cases, the clock signal may be single ended. A CK channel 188 may include any quantity of signal paths. In some cases, the clock signal CK (e.g., a CK_t signal and a CK_c signal) may provide a timing reference for command and addressing operations for the memory device 110, or other system-wide operations for the memory device 110. The clock signal CK therefore may be variously referred to as a control clock signal CK, a command clock signal CK, or a system clock signal CK. The system clock signal CK may be generated by a system clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors, or the like). [0057] In some cases, the channels 115 may include one or more data (DQ) channels 190. For example, the channels 115 may include data channels 190-1 through 190-n. Each data channel may be associated with or include one or more transmission lines. The data channels 190 may be configured to communicate data and/or control information between the external memory controller 105 and the memory device 110.
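Rough throughput arithmetic follows from the SDR/DDR distinction in paragraph [0054]: DDR registers commands on both clock edges, doubling the bits per cycle for a given pin count. The nine-pin width comes from the text; the 1 GHz clock is an assumption for the example.

    # C/A throughput under SDR vs. DDR registration (clock rate assumed).
    def ca_bits_per_second(pins: int, clock_hz: float, ddr: bool) -> float:
        edges_per_cycle = 2 if ddr else 1  # DDR uses both clock edges
        return pins * edges_per_cycle * clock_hz

    print(ca_bits_per_second(9, 1e9, ddr=False))  # 9.0e9 bits/s with SDR
    print(ca_bits_per_second(9, 1e9, ddr=True))   # 1.8e10 bits/s with DDR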
For example, the data channels 190 may communicate information (e.g., bi-directionally) to be written to the memory device 110 or information read from the memory device 110. The data channels 190 may communicate signals that may be modulated using a variety of different modulation schemes (e.g., non-return-to-zero (NRZ) signaling, or pulse amplitude modulation (PAM) signaling). [0058] In some cases, the channels 115 may include one or more other channels 192 that may be dedicated to other purposes. These other channels 192 may include any quantity of signal paths. In some cases, the other channels 192 may include one or more write clock signal (WCK) channels. Although the ‘W’ in WCK may nominally stand for “write,” a write clock signal WCK (e.g., a WCK_t signal and a WCK_c signal) may provide a timing reference for access operations generally for the memory device 110 (e.g., a timing reference for both read and write operations). Accordingly, the write clock signal WCK may also be referred to as a data clock signal WCK. [0059] The WCK channels may be configured to communicate a common data clock signal between the external memory controller 105 and the memory device 110. The data clock signal may be configured to coordinate an access operation (e.g., a write operation or read operation) of the external memory controller 105 and the memory device 110. In some cases, the write clock signal may be a differential output (e.g., a WCK_t signal and a WCK_c signal) and the signal paths of the WCK channels may be configured accordingly. A WCK channel may include any quantity of signal paths. The data clock signal WCK may be generated by a data clock, which may include one or more hardware components (e.g., oscillators, crystals, logic gates, transistors, or the like). [0060] In some cases, the other channels 192 may include one or more error detection code (EDC) channels. The EDC channels may be configured to communicate error detection signals, such as checksums, to improve system reliability. An EDC channel may include any quantity of signal paths. [0061] The channels 115 may couple the external memory controller 105 with the memory device 110 using a variety of different architectures. Examples of the various architectures may include a bus, a point-to-point connection, a crossbar, a high-density interposer such as a silicon interposer, or channels formed in an organic substrate, or some combination thereof. For example, in some cases, the signal paths may at least partially include a high-density interposer, such as a silicon interposer or a glass interposer. [0062] Signals communicated over the channels 115 (and their associated transmission lines) may be modulated using a variety of different modulation schemes. In some cases, a binary-symbol (or binary-level) modulation scheme may be used to modulate signals communicated between the external memory controller 105 and the memory device 110. A binary-symbol modulation scheme may be an example of an M-ary modulation scheme where M is equal to two. Each symbol of a binary-symbol modulation scheme may be configured to represent one bit of digital data (e.g., a symbol may represent a logic 1 or a logic 0).
Examples of binary-symbol modulation schemes include, but are not limited to, NRZ, unipolar encoding, bipolar encoding, Manchester encoding, PAM having two symbols (e.g., PAM2), and/or others. [0063] In some cases, a multi-symbol (or multi-level) modulation scheme may be used to modulate signals communicated between the external memory controller 105 and the memory device 110. A multi-symbol modulation scheme may be an example of an M-ary modulation scheme where M is greater than or equal to three. Each symbol of a multi-symbol modulation scheme may be configured to represent more than one bit of digital data (e.g., a symbol may represent a logic 00, a logic 01, a logic 10, or a logic 11). Examples of multi-symbol modulation schemes include, but are not limited to, PAM4, PAM8, etc., quadrature amplitude modulation (QAM), quadrature phase shift keying (QPSK), and/or others. A multi-symbol signal (e.g., a PAM4 signal) may be a signal that is modulated using a modulation scheme that includes at least three levels to encode more than one bit of information. Multi-symbol modulation schemes and symbols may alternatively be referred to as non-binary, multi-bit, or higher-order modulation schemes and symbols. [0064] Data and control information may be communicated within a system 100. For example, data and control information may be communicated between an external controller (e.g., an external memory controller 105) and a memory device (e.g., a memory device 110). Data may include information created by a user application and may be conveyed in data signaling. [0065] Control information may include information that supports the flow of data within the system 100 and may be conveyed in control signaling. In some cases, the control information includes commands that direct the memory device to perform certain operations at a memory device. A memory operation may refer to an operation that manipulates one or more memory cells, access lines (e.g., word lines, digit lines, or plate lines), or memory banks. An access operation may refer to a subset of memory operations that involve or result in data being written to or read from a memory cell in a memory device. Possible commands include precharge, row activation, read, and write commands. [0066] The control information may also include a memory address (e.g., a row or column address) that identifies a particular memory cell or set of memory cells. As discussed herein, the address of one or more memory cells may be represented in a signal by a quantity of address bits indicating a bank address, column address, and/or row address. For example, when memory device 310 includes sixteen (16) banks, and each of those banks includes one hundred and twenty-eight (128) columns and 16384 rows (e.g., as in a 16 Gb GDDR6 architecture or an 8 Gb GDDR5 architecture), the bank address may be represented by four address bits, the column address may be represented by seven address bits, and the row address may be represented by fourteen address bits. In some cases, a row of memory cells may be referred to as a memory page. In another example (e.g., in a 16 Gb DDR4 x16 architecture), memory device 310 may include eight (8) banks, and each of those banks includes 1024 columns and 131072 rows, in which case the bank address may be represented by three address bits, the column address may be represented by ten address bits, and the row address may be represented by 17 address bits.
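The address-bit counts in paragraph [0066] follow directly from base-2 logarithms of the bank, column, and row counts, as this small check shows:

    # Address width check for the two architectures quoted above.
    import math

    def address_bits(n: int) -> int:
        """Bits needed to address n banks, columns, or rows."""
        return math.ceil(math.log2(n))

    # GDDR6/GDDR5-style: 16 banks, 128 columns, 16384 rows -> 4, 7, 14 bits.
    print(address_bits(16), address_bits(128), address_bits(16384))
    # DDR4 x16-style: 8 banks, 1024 columns, 131072 rows -> 3, 10, 17 bits.
    print(address_bits(8), address_bits(1024), address_bits(131072))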
However, these are illustrative examples and other quantities of address bits may be used and are specifically contemplated. [0067] A control signal including both a command and a memory address may indicate that a command applies to a memory cell or set of memory cells identified by the address. In some cases, a control signal including both a command and a memory address is referred to as a C/A signal. In one example, a C/A signal including an activate (ACT) command and a row address may indicate to a memory device that a row of memory cells is to be activated in preparation for or in conjunction with another memory operation (e.g., a read or write operation). Activating a memory cell may refer to energizing the word line corresponding to that memory cell. When an ACT command is transferred to the memory device, the ACT command may include or be followed by a quantity of address bits that indicate the bank targeted for an upcoming read or write operation, as well as the row within that bank that is to be activated. Thus, a transmitted ACT command may include or be followed by the bank and row addresses relevant to the upcoming memory operation. When an ACT command includes a row address, the ACT command may be referred to as a row command. An external controller may transmit an ACT command each time a new row in a bank is targeted for a read or write operation. In some cases, a rate of ACT commands may decrease as a page hit rate increases— e.g., when there are consecutive requests to access a same row.[0068] In an example of an access operation, a read command may indicate to a memory device that one or more memory cells are to undergo a read operation so that their stored information (e.g., as represented by logic states) can be transferred to an external controller. Reading a memory cell may refer to the process of applying a voltage across the memory cell so that the memory cell discharges onto a digit line for sensing. When a read command is transmitted to the memory device, the read command may include or be followed by the bank address and column address of the memory cell(s) targeted for the read operation. When a read command includes a column address, the read command may be referred to as a column command. In some cases, the read command may also indicate the quantity of memory cells that are to be read, starting at an initial address point. The quantity of memory cells to be read in response to a read command may be referred to as the read burst length.[0069] In another example of an access operation, a write command may indicate to a memory device that one or more memory cells are to undergo a write operation so that information from an external controller can be stored in one of the memory banks of a memory device. Writing a memory cell may refer to the process of applying a voltage across the memory cell so that the memory cell charges to a state indicative of a logic one or zero. When a write command is transmitted to the memory device, the write command may include or be followed by the bank address and column address of the memory cell(s) targeted for the write operation. When a write command includes a column address, the write command may be referred to as a column command. In some cases, the write command may also indicate the quantity of memory cells that are to be written, starting at an initial address point.
The quantity of memory cells to be written in response to a write command may be referred to as the write burst length.[0070] To communicate data and/or control information within the system 100, data and control information may be signaled over one or more channels, such as channel 115, that electronically connect devices within the memory system. In some cases, each of the one or more channels may include multiple signal paths (or transmission paths). Devices may access the one or more channels via pins located at the devices, where the pins may act as a conductive interface between a device and a channel. For example, to communicate information between two devices, such as an external memory controller 105 and a memory device 110, the signal paths may be coupled with a first set of pins located at the external memory controller 105 and a second set of pins located at the memory device 110. In some cases, a grouping of signal paths and pins may be referred to as a “bus.”[0071] In some cases, to signal data and/or control information between two devices, a first device may apply an internally generated signal to a first set of pins located at the first device. In some cases, the internally generated signal is composed of one or more voltages (e.g., multiple parallel and/or series voltages) and the first set of pins may be coupled with a set of signal paths of a channel that connects the two devices. In some examples, an external memory controller 105 may signal data and/or control information to a memory device 110 by applying one or more voltages of a signal to a first set of pins at the external memory controller 105 that are coupled with a channel 115 connecting the external memory controller 105 and the memory device 110. In some cases, all of the one or more voltages are applied at a same time within a single time period. In other cases, subsets of the one or more voltages are sequentially applied across multiple time periods. The memory device 110 may receive the one or more voltages of the signal at a second set of pins located at the memory device 110 and may decode the signal to determine the signaled data and/or control information. Similarly, the memory device 110 may signal data to the external memory controller 105 by applying one or more voltages to the second set of pins. [0072] In some cases, data and control information may be signaled over separate data and control channels that electronically connect devices within the memory system via pins located at the device. For example, data may be signaled over a channel (e.g., a DQ channel 190) that is dedicated to data (a “data channel”) and control information may be signaled over a channel (e.g., a C/A channel 186) that is dedicated to control information (a “control channel”). When data and control information is signaled over separate channels, a first set of signal paths of a data channel may be coupled with a first set of pins located at a first device and a first set of pins located at a second device. And a second set of signal paths of a control channel may be coupled with a second set of pins located at the first device and a second set of pins located at the second device.
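As a loose structural model of the separate buses just described (the class and pin names below are invented for illustration, not taken from the disclosure), each bus pairs a set of signal paths with the pin sets coupled at either end:

```python
from dataclasses import dataclass

@dataclass
class Bus:
    """A grouping of signal paths and the pins coupled at each end."""
    name: str
    first_device_pins: list[str]   # e.g., at an external memory controller
    second_device_pins: list[str]  # e.g., at a memory device

# A control channel and a data channel, each with its own pin sets.
control_bus = Bus("C/A", ["CP_1a", "CP_2a"], ["MDP_1a", "MDP_2a"])
data_bus = Bus("DQ", ["CP_1b", "CP_2b"], ["MDP_1b", "MDP_2b"])
```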
In some cases, when separate channels are used to signal data and control information, control information and data may be signaled according to a timing or protocol that indicates which control information corresponds to which data.[0073] In some cases, a quantity of pins and a quantity of signal paths used to signal control information may be based on the quantity of information bits used to convey a largest type of control information (e.g., the largest C/A combination) and/or a quantity of cycles used to transmit the different types of control information. In some cases, the quantity of pins and signal paths may be the same. In some examples, the quantity of pins and signal paths (or the size of a bus) used to signal control information may be reduced by decreasing a size of the largest type of control information (e.g., by encoding control information). In some examples, the quantity of pins and signal paths (or the size of a bus) used to signal control information may be reduced by increasing the quantity of cycles used to transmit the largest type of control information, or vice versa.[0074] In some cases, commands are signaled over a control channel in a serial fashion. That is, a first command may be sent, then a second command may be sent, then a third command, and so on. However, in some cases, a memory device may be capable of performing operations triggered by different commands in parallel— e.g., a memory device may perform a read operation for memory cells located at a first column address in a first memory bank at a same time as performing a row activation operation for memory cells located in a different memory bank. By processing two commands in parallel, a throughput of a memory device may be increased.[0075] In some cases, multiple control channels may be used to communicate parallel streams of control information. For example, a first channel may be used to communicate row commands (a “row control channel”) and a second channel may be used to communicate column commands (a “column control channel”). In such cases, the quantity of pins used to transmit row control information may be based on the size of the largest type of row control information (e.g., the largest row C/A combination) and/or a quantity of cycles used to transmit the different types of row control information. And the quantity of pins used to transmit column control information may be based on the size of the largest type of column control information (e.g., the largest column C/A combination) and/or a quantity of cycles for transmitting the different types of column control information.[0076] As the quantity of pins used to signal control information increases, the size (or footprint) of a memory die and/or the power consumption of a memory die may also increase. In some cases, to reduce the size and power consumption of a memory die, the quantity of pins used to signal control information may be decreased and the quantity of cycles used to transmit control information may be increased. But increasing the quantity of cycles used to transmit control information may cause a delay in the execution of commands.
That is, by increasing the quantity of cycles used to transmit control information, the transmission of discrete control information may take longer, delaying the transmission of subsequent control information.[0077] To reduce power consumption at a memory die without introducing latency into the execution of consecutive commands, the quantity of pins and/or cycles used to communicate control information may be dynamically adapted.[0078] In some cases, the quantity of pins and/or cycles used to communicate control information is adapted based on previously observed information. For example, an external memory controller (e.g., an external memory controller 105) may activate additional pins after identifying that a quantity of control information waiting to be sent (e.g., in a queue) has exceeded a threshold. After or concurrently with activating the additional pins, the external memory controller may also reduce a quantity of cycles used to transmit control information. By reducing the amount of time for transmitting individual pieces of control information, a backlog of control information in a queue may be reduced. Similarly, the external memory controller may deactivate certain pins after identifying that a quantity of control information waiting to be sent is below a threshold.[0079] In some cases, the quantity of pins and/or cycles used to communicate control information is adapted based on predicted information. For example, an external memory controller (e.g., an external memory controller 105) may deactivate a quantity of pins after identifying that a page hit rate for the memory device exceeds a threshold value. A page hit rate may be associated with a quantity of times a row of memory cells in a memory bank is accessed before another row of memory cells in the memory bank is accessed. Higher page hit rates may correspond to a pattern of commands that has a decreased rate of activation commands, which are often longer commands.[0080] In some cases, a specialized component is used to support the dynamic adaptation of the quantity of pins and/or cycles used to transmit control information. For example, a bus width configuration component may be implemented using at least portions of one or more memory controllers (e.g., external memory controller 105, device memory controller 155, local memory controllers 165 or 260, or a combination thereof) and/or a register. The bus width configuration component may be configured to determine a quantity of pins and/or cycles to use for subsequent transmissions of control information to a memory device— e.g., based on backward and/or forward-looking information. The bus width configuration component may be further configured to indicate to an encoder at the external memory controller the determined quantity of pins and/or cycles. For example, the bus width configuration component may be configured to signal a value corresponding to a particular quantity of pins to the encoder, and the encoder may generate a command based on the received value. The bus width configuration component may also indicate to a transmitter at the external memory controller the determined quantity of pins and/or cycles, and the transmitter may activate/deactivate particular drivers corresponding to particular pins at the external memory controller.[0081] Similarly, a second bus width configuration component may be included in a memory device. 
The second bus width configuration component may be configured to determine a quantity of pins and/or cycles to use for subsequent transmissions of control information to a memory device— e.g., based on backward and/or forward-looking information, a received configuration message, or determining which pins have been disabled. The second bus width configuration component may be further configured to indicate to a decoder at the memory device the determined quantity of pins and/or cycles. For example, the second bus width configuration component may be configured to signal a value corresponding to a particular quantity of pins to the decoder, and the decoder may decode a received signal based on the received value. The second bus width configuration component may also indicate to a receiver at the memory device the determined quantity of pins and/or cycles, and the receiver may activate/deactivate particular circuits that are coupled with particular pins at the memory device.[0082] FIG. 2 illustrates aspects of an exemplary memory device that supports configuring a C/A channel as disclosed herein. The memory device 200 may be an example of the memory dice 160 described with reference to FIG. 1. In some cases, the memory device 200 may be referred to as a memory chip, a memory device, or an electronic memory apparatus. The memory device 200 may include one or more memory cells 205 that are programmable to store different logic states. Each memory cell 205 may be programmable to store two or more states. For example, the memory cell 205 may be configured to store one bit of digital logic at a time (e.g., a logic 0 or a logic 1). In some cases, a single memory cell 205 (e.g., a multi-level memory cell) may be configured to store more than one bit of digital logic at a time (e.g., a logic 00, logic 01, logic 10, or a logic 11).[0083] A memory cell 205 may store a charge representative of the programmable states in a capacitor. In dynamic random access memory (DRAM) architectures, a memory cell, such as memory cell 205, may include a capacitor that includes a dielectric material to store a charge representative of the programmable state. In other memory architectures, other storage devices and components are possible. For example, nonlinear dielectric materials may be employed.[0084] Operations such as reading and writing may be performed on memory cells 205 by activating or selecting access lines such as a word line 210 and/or a digit line 215. In some cases, digit lines 215 may also be referred to as bit lines. References to access lines, word lines and digit lines, or their analogues, are interchangeable without loss of understanding or operation. Activating or selecting a word line 210 or a digit line 215 may include applying a voltage to the respective line.[0085] The memory device 200 may include the access lines (e.g., the word lines 210 and the digit lines 215) arranged in a grid-like pattern. Memory cells 205 may be positioned at intersections of the word lines 210 and the digit lines 215. By biasing a word line 210 and a digit line 215 (e.g., applying a voltage to the word line 210 or the digit line 215), a single memory cell 205 may be accessed at their intersection. The memory device 200 may include a quantity of memory banks, at least some of which, if not each of which, may have a unique address and which may include a multitude of rows and columns.
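A toy model of this grid-like arrangement (purely illustrative; the class below is not part of the disclosed memory device): biasing one word line and one digit line selects the single cell at their intersection.

```python
class ToyBank:
    """Memory cells addressed by the intersection of a word line (row)
    and a digit line (column)."""

    def __init__(self, rows: int, columns: int):
        self.cells = [[0] * columns for _ in range(rows)]  # stored logic states

    def access(self, word_line: int, digit_line: int) -> int:
        # Biasing one word line and one digit line accesses exactly one cell.
        return self.cells[word_line][digit_line]

bank = ToyBank(rows=16384, columns=128)
# Zero-based indices 0 and 2 correspond to the cell where the first word
# line meets the third digit line (WL_1 and DL_3 in the labeling above).
state = bank.access(word_line=0, digit_line=2)
```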
[0086] Accessing the memory cells 205 in a memory bank may be controlled through a row decoder 220 or a column decoder 225. For example, a row decoder 220 may receive a row address from the local memory controller 260 and activate a word line 210 based on the received row address. A column decoder 225 may receive a column address from the local memory controller 260 and may activate a digit line 215 based on the received column address. For example, the memory device 200 may include multiple word lines 210, labeled WL_1 through WL_M, and multiple digit lines 215, labeled DL_1 through DL_N, where M and N depend on the size of the memory array. Thus, by activating a word line 210 and a digit line 215, e.g., WL_1 and DL_3, the memory cell 205 at their intersection may be accessed. The intersection of a word line 210 and a digit line 215, in either a two-dimensional or three-dimensional configuration, may be referred to as an address of a memory cell 205.[0087] The memory cell 205 may include a logic storage component, such as capacitor 230 and a switching component 235. The capacitor 230 may be an example of a dielectric capacitor or a ferroelectric capacitor. A first node of the capacitor 230 may be coupled with the switching component 235 and a second node of the capacitor 230 may be coupled with a voltage source 240. In some cases, the voltage source 240 is a ground such as Vss. In some cases, the voltage source 240 may be an example of a plate line coupled with a plate line driver. The switching component 235 may be an example of a transistor or any other type of switch device that selectively establishes or de-establishes (e.g., ceases) electronic communication between two components.[0088] Selecting or deselecting the memory cell 205 may be accomplished by activating or deactivating the switching component 235. The capacitor 230 may be in electronic communication with the digit line 215 using the switching component 235. For example, the capacitor 230 may be isolated from digit line 215 when the switching component 235 is deactivated, and the capacitor 230 may be coupled with digit line 215 when the switching component 235 is activated. In some cases, the switching component 235 may be or include a transistor and its operation may be controlled by applying a voltage to the transistor gate, where the voltage differential between the transistor gate and transistor source may be greater or less than a threshold voltage of the transistor. In some cases, the switching component 235 may be or include a p-type transistor or an n-type transistor. The word line 210 may be in electronic communication with the gate of the switching component 235 and may activate/deactivate the switching component 235 based on a voltage being applied to word line 210. [0089] A word line 210 may be a conductive line in electronic communication with a memory cell 205 that may be used to perform access operations on the memory cell 205. In some architectures, the word line 210 may be in electronic communication with a gate of a switching component 235 of a memory cell 205 and may be configured to control the switching component 235 of the memory cell. In some architectures, the word line 210 may be in electronic communication with a node of the capacitor of the memory cell 205 and the memory cell 205 may not include a switching component.[0090] A digit line 215 may be a conductive line that connects the memory cell 205 with a sense component 245.
In some architectures, the memory cell 205 may be selectively coupled with the digit line 215 during portions of an access operation. For example, the word line 210 and the switching component 235 of the memory cell 205 may be configured to couple and/or isolate the capacitor 230 of the memory cell 205 and the digit line 215. In some architectures, the memory cell 205 may be in electronic communication with the digit line 215.[0091] The sense component 245 may be configured to detect a state (e.g., a charge) stored on the capacitor 230 of the memory cell 205 and determine a logic state of the memory cell 205 based on the stored state. The charge stored by a memory cell 205 may be small, in some cases. As such, the sense component 245 may include one or more sense amplifiers to amplify the signal output by the memory cell 205. The sense amplifiers may detect small changes in the charge of a digit line 215 during a read operation and may produce signals corresponding to a logic state 0 or a logic state 1 based on the detected charge.[0092] During a read operation, the capacitor 230 of memory cell 205 may output a signal (e.g., discharge a charge) to its corresponding digit line 215. The signal may cause a voltage of the digit line 215 to change. The sense component 245 may be configured to compare the signal received from the memory cell 205 across the digit line 215 to a reference signal 250 (e.g., reference voltage). The sense component 245 may determine the stored state of the memory cell 205 based on the comparison. For example, in binary signaling, if digit line 215 has a higher voltage than the reference signal 250, the sense component 245 may determine that the stored state of memory cell 205 is a logic 1 and, if the digit line 215 has a lower voltage than the reference signal 250, the sense component 245 may determine that the stored state of the memory cell 205 is a logic 0. [0093] The sense component 245 may include various transistors or amplifiers to detect and amplify a difference in the signals. In some cases, the sense component 245 may be part of another component (e.g., a column decoder 225, row decoder 220). In some cases, the sense component 245 may be in electronic communication with the row decoder 220 or the column decoder 225. [0094] The detected logic states of memory cells 205, as determined by the sense component 245 as one example, may be output through column decoder 225 as output 255. Output 255 may pass the detected logic states to one or more intermediary components (e.g., a local memory controller) for transfer over one or more channels (e.g., for transmission over one or more transmission lines). Thus, the detected logic state of memory cells 205 may be conveyed to devices or components external to memory device 200. For example, the detected logic states may be transferred (e.g., to an external memory controller 105) via one or more transmission lines.[0095] The local memory controller 260 may control the operation of memory cells 205 through the various components (e.g., row decoder 220, column decoder 225, and sense component 245). The local memory controller 260 may be an example of the local memory controller 165 described with reference to FIG. 1. In some cases, one or more of the row decoder 220, column decoder 225, and sense component 245 may be co-located with the local memory controller 260.
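The threshold decision the sense component 245 performs during a read, as described above, can be sketched as follows (the function name and voltages are invented for illustration):

```python
def sense(digit_line_voltage: float, reference_voltage: float) -> int:
    """Binary sensing: a digit line above the reference reads as a logic 1,
    and below it as a logic 0."""
    return 1 if digit_line_voltage > reference_voltage else 0

# Hypothetical voltages, for illustration only.
assert sense(digit_line_voltage=0.62, reference_voltage=0.50) == 1
assert sense(digit_line_voltage=0.41, reference_voltage=0.50) == 0
```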
The local memory controller 260 may be configured to receive commands and/or data from an external memory controller 105 (or a device memory controller 155 described with reference to FIG. 1), translate the commands and/or data into information that can be used by the memory device 200, perform one or more operations on the memory device 200, and communicate data from the memory device 200 to the external memory controller 105 (or the device memory controller 155) in response to performing the one or more operations. In some cases, the local memory controller 260 may be configured to receive data and control information over different channels. In some cases, local memory controller 260 may be configured to receive different types of commands over different channels. For example, the local memory controller 260 may be configured to receive row commands over a first command channel and column commands over a second command channel.[0096] As also discussed herein, the memory device 200 may receive commands of varying durations over a variable quantity of pins. In some cases, the local memory controller 260 may determine a quantity of pins for receiving one or more commands. For example, the local memory controller 260 may determine the quantity of pins used to signal the one or more commands based on determining that a second quantity of pins used for command transmissions is set to a particular voltage. In another example, the local memory controller 260 may determine the quantity of pins used to signal the one or more commands based on receiving an indication of which or how many pins are to be used. Additionally or alternatively, the local memory controller 260 may determine a quantity of cycles for receiving the one or more commands. In some cases, the local memory controller 260 may determine the quantity of cycles for receiving the one or more commands based on determining the quantity of pins used to transmit the one or more commands. For instance, the local memory controller 260 may be configured to determine that a particular command (e.g., a row activation command) spans a certain quantity of cycles (e.g., 2) based on first determining that the particular command was transmitted over six (6) pins. After determining the quantity of pins and/or the quantity of cycles, the local memory controller 260 may configure a receiver and/or decoder to receive and process commands accordingly.[0097] The memory device 200 may send data to and receive data from one or more external devices via a bus (e.g., a data bus) that includes multiple transmission lines. As described herein, the memory device 200 may use different combinations of the transmission lines to transfer control signals and data signals. The memory device 200 may modify the combinations of transmission lines (e.g., based on a quantity of content to be transferred) so that some transmission lines previously used or previously configured to transfer data may be used or configured to transfer control signals, and vice versa. The quantity of transmission lines selected to transfer a type of content (e.g., control content or data content) may be related to (e.g., proportional to) the quantity of content.[0098] The local memory controller 260 may generate row and column address signals to activate the target word line 210 and the target digit line 215. The local memory controller 260 may also generate and control various voltages or currents used during the operation of the memory device 200.
In general, the amplitude, shape, or duration of an applied voltage or current discussed herein may be adjusted or varied and may be different for the various operations discussed in operating the memory device 200.[0099] In some cases, the local memory controller 260 may be configured to perform a write operation (e.g., a programming operation) on one or more memory cells 205 of the memory device 200. The write operation may be for data received from an external device. During a write operation, a memory cell 205 of the memory device 200 may be programmed to store a desired logic state. In some cases, a plurality of memory cells 205 may be programmed during a single write operation. The local memory controller 260 may identify a target memory cell 205 on which to perform the write operation. The local memory controller 260 may identify a target word line 210 and a target digit line 215 in electronic communication with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 260 may activate the target word line 210 and the target digit line 215 (e.g., applying a voltage to the word line 210 or digit line 215) to access the target memory cell 205. The local memory controller 260 may apply a first signal (e.g., voltage) to the digit line 215 during the write operation to store a first state (e.g., charge) in the capacitor 230 of the memory cell 205, and the first state (e.g., charge) may be indicative of a desired logic state.[0100] In some cases, the local memory controller 260 may be configured to perform a read operation (e.g., a sense operation) on one or more memory cells 205 of the memory device 200. The read operation may be for data requested by, or intended for, an external device. During a read operation, the logic state stored in a memory cell 205 of the memory device 200 may be determined. In some cases, a plurality of memory cells 205 may be sensed during a single read operation. The local memory controller 260 may identify a target memory cell 205 on which to perform the read operation. The local memory controller 260 may identify a target word line 210 and a target digit line 215 in electronic communication with the target memory cell 205 (e.g., the address of the target memory cell 205). The local memory controller 260 may activate the target word line 210 and the target digit line 215 (e.g., applying a voltage to the word line 210 or digit line 215) to access the target memory cell 205.[0101] The target memory cell 205 may transfer a signal to the sense component 245 in response to biasing the access lines. The sense component 245 may amplify the signal. The local memory controller 260 may activate the sense component 245 (e.g., latch the sense component) and thereby compare the signal received from the memory cell 205 to the reference signal 250. Based on that comparison, the sense component 245 may determine a logic state that is stored on the memory cell 205. The local memory controller 260 may communicate the logic state stored on the memory cell 205 to the external memory controller 105 (or the device memory controller 155) as part of the read operation. [0102] In some memory architectures, accessing the memory cell 205 may degrade or destroy the logic state stored in a memory cell 205. For example, a read operation performed in DRAM architectures may partially or completely discharge the capacitor of the target memory cell.
The local memory controller 260 may perform a re-write operation or a refresh operation to return the memory cell to its original logic state. The local memory controller 260 may re-write the logic state to the target memory cell after a read operation. In some cases, the re-write operation may be considered part of the read operation. Additionally, activating a single access line, such as a word line 210, may disturb the state stored in some memory cells in electronic communication with that access line. Thus, a re-write operation or refresh operation may be performed on one or more memory cells that may not have been accessed.[0103] FIG. 3 illustrates aspects of an exemplary system that supports configuring a C/A channel in accordance with various aspects of the present disclosure.[0104] System 300 may be an example of a system 100 described with reference to FIG. 1. System 300 may include controller 305, memory device 310, and transmission lines 315. In some cases, controller 305 may be an example of an external memory controller 105 (also referred to as a host device or a host device controller) or a processor 120 (e.g., a GPU, a GPGPU, a CPU), as described with reference to FIG. 1. In some cases, memory device 310 may be an example of a memory device 110, memory die 160, device memory controller 155, local memory controller 165, or memory device 200, as described with reference to FIGs. 1 and 2.[0105] Controller 305 may be configured to determine a quantity of pins and/or a quantity of cycles for transmitting one or more commands to a memory device, such as memory device 310. Controller 305 may include controller transceiver 320.[0106] Controller transceiver 320 may be configured to transmit command, address, and data signaling to other devices, such as memory device 310. In some cases, controller transceiver 320 may transmit C/A signaling based on the quantity of pins and cycles for transmitting the command determined by controller 305. In some cases, controller transceiver 320 uses multi-level signaling techniques, such as PAM4 signaling, and/or other signaling techniques to increase a communication rate, such as DDR signaling— when DDR signaling is used, first information may be signaled at a rising edge of a clock pulse and second information may be signaled at a falling edge of the clock pulse. Controller transceiver 320 may include and/or be in electronic communication with controller pins 335.[0107] Controller pins 335 may be configured to provide an interface between the transmitting and receiving components of controller 305 and transmission lines 315. In some cases, pins located at controller 305 (e.g., controller pin 1a (“CP_1a”)) may correspond to pins located at memory device 310 (e.g., memory device pin 1a (“MDP_1a”)). In some cases, controller pins 335 may be referred to as nodes, pads, terminals, transmission line interfaces, interface components, or connection points. In some cases, the controller pins 335 may be made of a conductive material that is capable of transferring current or voltage to and from the transmission lines 315 and controller 305.[0108] Controller transceiver 320 may transmit command, address, and data signaling using controller pins 335. That is, controller transceiver 320 may apply aspects of a signal to controller pins 335 to transmit the signal to memory device 310. In some cases, controller transceiver 320 transmits command, address, and data signaling over different busses.
For example, controller transceiver 320 may transmit C/A signaling over the controller pins 335 included in C/A bus 325 and may transmit data signaling over the controller pins 335 included in data bus 330. In some cases, the C/A signaling transmitted over C/A bus 325 includes information that enables memory device 310 to process corresponding data signaling received over data bus 330. For example, C/A signaling transmitted over C/A bus 325 may indicate that information included in corresponding data signaling transmitted over data bus 330 is to be stored at (e.g., if a write command is signaled) or read from (e.g., if a read command is signaled) particular memory cell(s). In some cases, controller transceiver 320 may include an encoder that is configured to generate a set of information bits for a particular command. In some cases, controller transceiver 320 may be configured to receive command, address, and/or data signaling from other devices, such as memory device 310.[0109] Memory device 310 may store information in the memory cells of one or more memory banks (e.g., in memory banks 0 through x). The memory cells may be arranged in the memory banks in sets of rows and columns as described with reference to FIG. 2. Partitioning the memory array into one or more banks allows some level of parallelism in accessing the memory array, which may increase the overall bandwidth of the memory device. Thus, to target a particular memory cell for a memory operation, memory device 310 may need to identify or be directed to the bank, column, and row of the memory cell at issue. The bank, column, and row associated with the memory cell may be indicated or represented by addresses. For example, the bank that includes the memory cell may be associated with a bank address, the column that includes the memory cell may be associated with a column address, and the row that includes the memory cell may be indicated by the row address. Memory device 310 may be configured to determine a quantity of pins and a quantity of cycles for receiving one or more commands from a host device, such as controller 305. Memory device 310 may include a memory device transceiver 340.[0110] Memory device transceiver 340 may be configured to receive command, address, and data signaling from another device, such as controller 305. In some cases, memory device transceiver 340 may receive C/A signaling based on the quantity of pins and cycles determined by memory device 310. In some cases, memory device transceiver 340 may be configured to receive signaling according to a particular modulation scheme (e.g., NRZ, PAM2, or PAM4) and/or using single data rate (SDR) or DDR techniques. Memory device transceiver 340 may include and/or be in electronic communication with device pins 345. In some cases, memory device transceiver 340 may be configured to transmit command, address, and/or data signaling to other devices, such as controller 305. In some cases, C/A bus 325 may be configured to support transmissions of command and address information to memory device 310 from controller 305, but not to support transmissions of command and address information to controller 305 from memory device 310 (e.g., C/A bus 325 may be unidirectional). In other cases, C/A bus 325 may support bidirectional communications between controller 305 and memory device 310.[0111] Device pins 345 may be configured to provide an interface between the transmitting and receiving components of memory device 310 and transmission lines 315.
In some cases, pins located at memory device 310 (e.g., device pin 1a (“MDP_1a”)) may correspond to pins located at controller 305 (e.g., controller pin 1a (“CP_1a”)). In some cases, device pins 345 may be referred to as nodes, pads, terminals, transmission line interfaces, interface components, or connection points. In some cases, the device pins 345 may be made of a conductive material that is capable of transferring current or voltage to and from the transmission lines 315 and memory device 310.[0112] Memory device transceiver 340 may receive command, address, and data signaling using device pins 345. That is, memory device transceiver 340 may receive aspects of a signal over device pins 345 and may combine the aspects of the signal to reconstruct the transmitted signal. For example, memory device transceiver 340 may receive C/A signaling over the device pins 345 included in C/A bus 325 and data signaling over the device pins 345 included in data bus 330. In some cases, the C/A signaling received over C/A bus 325 provides information for receiving corresponding data signaling over data bus 330. For example, C/A signaling received over C/A bus 325 may direct the memory device 310 to store (e.g., if a write command is received) information included in a corresponding data signaling received over data bus 330 at particular memory cell(s)— e.g., based on the address received in the C/A signaling. In another example, C/A signaling received over C/A bus 325 may direct the memory device 310 to read data from (e.g., if a read command is signaled) particular memory cell(s)— e.g., based on the command and address received in the C/A signaling. In some cases, memory device transceiver 340 may include a decoder that is configured to identify a particular command corresponding to a command, address, or data signal received over device pins 345.[0113] Transmission lines 315 may be configured to electronically connect controller 305 and memory device 310. As shown in system 300, transmission lines 315 may originate at one component (e.g., controller 305) and terminate at another component (e.g., memory device 310) which may or may not be within the same device. Transmission lines 315 may be conductive wires or traces. In some cases, transmission lines 315 provide a one-to-one mapping between controller pins 335 and device pins 345. In some cases, transmission lines 315 are included in a channel, such as a channel 115 as described with reference to FIG. 1. For example, transmission lines 1a through M may be included in a control channel (e.g., C/A channel 186) and transmission lines 1b to N may be included in a data channel (e.g., DQ channel 190-1).[0114] C/A bus 325 may be configured to communicate C/A signaling between controller 305 and memory device 310. In some cases, C/A bus 325 includes a subset of controller pins 335 (e.g., CP_1a to CP_M), a subset of device pins 345 (e.g., MDP_1a to MDP_M), and a subset of transmission lines 315 (e.g., TL_1a to TL_M). In other cases, C/A bus 325 is defined to include the subset of transmission lines 315 and is equivalent to a channel, such as a channel 115 as described with reference to FIG. 1. In some cases, C/A bus 325 is further partitioned into a row C/A bus and a column C/A bus. The row C/A bus may be configured to communicate row commands and row addresses to memory device 310 and the column C/A bus may be configured to communicate column commands and column addresses to memory device 310.
In some examples, more bits of information are used to convey row commands than column commands— e.g., when a memory array is configured so that there are more rows than columns— and the row C/A bus may include more pins than the column C/A bus.[0115] Data bus 330 may be configured to communicate data signaling between controller 305 and memory device 310. In some cases, data bus 330 includes a subset of controller pins 335 (e.g., CP_1b to CP_N), a subset of device pins 345 (e.g., MDP_1b to MDP_N), and a subset of transmission lines 315 (e.g., TL_1b to TL_N). In some cases, data signals communicated over data bus 330 correspond to C/A signals transmitted over C/A bus 325— e.g., instructions for processing data information conveyed in a data signal may be included in a prior C/A signal.[0116] In some cases, a configuration of C/A bus 325 may be dynamically configured— e.g., based on past, current, or predicted operation of the system 300, or combinations thereof. For example, controller 305 may disable particular controller pins 335 based on determining that a rate of ACT command transmissions is below a threshold. In another example, controller 305 may enable additional controller pins 335 based on identifying that a quantity of commands in a command queue has exceeded a threshold. In another example, controller 305 may enable additional controller pins 335 based on predicting that a rate of ACT command transmissions will increase. By dynamically adapting a configuration of C/A bus 325, controller 305 may conserve power without sacrificing the timely execution of commands. In some examples, controller 305 indicates to memory device 310 a configuration of C/A bus 325. For example, controller 305 may transmit a bus width configuration message to memory device 310 indicating which transmission lines 315 of C/A bus 325 are being used to convey information.[0117] Memory device 310 may receive control information over the dynamically adapted C/A bus 325. In some cases, memory device 310 determines a configuration of C/A bus 325 based on a signal transmitted over C/A bus 325, via either an implicit or explicit message— for example, by identifying that a voltage on certain transmission lines 315 is set to a high or low voltage. In other cases, memory device 310 may determine a configuration of C/A bus 325 based on a message received from controller 305, as discussed herein. For example, memory device 310 may receive the configuration message and disable the device pins 345 based on the indicated configuration of C/A bus 325— e.g., memory device 310 may determine that transmission lines 3a to M are disabled based on the bus width configuration message and may disable device pins 3a to M. By disabling the device pins 345, memory device 310 may conserve power.[0118] In some cases, a quantity of cycles used to communicate C/A information is also configured based on a configuration of C/A bus 325. That is, as the quantity of pins used to communicate C/A information is adapted, a quantity of cycles used to communicate C/A information over the pins may be increased or decreased. For example, if controller 305 decreases the quantity of pins used to transmit C/A information, then a quantity of cycles used to transmit C/A information may be increased— e.g., so that the full command may be transmitted— as discussed in more detail herein and with respect to FIGs.
6A through 6C. The quantity of cycles used to receive C/A information at memory device 310 may be similarly adapted as the configuration of C/A bus 325 is adapted.[0119] In some cases, instead of disabling pins dedicated to C/A bus 325, the unused controller and device pins dedicated to C/A bus 325 may be repurposed for communicating data information and included in data bus 330. By reassigning pins dedicated to C/A bus 325 to be used for data bus 330, the rate of data transfer between memory device 310 and controller 305 may be increased.[0120] FIG. 4A illustrates aspects of an exemplary host device that supports configuring a C/A channel in accordance with various aspects of the present disclosure.[0121] Host device 400-a may be configured to dynamically configure aspects of a C/A channel for communicating control and address information with other devices. Host device 400-a may also be configured to dynamically configure aspects of a data channel for communicating data with other devices. In some cases, host device 400-a may be an example of an external memory controller 105 or controller 305, as described with reference to FIGs. 1 and 3. Host device 400-a may include command encoder 405-a, bus width configuration component 410-a, driver component 415-a, first pin 420-a, and Mth pin 425-a. Host device 400-a may be coupled with and communicate with a memory device (e.g., device controller 500-a as described with reference to FIG. 5A) over C/A channel 430-a, which may be an example of a channel 115, as described with reference to FIG. 1.[0122] In some cases, a transmitter, such as controller transceiver 320 or memory device transceiver 340 as described with reference to FIG. 3, may include any one of command encoder 405-a, bus width configuration component 410-a, driver component 415-a, and first pin 420-a to Mth pin 425-a, or any combination thereof. [0123] Command encoder 405-a may be configured to generate an encoded set of information bits for a received command and memory address according to a configured modulation and encoding scheme. For example, if command encoder 405-a receives a signal representative of a row activation command and a row address, command encoder 405-a may generate one (1) command bit, four (4) bank address bits, and sixteen (16) row address bits— 21 total bits. In another example, if command encoder 405-a receives a signal representative of a read command and a column address, command encoder 405-a may generate three (3) command bits, four (4) bank address bits, and six (6) column address bits— 13 total bits. Command encoder 405-a may signal an encoded set of information bits to driver component 415-a.[0124] In some cases, command encoder 405-a may be further configured to generate the encoded set of bits based on a configured bus width. That is, command encoder 405-a may be configured to encode a command and memory address based on a quantity of enabled drivers/pins located at host device 400-a. For example, if command encoder 405-a receives a signal representative of a row activation command and a row address, eleven (11) pins are enabled, and a binary modulation scheme and double data rate signaling are used, command encoder 405-a may generate and signal eleven (11) of the 21 information bits before a rising edge of a first signaling period and the remaining ten (10) of the 21 information bits before a falling edge of the first signaling period.
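The slicing in this example follows a simple rule: with a binary scheme and double data rate signaling, each enabled pin carries one bit per clock edge and two bits per cycle. A Python sketch of that arithmetic (a hypothetical model, not the disclosed encoder):

```python
import math

def cycles_needed(total_bits: int, pins: int, edges_per_cycle: int = 2) -> int:
    """Cycles needed to move `total_bits` over `pins` pins, at one bit per
    pin per clock edge (DDR signaling has two edges per cycle)."""
    return math.ceil(total_bits / (pins * edges_per_cycle))

def edge_slices(total_bits: int, pins: int) -> list[int]:
    """How many bits go out on each successive clock edge."""
    slices = []
    while total_bits > 0:
        slices.append(min(pins, total_bits))
        total_bits -= pins
    return slices

assert cycles_needed(21, pins=11) == 1       # eleven pins: a 21-bit command in one cycle
assert edge_slices(21, pins=11) == [11, 10]  # 11 bits on the rising edge, 10 on the falling
```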
In another example, if command encoder 405-a receives a signal representative of a row activation command and a row address, six (6) pins are enabled, and a binary modulation scheme and double data rate signaling are used, command encoder 405-a may generate and signal the beginning six (6) of the 21 information bits before a rising edge of a first signaling period, the next six (6) of the 21 information bits before a falling edge of the first signaling period, the following six (6) of the 21 information bits before a rising edge of a second signaling period, and the last three (3) of the 21 information bits before a falling edge of the second signaling period.[0125] Bus width configuration component 410-a may be configured to dynamically determine a quantity of pins and/or cycles to use for transmitting a C/A signal over C/A channel 430-a. For example, bus width configuration component 410-a may determine a quantity of pins and/or cycles to use for transmitting a command based on a quantity of commands waiting in a queue to be transmitted from host device 400-a— e.g., bus width configuration component 410-a may determine that an increased quantity of pins (e.g., all available pins) and a decreased quantity of cycles (e.g., one (1) cycle) should be used to transmit commands if the quantity of commands in the queue exceeds a threshold value. In another example, bus width configuration component 410-a may determine a quantity of pins and/or cycles to use for transmitting commands based on the type of commands waiting in a queue to be transmitted from host device 400-a— e.g., bus width configuration component 410-a may determine that an increased quantity of pins (e.g., all available pins) and a decreased quantity of cycles (e.g., one (1) cycle) should be used to transmit commands if a quantity of row commands included in the queue exceeds a threshold value. In another example, bus width configuration component 410-a may determine a quantity of pins and/or cycles to use for transmitting commands based on an expected or predicted hit rate for a memory bank— e.g., bus width configuration component 410-a may determine that a decreased quantity of pins (e.g., six (6) pins) and an increased quantity of cycles (e.g., two (2) cycles) should be used if a predicted hit rate is greater than or equal to two.[0126] Bus width configuration component 410-a may be further configured to store a value corresponding to the determined quantity of pins and/or cycles for transmitting the C/A signal. In some cases, bus width configuration component 410-a stores the value in a bus width configuration register that is included within or external to bus width configuration component 410-a. For example, bus width configuration component 410-a may store a first value (e.g., “00”) corresponding to a first quantity of enabled pins (e.g., eleven (11) pins) and a first quantity of cycles (e.g., one (1) cycle), a second value (e.g., “01”) corresponding to a second quantity of enabled pins (e.g., six (6) pins) and/or a second quantity of cycles (e.g., two (2) cycles), and so on. In some cases, aspects of bus width configuration component 410-a are implemented in different areas of host device 400-a. For example, a component of bus width configuration component 410-a used to determine the quantity of pins and/or cycles may be located in a processing portion of host device 400-a and the bus width configuration register may be located elsewhere (e.g., within a transceiver).
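One way to express the kind of selection policy just described for bus width configuration component 410-a (the threshold values and returned configurations below are invented for illustration):

```python
def choose_bus_config(queued_commands: int, queued_row_commands: int,
                      predicted_hit_rate: float) -> tuple[int, int]:
    """Return a (pins, cycles) configuration for upcoming C/A transmissions.

    Illustrative policy: widen the bus when traffic is backed up or
    row-command heavy; narrow it (saving power) when a high predicted
    page hit rate suggests few row activation commands.
    """
    COMMAND_QUEUE_THRESHOLD = 8   # hypothetical threshold values
    ROW_COMMAND_THRESHOLD = 4
    HIT_RATE_THRESHOLD = 2.0

    if (queued_commands > COMMAND_QUEUE_THRESHOLD
            or queued_row_commands > ROW_COMMAND_THRESHOLD):
        return (11, 1)            # all available pins, one cycle
    if predicted_hit_rate >= HIT_RATE_THRESHOLD:
        return (6, 2)             # fewer pins, more cycles
    return (11, 1)

assert choose_bus_config(12, 0, 1.0) == (11, 1)  # deep queue: widen
assert choose_bus_config(2, 0, 3.0) == (6, 2)    # high hit rate: narrow
```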
In some cases, a bus width configuration component 410-a may be implemented using at least portions of one or more memory controllers (e.g., external memory controller 105, device memory controller 155, local memory controllers 165 or 260, or a combination thereof) and/or a register. Also, the bus width configuration register may be in direct electronic communication with command encoder 405-a and driver component 415-a, while a component of bus width configuration component 410-a used to determine the quantity of pins and/or cycles may be in indirect electronic communication with command encoder 405-a and driver component 415-a. [0127] Bus width configuration component 410-a may be further configured to program an encoding scheme of command encoder 405-a by causing the bus width configuration register to indicate the stored value to command encoder 405-a. For example, bus width configuration component 410-a may indicate to command encoder 405-a a value stored in the bus width configuration register that corresponds to a pin configuration, and command encoder 405-a may use an encoding scheme corresponding to the indicated value/pin configuration. In some cases, command encoder 405-a may generate a bus width configuration command based on receiving an indication of the bus width from bus width configuration component 410-a and transmit the bus width configuration command over C/A channel 430-a.[0128] Bus width configuration component 410-a may be further configured to activate/deactivate one or more drivers of driver component 415-a by indicating the value stored in the bus width configuration register to driver component 415-a. For example, bus width configuration component 410-a may indicate to driver component 415-a a value stored in the bus width configuration register that corresponds to a pin configuration, and driver component 415-a may enable/disable one or more drivers corresponding to the indicated value and/or pin configuration.[0129] Driver component 415-a may be configured to generate a signal based on a received set of information bits. In some cases, driver component 415-a generates a signal based on an encoded set of information bits received from command encoder 405-a. In some examples, driver component 415-a may be configured to generate a signal according to a configured modulation scheme (e.g., NRZ, PAM2, or PAM4) and/or signaling scheme (e.g., SDR or DDR signaling). Driver component 415-a may be in electronic communication with first pin 420-a to Mth pin 425-a.[0130] First pin 420-a to Mth pin 425-a may be configured to provide a signaling interface between host device 400-a and C/A channel 430-a. In some cases, the quantity of pins (M) included in host device 400-a is selected to accommodate the transmission of the largest C/A combination within a single cycle. For example, if 21 bits are used to convey the largest C/A combination and a binary modulation scheme and double data rate signaling are used, then M may be equal to eleven (11) pins.[0131] FIG. 4B illustrates aspects of an exemplary host device that supports configuring a C/A channel in accordance with various aspects of the present disclosure. [0132] Host device 400-b may be an example of host device 400-a, as described with reference to FIG. 4A. Host device 400-b may include bus width configuration component 410-b and driver component 415-b, which may be examples of bus width configuration component 410-a and driver component 415-a, as described with reference to FIG. 4A.
Host device 400-b may also include row command encoder 435-b, column command encoder 440-b, first pin 445-b, Nth pin 450-b, second pin 455-b, and Mth pin 460-b. Host device 400-b may be coupled with and communicate over row C/A channel 465-b and column C/A channel 470-b, which may be examples of channels 115, as described with reference to FIG. 1.[0133] In some cases, a transmitter, such as controller transceiver 320 as described with reference to FIG. 3, may include any one of bus width configuration component 410-b, driver component 415-b, row command encoder 435-b, column command encoder 440-b, first pin 445-b to Nth pin 450-b, and second pin 455-b to Mth pin 460-b, or any combination thereof.[0134] Row command encoder 435-b may be configured to generate an encoded set of information bits for a received row command and row address according to a configured encoding scheme. For example, if row command encoder 435-b receives a signal representative of a row activation command and a row address, row command encoder 435-b may generate one (1) command bit, four (4) bank address bits, and sixteen (16) row address bits— 21 total bits. Row command encoder 435-b may signal an encoded set of information bits to driver component 415-b. In some cases, row command encoder 435-b may be further configured to generate the encoded set of bits based on a configured row bus width, similar to command encoder 405-a as described with reference to FIG. 4A. In some cases, the grouping of row C/A channel 465-b and first pin 445-b to Nth pin 450-b may be referred to as a row bus.[0135] Column command encoder 440-b may be configured to generate an encoded set of information bits for a received column command and column address according to a configured encoding scheme. For example, if column command encoder 440-b receives a signal representative of a read command and a column address, column command encoder 440-b may generate three (3) command bits, four (4) bank address bits, and six (6) column address bits— 13 total bits. Column command encoder 440-b may signal an encoded set of information bits to driver component 415-b. In some cases, column command encoder 440-b may be further configured to generate the encoded set of bits based on a configured column bus width, similar to command encoder 405-a as described with reference to FIG. 4A. In some cases, the grouping of column C/A channel 470-b and second pin 455-b to Mth pin 460-b may be referred to as a column bus.[0136] Bus width configuration component 410-b may be configured to dynamically determine a quantity of pins and/or cycles to use for transmitting a row C/A signal over row C/A channel 465-b. For example, bus width configuration component 410-b may determine a quantity of pins and/or cycles to use for transmitting a row command based on a quantity of row commands waiting to be transmitted, a type of row commands waiting to be transmitted, and/or based on a predicted page hit rate.
Bus width configuration component 410-b may be similarly configured to dynamically determine a quantity of pins and/or cycles to use for transmitting a column C/A signal over column C/A channel 470-b. [0137] Bus width configuration component 410-b may be further configured to store one or more values corresponding to the determined quantity of pins and/or cycles for transmitting over row C/A channel 465-b and column C/A channel 470-b— e.g., in a bus width configuration register that is included within or external to bus width configuration component 410-b. [0138] Bus width configuration component 410-b may indicate, or cause the bus width configuration register to indicate, the one or more stored values to row command encoder 435-b and column command encoder 440-b, and row command encoder 435-b and column command encoder 440-b may encode commands based on the received value. Bus width configuration component 410-b may indicate, or cause the bus width configuration register to indicate, the one or more stored values to driver component 415-b, and driver component 415-b may activate/deactivate particular drivers based on the received value. In some cases, row command encoder 435-b and/or column command encoder 440-b may generate a bus width configuration command based on receiving an indication of the bus width from bus width configuration component 410-b and transmit the bus width configuration command over row C/A channel 465-b or column C/A channel 470-b, respectively. [0139] Driver component 415-b may be configured to generate parallel row and column C/A signals based on a pin configuration determined by bus width configuration component 410-b. Driver component 415-b may transmit row C/A signals using one or more of first pin 445-b to Nth pin 450-b and may transmit column C/A signals using one or more of second pin 455-b to Mth pin 460-b. In some examples, driver component 415-b may be configured to transmit row C/A signals using one or more of first pin 445-b to Nth pin 450-b and one or more of second pin 455-b to Mth pin 460-b. And driver component 415-b may be configured to transmit column C/A signals using one or more of first pin 445-b to Nth pin 450-b and one or more of second pin 455-b to Mth pin 460-b. By transmitting row C/A signals over drivers that are connected to one or more of second pin 455-b to Mth pin 460-b, and vice versa, the quantity of pins used to convey row and column C/A signaling may be reduced. [0140] First pin 445-b to Nth pin 450-b may be configured to provide a signaling interface between host device 400-b and row C/A channel 465-b. In some cases, the quantity of pins (N) included in host device 400-b is selected to accommodate the transmission of the largest row C/A combination within a single cycle. Second pin 455-b to Mth pin 460-b may be configured to provide a signaling interface between host device 400-b and column C/A channel 470-b. In some cases, the quantity of pins (M) included in host device 400-b is selected to accommodate the transmission of the largest column C/A combination within a single cycle. [0141] FIG. 5A illustrates aspects of an exemplary device controller that supports configuring a C/A channel in accordance with various aspects of the present disclosure. [0142] Device controller 500-a may be configured to receive control and address information over a dynamically configurable C/A channel. Device controller 500-a may be configured to receive data information over a dynamically configurable data channel.
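On the receive side, the same bus-width arithmetic determines how many cycles a command spans. A hedged Python sketch of that relationship (a hypothetical helper, not the disclosed decoder) follows; it mirrors the transmit-side computation:

    import math

    def cycles_for_command(n_bits: int, enabled_pins: int, ddr: bool = True) -> int:
        # With DDR signaling each enabled pin carries two bits per cycle,
        # so a command of n_bits spans ceil(n_bits / (2 * pins)) cycles.
        bits_per_cycle = enabled_pins * (2 if ddr else 1)
        return math.ceil(n_bits / bits_per_cycle)

    # With six enabled pins and DDR, a 21-bit row activation spans two
    # cycles, matching the decoding example given below for command decoder 505-a.
    assert cycles_for_command(21, enabled_pins=6) == 2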
In some cases, device controller 500-a may be an example of a device memory controller 155, a local memory controller 165, a local memory controller 260, or a memory device 310, as described with reference to FIGs. 1 through 3. Device controller 500-a may include command decoder 505-a, bus width configuration component 510-a, driver component 515-a, and first pin 520-a to Mth pin 525-a. Device controller 500-a may be coupled with and communicate with an external controller (e.g., host device 400-a as described with reference to FIG. 4A) over C/A channel 530-a. [0143] In some cases, a receiver, such as memory device transceiver 340 as described with reference to FIG. 3, may include any one of command decoder 505-a, bus width configuration component 510-a, driver component 515-a, and first pin 520-a to Mth pin 525-a, or any combination thereof. [0144] Command decoder 505-a may be configured to decode a received signal according to a configured modulation and encoding scheme. Command decoder 505-a may be further configured to obtain a command and memory address based on a configuration of C/A channel 530-a. In some cases, command decoder 505-a may decode an encoded signal received from driver component 515-a to obtain a command and memory address represented by a signal transmitted over C/A channel 530-a. In some cases, command decoder 505-a decodes a received signal based on a configured bus width. That is, command decoder 505-a may be configured to decode a command and memory address based on a determined configuration of C/A channel 530-a. For example, if a subset of signal paths included in C/A channel 530-a are used to convey a C/A signal, then command decoder 505-a may be configured to decode a received C/A signal using a corresponding subset of drivers and over a determined quantity of cycles. [0145] Bus width configuration component 510-a may be configured to determine a quantity of pins and/or cycles to use for receiving a C/A signal over a dynamically configured C/A channel 530-a. For example, bus width configuration component 510-a may determine a quantity of pins and/or cycles to use for receiving a C/A signal based on a received signal— e.g., bus width configuration component 510-a may determine that certain pins are disabled based on a voltage pattern across the signal paths of C/A channel 530-a. In another example, bus width configuration component 510-a may determine a quantity of pins and/or cycles to use for receiving a C/A signal based on a received bus width configuration command. [0146] Bus width configuration component 510-a may be further configured to store a value corresponding to the determined quantity of pins and/or cycles for receiving a C/A signal— e.g., in a bus width configuration register that is included within or external to bus width configuration component 510-a. Bus width configuration component 510-a may be further configured to program the command decoder 505-a by indicating, or causing the bus width configuration register to indicate, the stored value to command decoder 505-a.
For example, bus width configuration component 510-a may indicate, or cause the bus width configuration register to indicate, to command decoder 505-a a stored value corresponding to a pin configuration, and command decoder 505-a may use a decoding scheme corresponding to the indicated value/pin configuration— e.g., if a binary modulation scheme and DDR signaling are used and bus width configuration component 510-a indicates to command decoder 505-a that six (6) pins are enabled, then command decoder 505-a may determine that received row activation commands are spread across two cycles. [0147] Bus width configuration component 510-a may be similarly configured to program the driver component 515-a by indicating, or causing the bus width configuration register to indicate, the stored value to driver component 515-a. For example, bus width configuration component 510-a may indicate to driver component 515-a a stored value corresponding to a pin configuration, and driver component 515-a may activate/deactivate drivers according to the indicated value/pin configuration— e.g., if the pin configuration indicates that pins 1 and 2 should be active and pins 3, 4, and M should be inactive, then driver component 515-a may disable the corresponding drivers 3, 4, and M. [0148] Driver component 515-a may be configured to receive a signal and output one or more voltages representing a binary value. That is, if a voltage of a signal received at a driver of driver component 515-a is at or near a voltage level that is representative of a binary value, the driver may output that voltage level to command decoder 505-a. In some cases, driver component 515-a outputs voltages using only the activated drivers in driver component 515-a. [0149] First pin 520-a to Mth pin 525-a may be configured to provide a signaling interface between device controller 500-a and C/A channel 530-a. In some cases, the quantity of pins (M) included in device controller 500-a is selected to accommodate the transmission of the largest C/A combination within a single cycle. For example, if 21 bits are used to convey the largest C/A combination and a binary modulation scheme and double data rate signaling are used, then M may be equal to eleven (11) pins. [0150] FIG. 5B illustrates aspects of an exemplary device controller that supports configuring a C/A channel in accordance with various aspects of the present disclosure. [0151] Device controller 500-b may be an example of device controller 500-a, as described with reference to FIG. 5A. Device controller 500-b may include bus width configuration component 510-b and driver component 515-b, which may be examples of bus width configuration component 510-a and driver component 515-a, as described with reference to FIG. 5A. Device controller 500-b may also include row command decoder 535-b, column command decoder 540-b, first pin 545-b, Nth pin 550-b, second pin 555-b, and Mth pin 560-b. Device controller 500-b may be coupled with and communicate over row C/A channel 565-b and column C/A channel 570-b, which may be examples of channels 115, as described with reference to FIG. 1. [0152] In some cases, a receiver, such as memory device transceiver 340 as described with reference to FIG.
3, may include any one of bus width configuration component 510-b, driver component 515-b, row command decoder 535-b, column command decoder 540-b, first pin 545-b to Nth pin 550-b, and second pin 555-b to Mth pin 560-b, or any combination thereof. [0153] Row command decoder 535-b may be configured to decode a received signal to identify a row command and row address conveyed in the signal. Row command decoder 535-b may be further configured to identify the row command and row address based on a configuration of row C/A channel 565-b. For example, row command decoder 535-b may decode a row command and row address over multiple clock cycles based on a quantity of pins used to convey a C/A signal. In some cases, row command decoder 535-b decodes a received signal based on a value received from bus width configuration component 510-b that corresponds to a pin configuration. [0154] Bus width configuration component 510-b may be configured to determine a quantity of pins and/or cycles to use for receiving a row C/A signal over row C/A channel 565-b. For example, bus width configuration component 510-b may determine a quantity of pins and/or cycles to use for receiving a row C/A command based on a quantity of active signal paths in row C/A channel 565-b or a bus width configuration message received over row C/A channel 565-b. Bus width configuration component 510-b may be similarly configured to dynamically determine a quantity of pins and/or cycles to use for receiving a C/A signal over column C/A channel 570-b. [0155] Bus width configuration component 510-b may be further configured to store one or more values corresponding to the determined quantity of pins and/or cycles for receiving over row C/A channel 565-b and column C/A channel 570-b— e.g., in a bus width configuration register that is included within or external to bus width configuration component 510-b. [0156] Bus width configuration component 510-b may indicate, or cause the bus width configuration register to indicate, the one or more stored values to row command decoder 535-b and column command decoder 540-b, and row command decoder 535-b and column command decoder 540-b may decode commands based on a pin configuration and corresponding timings that correspond to the stored value. For example, if bus width configuration component 510-b indicates that six of first pin 545-b to Nth pin 550-b are active and a binary modulation scheme and DDR signaling are used, then row command decoder 535-b may decode a received 21-bit row activation command over two clock cycles. Bus width configuration component 510-b may also indicate, or cause the bus width configuration register to indicate, the one or more stored values to driver component 515-b, and driver component 515-b may activate/deactivate drivers corresponding to the activated/deactivated pins indicated by the stored value/pin configuration. [0157] Driver component 515-b may be configured to receive parallel row and column C/A signals based on a pin configuration indicated by bus width configuration component 510-b. Driver component 515-b may receive row C/A signals using one or more of first pin 545-b to Nth pin 550-b and may receive column C/A signals using one or more of second pin 555-b to Mth pin 560-b. In some examples, driver component 515-b may be configured to receive row C/A signals using one or more of first pin 545-b to Nth pin 550-b and one or more of second pin 555-b to Mth pin 560-b.
And driver component 515-b may be configured to receive column C/A signals using one or more of first pin 545-b to Nth pin 550-b and one or more of second pin 555-b to Mth pin 560-b. By receiving row C/A signals over drivers that are connected to one or more of second pin 555-b to Mth pin 560-b, and vice versa, the quantity of pins used to convey row and column C/A signaling may be reduced. [0158] First pin 545-b to Nth pin 550-b may be configured to provide a signaling interface between device controller 500-b and row C/A channel 565-b. In some cases, the quantity of pins (N) included in device controller 500-b is selected to accommodate the transmission of the largest row C/A combination within a single cycle. Second pin 555-b to Mth pin 560-b may be configured to provide a signaling interface between device controller 500-b and column C/A channel 570-b. In some cases, the quantity of pins (M) included in device controller 500-b is selected to accommodate the transmission of the largest column C/A combination within a single cycle. [0159] FIG. 6A illustrates an exemplary timing diagram for configuring a C/A channel as disclosed herein. [0160] FIGs. 6A through 6C illustrate one or more operations of a dynamically configurable C/A channel. Timing diagram 600-a depicts an exemplary communication between host device 400-b described with reference to FIG. 4B and device controller 500-b described with reference to FIG. 5B over column channel 610-a and row channel 615-a. In the example of FIGs. 6A through 6C, DDR signaling and a binary modulation scheme may be used to communicate information. [0161] Clock signal 605-a may provide a signal that informs a receiving device, such as device controller 500-b, when to latch (e.g., store) information communicated over a channel. For example, a receiving device may process a signal on a channel after identifying that clock signal 605-a has transitioned from a low voltage to a high voltage. As discussed herein, when DDR signaling is used, first information may be signaled at a rising edge of a clock pulse and second information may be signaled at a falling edge of the clock pulse. Thus, when DDR signaling is used, the receiving device may also process a second signal on the channel after identifying that clock signal 605-a has transitioned from the high voltage to the low voltage. [0162] As depicted in FIG. 6A, a read command may be transmitted over column channel 610-a every cycle 620-a and alternating ACT and PRE commands may be transmitted over row channel 615-a every cycle 620-a. In some cases, an ACT command may include twenty-one (21) bits— 1 command bit, 4 bank address bits, and 16 row address bits— a read may include thirteen (13) bits— 3 command bits, 4 bank address bits, and 6 column address bits— and a PRE command may include seven (7) bits— 3 command bits and 4 bank address bits. An equation to determine the quantity of pins (N_pins) used to communicate over column channel 610-a and row channel 615-a may be represented as follows: N_pins = RoundUp(N_bits / (2 × N_cycles)), where N_bits represents the quantity of bits used to convey a command, N_cycles represents the quantity of cycles used to transmit the command, and N_cycles is multiplied by two (2) because DDR signaling is used. [0163] In some cases, a maximum quantity of pins dedicated to a C/A channel may be determined to be used to communicate signals based on identifying a quantity of pins capable of transmitting row and column commands within a single cycle 620-a.
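As a hedged numeric check of this equation (command sizes taken from [0162]; the helper name is an assumption), in Python:

    import math

    def n_pins(n_bits: int, n_cycles: int) -> int:
        # N_pins = RoundUp(N_bits / (2 * N_cycles)); the factor of two
        # reflects DDR signaling (two bits per pin per cycle).
        return math.ceil(n_bits / (2 * n_cycles))

    ACT_BITS, READ_BITS, PRE_BITS = 21, 13, 7

    # Single-cycle transmission of the largest commands:
    assert n_pins(ACT_BITS, 1) == 11   # row bus
    assert n_pins(READ_BITS, 1) == 7   # column bus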
Applying the above equation, eleven (11) row pins may be used by host device 400-b and device controller 500-b to transmit and receive row commands over row channel 615-a— e.g., based on the largest row command including 21 bits. Similarly, seven (7) column pins may be used to transmit and receive column commands over column channel 610-a— e.g., based on the read command including 13 bits. Accordingly, in the example as described with reference to FIG. 6A, a maximum quantity of pins dedicated to a C/A channel may equal eighteen (18) pins. In some cases, a C/A channel is configured to use the maximum quantity of pins as a default configuration to ensure that control information and data may be transferred at a desired rate. [0164] FIG. 6B illustrates an exemplary timing diagram for configuring a C/A channel as disclosed herein. [0165] Timing diagram 600-b depicts one or more communications between host device 400-b as described with reference to FIG. 4B and device controller 500-b as described with reference to FIG. 5B over column channel 610-b and row channel 615-b. Clock signal 605-b may be similarly configured to clock signal 605-a as described with reference to FIG. 6A. [0166] In one example, the quantity of pins used to convey C/A signaling may be reduced relative to FIG. 6A. A quantity of pins used to convey C/A signaling may be reduced to conserve power or to free up pins to be used for data transmissions. The quantity of pins may be reduced based on determining that a quantity of commands in a command queue is below a threshold, based on determining that a page hit rate is above a threshold, or a combination thereof. In some examples, the row pin count may be reduced to six (6) row pins and the column pin count may be reduced to four (4) column pins, reducing the total quantity of pins used by a C/A channel to ten (10) pins. [0167] As illustrated in FIG. 6B, as the quantity of pins is reduced, the length of some commands may increase relative to FIG. 6A. For example, applying the equation provided above, if the row pin count is reduced to six (6) row pins and the column pin count is reduced to four (4) column pins, then a quantity of cycles 620-b used to communicate ACT and read commands may be increased to two (2) cycles 620-b. In some cases, a quantity of cycles 620-b used to communicate a PRE command may remain the same— e.g., at one (1) cycle 620-b. In some cases, when the length of a read command is increased, a burst length may also be increased to ensure full utilization of a data bus— e.g., if a length of a read command is doubled, then a burst length may also be doubled. [0168] In another example, the quantity of pins used to convey C/A signaling may be increased relative to FIG. 6C. As discussed herein, a quantity of pins used to convey C/A signaling may be increased to avoid latency in command execution. As also discussed herein, the quantity of pins may be increased based on determining that a quantity of commands in a command queue is above a threshold, based on determining that a predicted page hit rate is below a threshold, or the like. [0169] FIG. 6C illustrates an exemplary timing diagram for configuring a C/A channel as disclosed herein. [0170] Timing diagram 600-c depicts an exemplary communication between host device 400-b as described with reference to FIG. 4B and device controller 500-b as described with reference to FIG. 5B over column channel 610-c and row channel 615-c.
Clock signal 605-c may be similarly configured to clock signal 605-a and clock signal 605-b of FIGs. 6A and 6B. [0171] In one example, the quantity of pins used to convey C/A signaling may be reduced relative to FIGs. 6A and/or 6B, similar to the discussion in FIG. 6B— e.g., to increase power savings or to make additional pins available for data communications. [0172] In FIG. 6C, the row pin count may be reduced to four (4) row pins and the column pin count may be maintained at four (4) column pins, reducing the total quantity of pins used by a C/A channel to eight (8) pins. Using the above equation, a quantity of cycles 620-c used to communicate ACT commands may be increased to three (3) cycles 620-c and a quantity of cycles 620-c used to communicate read commands may be maintained at two (2) cycles. In some cases, a quantity of cycles 620-c used to communicate a PRE command may remain the same— e.g., at one (1) cycle 620-c. [0173] In some cases, the row pin count may be reduced to four (4) pins after determining that a page hit rate is equal to or greater than two (2)— or based on determining that a rate of ACT commands is below a threshold. [0174] FIG. 7 illustrates a process flow for configuring a C/A channel as disclosed herein. [0175] Process flow 700 may illustrate one or more communications between external controller 705 and memory device controller 710 over a dynamically configurable C/A channel or one or more functions performed by the external controller 705 and/or the memory device controller 710. External controller 705 may be an example of an external memory controller 105, a controller 305, a host device 400-a, or a host device 400-b, as described with reference to FIGs. 1, 3, 4A, and 4B. Memory device controller 710 may be an example of a memory device 110, a device memory controller 155, a local memory controller 165, a local memory controller 260, a memory device 310, a device controller 500-a, or a device controller 500-b, as described with reference to FIGs. 1 through 3, 5A, and 5B. [0176] At 715, external controller 705 may determine a C/A channel configuration. Determining a C/A channel configuration may include determining a quantity of pins of the C/A channel to configure for communicating C/A signaling with memory device controller 710. As discussed herein, the quantity of pins to configure may be based on prior or future operation of a memory system, a size of different types of C/A signaling, a state of the system (e.g., a power-on or initialization state), a quantity of commands in a command queue, and the like. For example, determining the quantity of pins to configure may be based on identifying a quantity of unused command slots in a prior time period— e.g., external controller 705 may disable pins based on determining that the quantity of unused command slots exceeds a threshold value. In some cases, determining the quantity of pins to configure may be based on an operating parameter of memory device controller 710. For example, external controller 705 may disable pins based on determining that a temperature of memory device controller 710 is below a threshold— e.g., because a rate of refresh commands may be reduced at lower temperatures.
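One way such a selection policy might look in code (purely illustrative; the function name, thresholds, and pin counts below are assumptions, not part of the disclosure):

    def choose_row_pin_count(queued_commands: int,
                             predicted_page_hit_rate: float,
                             max_pins: int = 11, reduced_pins: int = 6,
                             queue_threshold: int = 8,
                             hit_rate_threshold: float = 0.75) -> int:
        # A shallow command queue or a high predicted page hit rate suggests
        # the C/A channel is underutilized, so fewer pins (at the cost of
        # more cycles per command) may be acceptable.
        if queued_commands < queue_threshold or predicted_page_hit_rate > hit_rate_threshold:
            return reduced_pins
        return max_pins

    assert choose_row_pin_count(queued_commands=2, predicted_page_hit_rate=0.9) == 6
    assert choose_row_pin_count(queued_commands=20, predicted_page_hit_rate=0.3) == 11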
In another example, external controller 705 may disable pins to support a power consumption level of memory device controller 710. [0177] In some cases, determining a C/A channel configuration also includes determining a quantity of cycles for communicating different types of C/A signaling with memory device controller 710. As discussed herein, the quantity of cycles for a type of C/A signaling may be adapted as the quantity of pins is adapted— e.g., the quantity of cycles for a particular type of C/A signaling may be increased as the quantity of pins is decreased. In some cases, determining the C/A channel configuration may include identifying a default C/A channel configuration (e.g., at power-on). In some cases, a bus width configuration component located at external controller 705 determines the C/A channel configuration and stores a value that corresponds to the current C/A channel configuration. [0178] In some cases, determining the C/A channel configuration may include determining whether a single C/A channel or dual C/A channels (e.g., a row C/A and column C/A channel) are used. For example, when dual C/A channels are used, external controller 705 may separately configure the different C/A channels. [0179] At 720, external controller 705 may indicate the C/A channel configuration to memory device controller 710. In some cases, external controller 705 transmits an explicit indication of the C/A channel configuration to memory device controller 710. For example, external controller 705 may transmit, in a bus width configuration message, a value stored at a bus width configuration component that corresponds to a particular C/A channel configuration. Memory device controller 710 may receive the bus width configuration message and determine a configuration of the C/A channel based on the value in the bus width configuration message. In some examples, external controller 705 may transmit the value of the bus width configuration component with an access command, such as a RD, WR, ACT, or PRE command. In another example, external controller 705 may implicitly indicate the C/A channel configuration by driving certain pins to either high or low levels after a reset occurs— e.g., pins that are driven to high levels may be identified by memory device controller 710 as being disabled. In some cases, external controller 705 may transmit the indication over different pins than those used to transmit C/A signaling to memory device controller 710. [0180] At 725, memory device controller 710 may determine a C/A channel configuration. In some cases, memory device controller 710 may determine the C/A channel configuration without receiving an indication from external controller 705— e.g., by identifying a default configuration at power up. In other cases, memory device controller 710 may determine the C/A channel configuration based on an explicit (e.g., using a bus width configuration message) or implicit (e.g., using a voltage pattern) indication received from external controller 705. In some cases, the memory device controller 710 may determine a C/A channel configuration using the same procedures described with reference to 715 and/or the external controller 705. [0181] At 730, external controller 705 may configure the C/A channel.
Configuring the C/A channel may include activating and deactivating particular pins at external controller 705 based on the determined quantity of pins. For example, if the determined quantity of pins is less than an available quantity of pins, then external controller 705 may disable the unused pins. This may also be referred to as configuring a C/A bus width. In some examples, external controller 705 may program a C/A bus width by driving certain pins to either a high or low level (e.g., after a reset occurs). Configuring the C/A channel may also include configuring one or more components (e.g., an encoder, driver component, and/or transmitter) at external controller 705 that are coupled with the C/A channel based on the determined quantity of pins and/or quantity of cycles. For example, external controller 705 may configure an encoder to generate a command that spans multiple cycles if the quantity of pins for transmitting the command is reduced. Also, external controller 705 may configure a driver component to disable drivers that are coupled with disabled pins. [0182] At 735, memory device controller 710 may configure the C/A channel. Configuring the C/A channel may include activating and deactivating particular pins at memory device controller 710 based on the determined quantity of pins. For example, if the determined quantity of pins is less than an available quantity of pins, then memory device controller 710 may disable the unused pins. Configuring the C/A channel may also include configuring one or more components (e.g., a decoder and/or receiver) at memory device controller 710 that are coupled with the C/A channel based on the determined quantity of pins and/or quantity of cycles. For example, memory device controller 710 may configure a decoder to process a received signal that spans multiple cycles if a quantity of pins is disabled. Also, memory device controller 710 may configure a receiver to disable receiving components or drivers that are coupled with disabled pins. [0183] As discussed herein, in some cases, memory device controller 710 may adapt the C/A channel configuration during operation of a system that includes the external controller 705 and the memory device controller 710 (e.g., the C/A channel configuration may be adapted on-the-fly). For example, the C/A channel configuration may be adapted based on receiving the bus width configuration message. In some cases, memory device controller 710 observes a time-out duration between reconfiguring the C/A channel and processing a subsequent command. During a time-out period, memory device controller 710 and/or external controller 705 may refrain from communicating any additional commands. Memory device controller 710 and/or external controller 705 may use the time-out period to reconfigure components (e.g., encoders, decoders, receivers, transmitters, and/or drivers) to support the current C/A channel configuration. The time-out period may also allow previous commands to be processed according to the previous C/A channel configuration. [0184] In some cases, external controller 705 and memory device controller 710 both include a command truth table for each of the different C/A channel configurations that may be used to identify and process commands received in C/A signaling. [0185] At 740, external controller 705 may transmit C/A signaling to memory device controller 710 over the C/A channel according to the current C/A channel configuration.
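For a sense of what transmission under a reduced bus width involves, the following Python sketch slices a command into per-edge pin vectors, most significant bit first; the helper name and the zero-padding of the final transfer are assumptions made for illustration, not the disclosed framing:

    import math

    def serialize(command: int, n_bits: int, pins: int):
        # Yield one list of pin bits per transfer; with DDR signaling two
        # transfers are driven per clock cycle (rising and falling edges).
        transfers = math.ceil(n_bits / pins)
        padded = command << (transfers * pins - n_bits)
        for t in range(transfers):
            chunk = (padded >> ((transfers - 1 - t) * pins)) & ((1 << pins) - 1)
            yield [(chunk >> (pins - 1 - p)) & 1 for p in range(pins)]

    # A 21-bit ACT command over six row pins: four transfers, i.e., two
    # DDR cycles, matching the FIG. 6B example.
    edges = list(serialize(0b1_0011_0000_0000_1111_1111, 21, pins=6))
    assert len(edges) == 4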
That is, external controller 705 may transmit C/A signaling using the enabled pins located at external controller 705 and according to the corresponding command timings— e.g., if six (6) row pins are enabled, external controller 705 may transmit an ACT command over a row C/A channel using the six (6) enabled row pins and over two cycles. When a dual C/A channel is used, external controller 705 may transmit row C/A signaling (e.g., ACT commands) over a row C/A channel and may transmit column C/A signaling (e.g., read and write commands) over a column C/A channel. [0186] Also, memory device controller 710 may receive the C/A signaling from external controller 705 over the C/A channel according to the current C/A channel configuration. That is, memory device controller 710 may receive the C/A signaling using the enabled pins located at memory device controller 710 and according to the corresponding command timings, where the enabled pins at memory device controller 710 may correspond to the enabled pins located at external controller 705. For example, if six (6) row pins are enabled, memory device controller 710 may receive an ACT command over a row C/A channel using the six (6) enabled row pins and over two cycles. In some cases, memory device controller 710 identifies that an ACT command is, or is in the process of being, received based on determining that a first information bit of a received signal is a logic “0” or some other predetermined symbol (e.g., in the case of modulation schemes that include three or more symbols). [0187] After receiving the C/A signaling, memory device controller 710 may decode the received C/A signaling to identify the command and address information. Memory device controller 710 may then provide the decoded command and memory address to a memory array, which may access the memory cell(s) located at the identified memory address according to the decoded command. [0188] The order that some operations described above are performed may be rearranged, omitted, and/or performed in parallel. For example, the determination at 725 may be performed after or concurrently with the configuration at 730. [0189] FIG. 8 illustrates a block diagram representing aspects of a controller 800 that supports configuring a C/A channel as disclosed herein. The controller 800 may be an example of an external memory controller 105, a device memory controller 155, a local memory controller 165, a local memory controller 260, or a combination thereof as described with reference to FIGs. 1 and 2. [0190] Controller 800 includes biasing component 810, timing component 815, bus width configuration component 820, and command processing component 825. Controller 800 may be an example of external memory controller 105, device memory controller 155, local memory controller 165, local memory controller 260, or controller 305, as described with reference to FIGs. 1 through 3. [0191] Biasing component 810 may be configured to apply voltages and/or currents in a memory system. In some cases, biasing component 810 may be configured to apply voltages to a channel that connects devices in a memory system. [0192] Timing component 815 may be configured to provide one or more clock signals throughout a memory system.
In some cases, timing component 815 may be configured to trigger the biasing component 810 to apply voltages or currents in the memory system. Timing component 815 may be further configured to trigger other components in the memory system to process signals and to perform memory operations. [0193] In some examples, bus width configuration component 820 may be configured to determine a first quantity of pins of a channel for receiving one or more commands (e.g., row or column commands and/or addresses) from a host device and/or a second quantity of cycles for receiving the one or more commands from the host device. Bus width configuration component 820 may also be configured to configure a component (e.g., a receiver and/or decoder) coupled with the channel based on the first quantity of pins and the second quantity of cycles. Command processing component 825 may be configured to receive a command over the channel based on configuring the component. [0194] In some cases, bus width configuration component 820 may also be configured to receive, from the host device, an indication of the first quantity of pins for receiving the one or more commands, or the second quantity of cycles for the one or more commands, or both, and may determine the first quantity of pins and the second quantity of cycles based on the indication. In some cases, the indication is a second command. In other cases, the indication is included in an access command. In some cases, the indication is communicated over different pins of a channel than the pins used to communicate the access command. In some cases, bus width configuration component 820 may also be configured to determine that a duration since receiving the indication satisfies a timing threshold, wherein receiving the command occurs after the timing threshold is satisfied. [0195] In some cases, bus width configuration component 820 may also configure a second component (e.g., a second receiver and/or a second decoder) coupled with a second channel for receiving one or more commands (e.g., row commands and/or addresses) from the host device based on a third quantity of pins of the second channel and a fourth quantity of cycles for the one or more commands of the second channel. Command processing component 825 may be configured to receive a second command over the second channel based on configuring the second component. [0196] In some cases, bus width configuration component 820 may determine the first quantity of pins and/or the second quantity of cycles based on identifying an initialization event (e.g., a start-up, power-on, or reset event) of a memory device. In some cases, bus width configuration component 820 may determine the first quantity of pins and/or the second quantity of cycles based on identifying an operation parameter of the memory device.
In some cases, the operation parameter comprises a power consumption parameter associated with the memory device, a third quantity of commands in a buffer of the memory device that satisfies a criteria, or both. [0197] In some cases, bus width configuration component 820 may determine the first quantity of pins for receiving the one or more commands based at least in part on determining the second quantity of cycles for the one or more commands. [0198] In some examples, bus width configuration component 820 may be configured to determine a first quantity of pins of a channel for transmitting one or more commands to a memory device and a second quantity of cycles for transmitting the one or more commands (e.g., row or column commands and/or addresses) to the memory device. Bus width configuration component 820 may also be configured to configure a component (e.g., a driver or encoder) coupled with the channel based at least in part on the first quantity of pins of the channel and the second quantity of cycles for transmitting the one or more commands. Command processing component 825 may be configured to transmit, to the memory device, a command over the channel based at least in part on configuring the component. [0199] In some cases, bus width configuration component 820 and/or command processing component 825 may be configured to transmit, to the memory device, an indication (e.g., a bus width configuration command) of the first quantity of pins for transmitting the one or more commands, or the second quantity of cycles for the one or more commands, or both. In some cases, command processing component 825 may determine that a duration since transmitting the indication satisfies a timing threshold, wherein transmitting the command occurs after the timing threshold is satisfied based at least in part on the determination. In some cases, the timing threshold may indicate a lower-bound of a duration since receiving an indication. In some cases, the lower-bound may be a minimum amount of time. [0200] In some cases, bus width configuration component 820 may be further configured to configure a second component (e.g., a second driver and/or second encoder) coupled with a second channel for transmitting one or more commands to the memory device based at least in part on a third quantity of pins of the second channel and a fourth quantity of cycles for the one or more commands of the second channel. In some cases, command processing component 825 may be configured to transmit a second command (e.g., a row command and/or address) over the second channel based at least in part on configuring the second component. [0201] In some examples, bus width configuration component 820 may determine the first quantity of pins or the second quantity of cycles based on identifying an operation parameter of the memory device. In some cases, the operation parameter is a power consumption parameter associated with the memory device, a third quantity of commands in a buffer of the memory device that satisfies a criteria, or both. In some cases, bus width configuration component 820 may determine the first quantity of pins or the second quantity of cycles based on identifying a start-up event of a memory device. [0202] FIG. 9 illustrates a flowchart of a method 900 or methods for configuring a C/A channel in accordance with various examples as disclosed herein. In some cases, the method 900 may be implemented by a controller 800 as described with reference to FIG.
8. [0203] At block 905, the method may include determining a first quantity of pins of a channel for receiving one or more commands from a host device and a second quantity of cycles for receiving the one or more commands from the host device, as described with reference to FIGs. 1 through 7. In certain examples, the operations of block 905 may be performed or facilitated by a controller, as described with reference to FIGs. 1, 2, 3, and 8. [0204] At block 910, the method may include configuring a component coupled with the channel based at least in part on the first quantity of pins and the second quantity of cycles, the component comprising a receiver or a decoder or both, as described with reference to FIGs. 1 through 7. In certain examples, the operations of block 910 may be performed or facilitated by a controller, as described with reference to FIGs. 1, 2, 3, and 8. [0205] At block 915, the method may include receiving a command over the channel based at least in part on configuring the component, as described with reference to FIGs. 1 through 7. In certain examples, the operations of block 915 may be performed or facilitated by a controller, as described with reference to FIGs. 1, 2, 3, and 8. [0206] In some examples, an apparatus as described herein may perform a method or methods, such as the method 900. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for determining a first quantity of pins of a channel for receiving one or more commands from a host device and a second quantity of cycles for receiving the one or more commands from the host device; configuring a component coupled with the channel based at least in part on the first quantity of pins and the second quantity of cycles, the component comprising a receiver or a decoder or both; and receiving a command over the channel based at least in part on configuring the component. [0207] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for receiving, from the host device, an indication of the first quantity of pins for receiving the one or more commands, or the second quantity of cycles for the one or more commands, or both, wherein determining the first quantity of pins and the second quantity of cycles is based at least in part on the indication. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, receiving the indication includes receiving, from the host device over the channel, a second command that includes the indication. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, receiving the indication includes receiving, from the host device over the channel, an access command that includes the indication.
In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the indication is communicated over one or more pins of the channel that are different than one or more pins over which the access command is communicated.[0208] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for determining that a duration since receiving the indication satisfies a timing threshold, wherein receiving the command occurs after the timing threshold is satisfied.[0209] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for configuring a second component coupled with a second channel for receiving one or more commands from the host device based at least in part on a third quantity of pins of the second channel and a fourth quantity of cycles for the one or more commands of the second channel, the second component comprising a second receiver or a second decoder or both; and receiving a second command over the second channel based at least in part on configuring the second component.[0210] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for identifying an initialization event of a memory device, wherein determining the first quantity of pins and determining the second quantity of cycles is based at least in part on identifying the initialization event.[0211] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for identifying an operation parameter of a memory device, wherein determining the first quantity of pins or determining the second quantity of cycles is based at least in part on identifying the operation parameter of the memory device. In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the operation parameter comprises a power consumption parameter associated with the memory device, a third quantity of commands in a buffer of the memory device that satisfies a criteria, or both.[0212] In some examples of the method, apparatuses, and non-transitory computer- readable medium described herein, determining the first quantity of pins for receiving the one or more commands is based at least in part on determining the second quantity of cycles for the one or more commands.[0213] FIG. 10 illustrates a flowchart of a method 1000 or methods for configuring a C/A channel in accordance with various examples as disclosed herein. In some cases, the method 1000 may be implemented by a controller 800 as described with reference to FIG. 8. [0214] At block 1005, the method may include determining a first quantity of pins of a channel configured to transmit one or more commands to a memory device and a second quantity of cycles for transmitting the one or more commands to the memory device, as described with reference to FIGs. 1 through 7. In certain examples, the operations of block 1005 may be performed or facilitated by a controller, as described with reference to FIGs. 
1, 2, 3, and 8.[0215] At block 1010, the method may include configuring a component coupled with the channel based at least in part on the first quantity of pins of the channel and the second quantity of cycles for transmitting the one or more commands, the component comprising a driver or an encoder or both, as described with reference to FIGs. 1 through 7. In certain examples, the operations of block 1010 may be performed or facilitated by a controller, as described with reference to FIGs. 1, 2, 3, and 8.[0216] At block 1015, the method may include transmitting, to the memory device, a command over the channel based at least in part on configuring the component, as described with reference to FIGs. 1 through 7. In certain examples, the operations of block 1015 may be performed or facilitated by a controller, as described with reference to FIGs. 1, 2, 3, and 8.[0217] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1000. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for determining a first quantity of pins of a channel for transmitting one or more commands to a memory device and a second quantity of cycles for transmitting the one or more commands to the memory device; configuring a component coupled with the channel based at least in part on the first quantity of pins of the channel and the second quantity of cycles for transmitting the one or more commands, the component comprising a driver or an encoder or both; and transmitting, to the memory device, a command over the channel based at least in part on configuring the component.[0218] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for transmitting, to the memory device, an indication of the first quantity of pins for transmitting the one or more commands, or the second quantity of cycles for the one or more commands, or both. 
[0219] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for determining that a duration since transmitting the indication satisfies a timing threshold, wherein transmitting the command occurs after the timing threshold is satisfied based at least in part on the determination. [0220] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for configuring a second component coupled with a second channel for transmitting one or more commands to the memory device based at least in part on a third quantity of pins of the second channel and a fourth quantity of cycles for the one or more commands of the second channel, the second component comprising a second driver or a second encoder or both; and transmitting a second command over the second channel based at least in part on configuring the second component. [0221] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for identifying an operation parameter of the memory device, wherein determining the first quantity of pins or determining the second quantity of cycles is based at least in part on identifying the operation parameter of the memory device. [0222] In some examples of the method, apparatuses, and non-transitory computer-readable medium described herein, the operation parameter is a power consumption parameter associated with the memory device, a third quantity of commands in a buffer of the memory device that satisfies a criteria, or both. [0223] Some examples of the method, apparatuses, and non-transitory computer-readable medium described herein may further include operations, features, means, or instructions for identifying a start-up event of a memory device, wherein determining the first quantity of pins and determining the second quantity of cycles is based at least in part on identifying the start-up event. [0224] It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Furthermore, examples from two or more of the methods may be combined. [0225] In some examples, an apparatus or device may perform aspects of the functions described herein using general-purpose or special-purpose hardware.
For example, an apparatus or device may include a first receiver comprising a plurality of pins for receiving one or more commands over a first channel; a first decoder coupled with the first receiver and for decoding the one or more commands received over the first channel; and a register coupled with the first receiver and the first decoder and for configuring a width of the first channel based at least in part on the one or more commands received over the first channel. [0226] In some examples of the apparatus or device, the register is configured to determine a first quantity of pins of the first channel for receiving the one or more commands and is configured to determine a second quantity of cycles for the one or more commands. [0227] In some examples, the apparatus or device includes a second receiver comprising a plurality of pins for receiving one or more commands over a second channel; and a second decoder coupled with the second receiver and for decoding the one or more commands received over the second channel, wherein the register is for configuring a width of the second channel based at least in part on the one or more commands received over the second channel, the register coupled with the second receiver and the second decoder. In some examples of the apparatus or device, the first channel is for communicating row commands and the second channel is for communicating column commands. In some examples of the apparatus or device, a first quantity of pins of the first channel is different than and is independently configurable from a second quantity of pins of the second channel. [0228] In some examples, an apparatus or device may perform aspects of the functions described herein using general-purpose or special-purpose hardware. For example, an apparatus or device may include a first driver comprising a plurality of pins for transmitting one or more commands over a first channel; a first encoder coupled with the first driver and for encoding the one or more commands transmitted over the first channel; and a register coupled with the first driver and the first encoder and for configuring a width of the first channel based at least in part on the one or more commands transmitted over the first channel. [0229] In some examples of the apparatus or device, the register is for determining a first quantity of pins of the first channel for transmitting the one or more commands and determining a second quantity of cycles for the one or more commands. [0230] In some examples, the apparatus or device includes a second driver comprising a plurality of pins for transmitting one or more commands over a second channel; and a second encoder coupled with the second driver and for encoding the one or more commands transmitted over the second channel, wherein the register is for configuring a width of the second channel based at least in part on the one or more commands transmitted over the second channel, the register coupled with the second driver and the second encoder. [0231] An apparatus is described.
The apparatus may include a first receiver including a set of pins configured to receive one or more commands over a first channel, a first decoder coupled with the first receiver and configured to decode one or more commands received over the first channel, and a register coupled with the first receiver and the first decoder and programmable to configure a width of the first channel based on one or more commands received over the first channel. [0232] In some examples, the register may be configured to determine a first quantity of pins of the first channel for receiving the one or more commands and may be configured to determine a second quantity of cycles for the one or more commands. [0233] Some examples of the apparatus may include a second receiver including a set of pins configured to receive one or more commands over a second channel, and a second decoder coupled with the second receiver and configured to decode one or more commands received over the second channel, where the register may be programmable to configure a width of the second channel based on one or more commands received over the second channel, the register coupled with the second receiver and the second decoder. [0234] In some examples, the first channel may be configured to communicate row commands and the second channel may be configured to communicate column commands. [0235] In some examples, a first quantity of pins of the first channel may be different than and may be independently configurable from a second quantity of pins of the second channel. [0236] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths. [0237] As used herein, the term “virtual ground” refers to a node of an electrical circuit that is held at a voltage of approximately zero volts (0V) but that is not directly coupled with ground. Accordingly, the voltage of a virtual ground may temporarily fluctuate and return to approximately 0V at steady state. A virtual ground may be implemented using various electronic circuit elements, such as a voltage divider consisting of operational amplifiers and resistors. Other implementations are also possible. “Virtual grounding” or “virtually grounded” means connected to approximately 0V. [0238] The terms “electronic communication,” “conductive contact,” “connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components.
The conductive path between connected components may be a direct conductive path between the components, or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some cases, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors.[0239] The term “coupling” refers to the condition of moving from an open-circuit relationship between components, in which signals are not presently capable of being communicated between the components over a conductive path, to a closed-circuit relationship between components, in which signals are capable of being communicated between the components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.[0240] The term “isolated” refers to a relationship between components in which signals are not presently capable of flowing between the components. Components are isolated from each other if there is an open circuit between them. For example, two components separated by a switch that is positioned between the components are isolated from each other when the switch is open. When a controller isolates two components, the controller effects a change that prevents signals from flowing between the components using a conductive path that previously permitted signals to flow.[0241] The term “layer” used herein refers to a stratum or sheet of a geometrical structure. Each layer may have three dimensions (e.g., height, width, and depth) and may cover at least a portion of a surface. For example, a layer may be a three-dimensional structure where two dimensions are greater than a third, e.g., a thin film. Layers may include different elements, components, and/or materials. In some cases, one layer may be composed of two or more sublayers. In some of the appended figures, two dimensions of a three-dimensional layer are depicted for purposes of illustration. Those skilled in the art will, however, recognize that the layers are three-dimensional in nature.[0242] The devices discussed herein, including a memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0243] A switching component or a transistor discussed herein may represent a field-effect transistor (FET) and comprise a three-terminal device including a source, drain, and gate. The terminals may be connected to other electronic elements through conductive materials, e.g., metals. The source and drain may be conductive and may comprise a heavily-doped, e.g., degenerate, semiconductor region.
The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), then the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), then the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be “on” or “activated” when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be “off” or “deactivated” when a voltage less than the transistor's threshold voltage is applied to the transistor gate.[0244] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details to provide an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0245] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0246] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0247] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0248] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof.
If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, “or” as used in a list of items (for example, a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”[0249] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[0250] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure.
Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein. |
The invention discloses ISA opcode parameterization and opcode space layout randomization. An embodiment of an apparatus may include a memory to store configuration information; an instruction decoder to decode an instruction having one or more fields including an opcode field; and circuitry communicatively coupled to the instruction decoder and the memory, the circuitry to determine whether an opcode value in an opcode field of the instruction corresponds to a changed opcode value in stored configuration information, the stored configuration information associating one or more changed opcode values with corresponding original opcode values, and if so, the circuitry is to decode the instruction based on one of the original opcode values that is related to the changed opcode value in the stored configuration information. Other embodiments are disclosed and claimed. |
1. An apparatus comprising: memory for storing configuration information; an instruction decoder for decoding an instruction having one or more fields including an opcode field; and circuitry, communicatively coupled to the instruction decoder and the memory, the circuitry for: determining whether an opcode value in the opcode field of the instruction corresponds to an altered opcode value in stored configuration information that correlates one or more altered opcode values with corresponding original opcode values, and, if so determined, decoding the instruction based on one of the original opcode values that is related to the altered opcode value in the stored configuration information. 2. The apparatus of claim 1, wherein the circuitry is further for: storing a random opcode value in the configuration information for the one or more altered opcode values; and correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information. 3. The apparatus of any one of claims 1 to 2, further comprising: software for altering the instruction-level encoded content of a program at load time to replace the relevant original opcode values with altered opcode values from said stored configuration information. 4. The apparatus of claim 3, wherein the software is further for: adding the configuration information to the context of the program to make the program and the configuration information portable. 5. The apparatus of claim 3, wherein the software is further for: generating a first set of random opcode values for a first instance of the program in the configuration information; and generating a second set of random opcode values, different from the first set of random opcode values, in the configuration information for a second instance of the program. 6. The apparatus of claim 5, wherein the software is further for: limiting the random opcode values in the configuration information to values that avoid conflict with predetermined non-variable instruction patterns. 7. The apparatus of any one of claims 1 to 2, wherein the memory comprises: one or more programmable system registers for storing the configuration information. 8. A method comprising: storing configuration information in a programmable register coupled to a decoder, the configuration information correlating one or more altered opcode values with corresponding original opcode values; fetching an instruction having an opcode field; determining whether the opcode value in the opcode field of the instruction corresponds to an altered opcode value in the stored configuration information; and, if so determined, decoding, by the decoder, the instruction based on one of the original opcode values associated with the altered opcode value in the stored configuration information. 9. The method of claim 8, further comprising: storing a random opcode value in the configuration information for the one or more altered opcode values; and correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information. 10. The method of any one of claims 8 to 9, further comprising: altering the instruction-level encoded content of a program at load time to replace the relevant original opcode values with altered opcode values from the stored configuration information. 11. The method of claim 10, further comprising: adding the configuration information to the context of the program to make the program and the configuration information portable. 12. The method of claim 10, further
comprising: generating a first set of random opcode values for a first instance of the program in the configuration information; and generating a second set of random opcode values, different from the first set of random opcode values, in the configuration information for a second instance of the program. 13. The method of claim 12, further comprising: limiting the random opcode values in the configuration information to values that avoid conflict with predetermined non-variable instruction patterns. 14. An apparatus comprising: storage circuitry for storing configuration information correlating one or more altered opcode values with corresponding original opcode values; fetch circuitry for fetching a single instruction, the single instruction to include an opcode field; opcode change circuitry, communicatively coupled to the storage circuitry and the fetch circuitry, for determining whether an opcode value in the opcode field of the single instruction corresponds to an altered opcode value in the stored configuration information, and, if determined to be so, replacing the altered opcode value with one of the original opcode values that is related to the altered opcode value in said stored configuration information; decoder circuitry, communicatively coupled to the opcode change circuitry, for decoding the single instruction based on an opcode value originating from the opcode change circuitry; and execution circuitry, communicatively coupled to the decoder circuitry, for executing the decoded instruction. 15. The apparatus of claim 14, further comprising configuration circuitry communicatively coupled to the storage circuitry for: storing a random opcode value in the configuration information for the one or more altered opcode values; and correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information. 16. The apparatus of any one of claims 14 to 15, further comprising operating system code for: altering the instruction-level encoded content of a program at load time to replace the associated original opcode values with the altered opcode values from the configuration information. 17. The apparatus of claim 16, wherein the operating system code is further for: adding the configuration information to the context of the program to make the program and the configuration information portable. 18. The apparatus of claim 16, wherein the operating system code is further for: generating a first set of random opcode values for a first instance of the program in the configuration information; and generating a second set of random opcode values, different from the first set of random opcode values, in the configuration information for a second instance of the program. 19. The apparatus of claim 18, wherein the operating system code is further for: limiting the random opcode values in the configuration information to values that avoid conflict with predetermined non-variable instruction patterns. 20.
The apparatus of any one of claims 14 to 15, wherein the single instruction further includes a field for an identifier of a first source operand, identifying one of a vector register and a memory location. 21. An apparatus comprising: means for storing configuration information in a programmable register coupled to a decoder, the configuration information correlating one or more altered opcode values with corresponding original opcode values; means for fetching an instruction having an opcode field; means for determining whether an opcode value in the opcode field of the instruction corresponds to an altered opcode value in the stored configuration information; and means for decoding, by the decoder, the instruction based on one of the original opcode values associated with the altered opcode value in the stored configuration information, if so determined. 22. The apparatus of claim 21, further comprising: means for storing a random opcode value in the configuration information for the one or more altered opcode values; and means for correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information. 23. The apparatus of any one of claims 21 to 22, further comprising: means for altering the instruction-level encoded content of a program at load time to replace the associated original opcode values with altered opcode values from said stored configuration information. 24. The apparatus of claim 23, further comprising: means for adding the configuration information to the context of the program to make the program and the configuration information portable. 25. The apparatus of claim 23, further comprising: means for generating, in the configuration information, a first set of random opcode values for a first instance of the program; and means for generating, in the configuration information, a second set of random opcode values different from the first set of random opcode values for a second instance of the program. |
ISA opcode parameterization and opcode space layout randomization. Background. 1. Technical Field. The present disclosure generally relates to processor technology and instruction decoding technology. 2. Background Art. Program hardening techniques at the instruction set architecture (ISA) level typically involve privilege restrictions (e.g., page permissions for R/W/X, etc.) and/or control flow integrity (e.g., restricting a program's ability to perform arbitrary control flow relative to a set of preset rules). Permission-based restrictions, while effective, still leave programs that can be interpreted and exploited by malware. The actual program content/representation is well known and can be turned against the program itself (e.g., via return-oriented programming (ROP), jump-oriented programming (JOP), etc.). Control flow integrity mechanisms, while effectively reducing the malware's ability to exploit and use arbitrary parts of a program with infinite flexibility, still leave a program with well-known content/representation that can be interpreted and exploited by malware. Description of the Drawings. By way of example and not limitation, various embodiments of the invention are illustrated in the figures of the accompanying drawings, in which: FIG. 1 is a block diagram of an example of an apparatus according to an embodiment; FIGS. 2A-2B are flowcharts of examples of methods according to embodiments; FIG. 3 is a block diagram of an example of an apparatus according to an embodiment; FIG. 4 is a block diagram of an example of hardware according to an embodiment; FIG. 5A is a diagram of an example of a process flow according to an embodiment; FIG. 5B is a diagram of another example of a process flow according to an embodiment; FIG. 5C is a diagram of another example of a process flow according to an embodiment; FIG. 5D is a diagram of another example of a process flow according to an embodiment; FIG. 6A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register-renaming out-of-order issue/execution pipeline in accordance with an embodiment of the present invention; FIG. 6B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register-renaming out-of-order issue/execution architecture core to be included in a processor, according to an embodiment of the present invention; FIGS. 7A-7B illustrate block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks in a chip (including other cores of the same type and/or different types); FIG. 8 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have an integrated graphics device, according to an embodiment of the present invention; FIGS. 9-12 are block diagrams of exemplary computer architectures; and FIG. 13 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to an embodiment of the present invention. Detailed Description. Embodiments discussed herein provide techniques and mechanisms for instruction decoding in various ways. The techniques described herein may be implemented in one or more electronic devices.
Non-limiting examples of electronic devices that can utilize the techniques described herein include mobile and/or stationary devices of any kind, such as cameras, cellular phones, computer terminals, desktop computers, e-readers, fax machines, automated service computers, laptops, netbook computers, notebook computers, Internet appliances, payment terminals, personal digital assistants, media players and/or recorders, servers (e.g., blade servers, rack mount servers, combinations thereof, etc.), set-top boxes, smart phones, tablet PCs, ultra-mobile PCs, wired phones, combinations of the above, and the like. More generally, the techniques described herein may be employed in any of a variety of electronic devices that include integrated circuits operable to decode instructions. For example, embodiments may be implemented in a processing element of an electronic device, such as a central processing unit (CPU), a graphics processing unit (GPU), or the like. In the following description, numerous details are discussed in order to provide a more thorough explanation of embodiments of the present disclosure. However, it will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the embodiments of the present disclosure. Note that in the corresponding drawings of the embodiments, signals are represented by lines. Some lines may be thicker, to indicate a greater number of constituent signal paths, and/or have arrows at one or more ends, to indicate the direction of information flow. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or logic unit. Any represented signal may actually include one or more signals that may travel in either direction, as dictated by design needs or preferences, and may be implemented using any suitable type of signal scheme. Throughout the specification, and in the claims, the term "connected" means a direct connection, such as an electrical, mechanical, or magnetic connection, between the things that are connected, without any intermediary devices. The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, or magnetic connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices. The term "circuit" or "module" may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function. The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meanings of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on". The term "device" may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, and the like. Generally, a device is a three-dimensional structure with a plane along the x-y direction of an x-y-z Cartesian coordinate system and a height along the z direction.
The plane of the device may also be the plane of an apparatus that comprises the device. The term "scaling" generally refers to converting a design (schematic and layout) from one process technology to another process technology and subsequently being reduced in layout area. The term "scaling" also generally refers to downsizing layout and devices within the same technology node. The term "scaling" may also refer to adjusting (e.g., slowing down or speeding up - i.e., scaling down, or scaling up, respectively) a signal frequency relative to another parameter (e.g., a power supply level). The terms "substantially," "close," "approximately," "near," and "about" generally mean within +/- 10% of a target value. For example, unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal," and "approximately equal" mean that there is no more than incidental variation between the things so described. In the art, such variation is typically no more than +/- 10% of a predetermined target value. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. Unless otherwise specified, the use of the ordinal adjectives "first," "second," "third," etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims (if any) are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms "over," "under," "front side," "back side," "top," "bottom," "above," "below," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures, or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis, and therefore may be relative to an orientation of a device. Hence, a first material that is "above" a second material in the context of a figure provided herein may also be "below" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another material may be directly in contact, or may have one or more intervening materials. Moreover, a material disposed between two materials may be directly in contact with the two layers, or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. The term "between" may be employed in the context of the z-axis, x-axis, or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials.
Thus, a material "between" two other materials may be in contact with either of those materials, or it may be coupled to the two other materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices. As used throughout this specification, and in the claims, a list of items joined by the term "at least one of" or "one or more of" can mean any combination of the listed terms. For example, the phrase "at least one of A, B, or C" can mean A; B; C; A and B; A and C; B and C; or A, B, and C. It should be noted that those elements of the drawings that have the same reference numerals (or names) as elements of any other drawing can operate or function in any manner similar to that described, but are not limited to such. Furthermore, the various elements of combinational and sequential logic discussed in the present disclosure may pertain to physical structures (such as AND gates, OR gates, or XOR gates), or to synthesized or otherwise optimized collections of devices implementing logical structures that are Boolean equivalents of the logic under discussion. Referring to FIG. 1, an embodiment of an apparatus 100 may include a memory 111 to store configuration information, an instruction decoder 112 to decode instructions having one or more fields including an opcode field, and circuitry 113 communicatively coupled to the instruction decoder 112 and the memory 111. The circuitry 113 may be configured to determine whether an opcode value in an opcode field of an instruction corresponds to an altered opcode value in the stored configuration information, which correlates one or more altered opcode values with corresponding original opcode values, and, if so determined, the circuitry 113 may be further configured to decode the instruction based on the one of the original opcode values that is related to the altered opcode value in the stored configuration information. For example, instructions that were communicated to hardware in the past via their original opcodes may now be communicated to the hardware using the altered opcodes. In some embodiments, the circuitry 113 may be further configured to store a random opcode value in the configuration information for the one or more altered opcode values, and to correlate a corresponding original opcode value with each of the random opcode values in the stored configuration information. Some embodiments may further include software 115 for altering the instruction-level encoded content of a program 116 at load time to replace the relevant original opcode values with altered opcode values from the stored configuration information. For example, the software 115 may be configured to add the configuration information to the context of the program 116 to make the program 116 and the configuration information portable. In some embodiments, the software 115 may be further configured to generate a first set of random opcode values in the configuration information for a first instance of the program 116, and to generate, in the configuration information for a second instance of the program 116, a second set of random opcode values different from the first set of random opcode values. For example, the software 115 may also be configured to limit the random opcode values in the configuration information to values that avoid conflicts with predetermined non-variable instruction patterns.
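As an illustration only, the following minimal Python sketch models the decode-side behavior described above for the circuitry 113; the decode table contents, class name, and opcode values are hypothetical stand-ins rather than an actual hardware implementation.

# Stand-in decode table for a few single-byte opcodes (hypothetical values).
ORIGINAL_DECODE_TABLE = {
    0x9A: "CALL_INDIRECT",
    0xFF: "JMP_INDIRECT",
    0xC3: "RET",
    0x90: "NOP",
}

class OpcodeRemapDecoder:
    """Models circuitry 113: consult stored configuration information that
    correlates altered opcode values with corresponding original opcode values."""

    def __init__(self, altered_to_original):
        self.altered_to_original = dict(altered_to_original)

    def decode(self, opcode):
        # If the fetched opcode matches an altered value in the stored
        # configuration, decode it as the related original opcode;
        # otherwise decode the opcode unchanged.
        effective = self.altered_to_original.get(opcode, opcode)
        return ORIGINAL_DECODE_TABLE[effective]

decoder = OpcodeRemapDecoder({0xCE: 0xC3})  # this program re-encoded RET as 0xCE
assert decoder.decode(0xCE) == "RET"        # the altered encoding decodes as RET
assert decoder.decode(0x90) == "NOP"        # non-variable opcodes are unaffected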
In some embodiments, the software 115 may include operating system (OS) code, such as a loader, a patching layer, and the like. In some embodiments, the memory 111 may include one or more programmable system registers 114 for storing the configuration information. The registers 114 may include N configurable registers that may be reprogrammed with alternate opcode settings. For example, the registers 114 may be connected to the instruction decoder 112, and when the registers 114 are activated, each register value may be used for instruction determination, rather than a fixed pattern. In some embodiments, the programmable system registers 114 may have multiple copies to support multiple logical processors for each central processing unit (CPU) core (e.g., a CPU core may contain two logical processors, each of which will have its own independent set of registers). Embodiments of the memory 111, the instruction decoder 112, the circuitry 113, and/or the registers 114 may be incorporated into a processor including, for example, the core 990 (FIG. 6B), the cores 1102A-N (FIG. 8, FIG. 12), the processor 1210 (FIG. 9), the coprocessor 1245 (FIG. 9), the processor 1370 (FIG. 10-FIG. 11), the processor/coprocessor 1380 (FIG. 10-FIG. 11), the coprocessor 1338 (FIG. 10-FIG. 11), the coprocessor 1520 (FIG. 12), and/or the processors 1614, 1616 (FIG. 13). In particular, embodiments of the circuitry 113 and/or the registers 114 may be incorporated into the decode unit 940 (FIG. 6B). Referring to FIGS. 2A-2B, an embodiment of a method 200 may include, at block 221, storing configuration information in a programmable register coupled to a decoder, the configuration information correlating one or more altered opcode values with corresponding original opcode values; at block 222, fetching an instruction having an opcode field; at block 223, determining whether the opcode value in the opcode field of the instruction corresponds to an altered opcode value in the stored configuration information; and, if so determined, at block 224, decoding, by the decoder, the instruction based on one of the original opcode values associated with the altered opcode value in the stored configuration information. Some embodiments of the method 200 may further include: at block 225, storing a random opcode value in the configuration information for the one or more altered opcode values; and, at block 226, correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information. Some embodiments of the method 200 may further include, at block 227, altering the instruction-level encoded content of a program at load time to replace the associated original opcode values with the altered opcode values from the stored configuration information. For example, the method 200 may include, at block 228, adding the configuration information to the context of the program to make the program and the configuration information portable. For example, portability may involve including the configuration information as part of the thread and process context, which allows the configuration information to automatically follow a program as it is suspended and resumed, context switched, or even migrated between CPUs. Some embodiments of the method 200 may further include: at block 229, generating a first set of random opcode values in the configuration information for a first instance of the program; and, at block 230, generating, in the configuration information for a second instance of the program, a second set of random opcode values different from the first set of random opcode values.
The method 200 may also include, at block 231, limiting the random opcode values in the stored configuration information to values that avoid conflicts with predetermined non-variable instruction patterns. For example, embodiments of the operations in blocks 227-231 may be performed by OS code such as a loader, a patching layer, or the like. Referring to FIG. 3, an embodiment of an apparatus 300 may include storage circuitry 334 for storing configuration information correlating one or more altered opcode values with corresponding original opcode values (e.g., prior to instruction decoding); fetch circuitry 335 for fetching a single instruction that includes an opcode field; and opcode change circuitry 336, communicatively coupled to the storage circuitry 334 and the fetch circuitry 335, for determining whether the opcode value in the opcode field of the single instruction corresponds to an altered opcode value in the stored configuration information, and, if so determined, for replacing the altered opcode value with the one of the original opcode values that is related to the altered opcode value in the stored configuration information. For example, instructions that were communicated to hardware in the past via their original opcodes may now be communicated to the hardware using the altered opcodes. The apparatus 300 may further include: decoder circuitry 337, communicatively coupled to the opcode change circuitry 336, for decoding the single instruction based on the opcode value originating from the opcode change circuitry 336; and execution circuitry 338, communicatively coupled to the decoder circuitry 337, for executing the decoded instruction. Some embodiments of the apparatus 300 may further include configuration circuitry 339, communicatively coupled to the storage circuitry 334, for storing a random opcode value in the configuration information for the one or more altered opcode values, and for correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information. For example, the single instruction fetched by the fetch circuitry 335 may further include a field for an identifier of a first source operand, to identify one of a vector register and a memory location. The apparatus 300 may further include operating system (OS) code 340 configured to alter the instruction-level encoded content of a program at load time to replace the relevant original opcode values with the altered opcode values from the configuration information. For example, the OS code 340 may be configured to add the configuration information to the context of the program to make the program and the configuration information portable. In some embodiments, the OS code 340 may be further configured to generate a first set of random opcode values in the configuration information for a first instance of the program, and to generate, in the configuration information for a second instance of the program, a second set of random opcode values different from the first set of random opcode values. For example, the OS code 340 may be configured to limit the random opcode values stored in the configuration information to values that avoid conflicts with predetermined non-variable instruction patterns. In some embodiments, a typical flow of an instruction would begin with the following operations: the fetch circuitry 335 fetches the instruction, the decoder circuitry 337 decodes the fetched instruction, and the execution circuitry 338 executes the decoded instruction.
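As a rough software analogy of that flow, and under the same hypothetical opcode values as the earlier sketch, the following Python fragment chains the four stages; the function names and the trace-based stand-in for execution are illustrative assumptions, not the actual circuit behavior.

def opcode_change(opcode, stored_config):
    # Opcode change circuitry 336: map an altered opcode back to its original.
    return stored_config.get(opcode, opcode)

def run(program, stored_config, decode_table):
    executed = []
    for opcode in program:                               # fetch circuitry 335
        original = opcode_change(opcode, stored_config)  # opcode change circuitry 336
        mnemonic = decode_table[original]                # decoder circuitry 337
        executed.append(mnemonic)                        # stand-in for execution circuitry 338
    return executed

# A two-instruction program in which RET was re-encoded as 0xCE:
print(run([0x90, 0xCE], {0xCE: 0xC3}, {0x90: "NOP", 0xC3: "RET"}))  # ['NOP', 'RET']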
Advantageously, the operation of the decoder circuitry 337 is enhanced by the storage circuitry 334, the opcode change circuitry 336, and the configuration circuitry 339. In some embodiments, one or more of the storage circuitry 334, the opcode change circuitry 336, and the configuration circuitry 339 may be co-located with the decoder circuitry 337 in the same hardware block. Embodiments of the storage circuitry 334, the fetch circuitry 335, the opcode change circuitry 336, the decoder circuitry 337, the execution circuitry 338, and/or the configuration circuitry 339 may be incorporated into a processor including, for example, the core 990 (FIG. 6B), the cores 1102A-N (FIGS. 8, 12), the processor 1210 (FIG. 9), the coprocessor 1245 (FIG. 9), the processor 1370 (FIGS. 10-11), the processor/coprocessor 1380 (FIGS. 10-11), the coprocessor 1338 (FIGS. 10-11), the coprocessor 1520 (FIG. 12), and/or the processors 1614, 1616 (FIG. 13). In particular, embodiments of the opcode change circuitry 336, the decoder circuitry 337, and/or the configuration circuitry 339 may be incorporated into the decode unit 940 (FIG. 6B). Some embodiments provide techniques for instruction set architecture (ISA) opcode parameterization. For example, some embodiments provide techniques for wildcard opcodes and/or opcode space layout randomization (OSLR). Program exploits are often crafted using architecture-specific techniques that tamper with and exploit existing program code and logic to perform malicious/unintended actions. In crafting these attacks, malware and exploits can take advantage of well-known, standardized facts about a class of central processing units (CPUs) (e.g., x86, ARM, PowerPC, MIPS, etc.), because programs running on a given type of machine are configured to use specific binary encodings of the various ISA features. Given the fact that all programs for a particular architecture are built against a fixed/stable set of ISA encodings, some attacks are well positioned to work. Most traditional program hardening techniques seek to constrain the behavior of the program rather than to change the program representation. Control flow integrity mechanisms carry dynamic overhead that is detrimental to runtime execution (e.g., shadow stack maintenance, ENDBRANCH (end branch) execution checks, etc.). There are also techniques that seek to encrypt/transform a disk/memory image of the program. However, once the program code is retrieved, the code is often in clear text and represented by a stable and well-documented ISA encoding that can be attacked. Encrypted code images are effective in protecting the image from outside observers/attackers, but at runtime such schemes still leave a known/stable program that can be interpreted and exploited by malware. Address space layout randomization (ASLR) refers to an architecture-independent hardening technique that seeks to change where a program is located at load time/run time so that exploitation assuming a fixed address layout is less effective. Some embodiments may allow a program to change its instruction-level encoded content via wildcard opcodes and/or OSLR at load time, advantageously providing some degree of hardening to frustrate exploits that seek to leverage a fixed/stable set of ISA encodings. For example, at load time, a program may configure the hardware to convey that a given set of fixed ISA features is re-encoded in a particular way (e.g., using an opcode map configuration) that may differ from all other programs (e.g., within a machine and across platforms).
The program can then utilize the OSLR patch to convert the program's binary to employ the chosen opcode map, customizing the program to use the new encoding, which the hardware now understands for that particular program. Embodiments of OSLR techniques change/randomize opcode values at load time/runtime so that exploitation assuming fixed opcode values is less effective. A program and its opcode mapping configuration can be considered portable because the configuration is part of the program's context. Thus, the opcode mapping configuration travels with the program as it migrates and context switches across the computing system. The opcode mapping configuration chosen by the program is different from what the attacker expected/assumed, to the extent that any attacker/exploit attempting to tamper with the program will be mitigated. Advantageously, some embodiments provide a level of protection without the runtime overhead penalty inherent in other control flow integrity protection schemes or sandboxing schemes. In some embodiments, wildcard opcodes and OSLR allow a program to mutate its executable representation in a way that makes the program resistant to attacks/tampering that seek to exploit standardized ISA encodings. Mutation techniques allow programs to change their representation while still remaining executable, even in the face of context switches/migrations and other programs that may use alternative encodings/mutations. In some embodiments, OSLR uses program representation variation to thwart attacks that require specific assumptions about ISA content and representation. Advantageously, embodiments can be utilized by OSLR-enabled processors to protect sensitive ISA features (e.g., indirect control flow, sensitive instructions, etc.). According to some embodiments, the software ecosystem may produce mutated programs that are more secure against attacks/exploits that require an understanding of a particular ISA encoding at runtime. The degree of variability need not be too high to have a high impact. In some embodiments, a selected (fixed) set of ISA features typically used to exploit a given CPU architecture may be all that is required to make OSLR effective. Furthermore, the degree of flexibility of OSLR does not require changes to both instruction length decoding and opcode selection. In some embodiments, to simplify the OSLR hardware implementation, the variation may be limited to opcode selection. Additionally, the wildcard settings of an OSLR configuration can be restricted to prevent the software configuration from conflicting with existing, non-variable instruction patterns. Checking for overlap between wildcard instructions and fixed/non-variable instructions does not need to be done in hardware, and can be performed by a program that checks for exclusiveness against the fixed patterns. Applying the constraints outside of hardware simplifies the hardware implementation of OSLR. In some embodiments, OSLR techniques may utilize programmable registers as part of the instruction decode logic of the front end of the CPU. For example, a register can be programmed to represent the mutated program's use of a particular instruction (e.g., return, or RET), as opposed to a hard-coded immediate value embedded in the decoder logic. For example, the decoder logic may be configured to perform comparisons against the registers, rather than hard-coded comparisons to fixed binary patterns. FIG. 4 illustrates an embodiment of hardware 400 for processing a set of instructions 401.
As shown, storage 403 stores a set of instructions 401 to be executed. Instructions from the set of instructions 401 are received by decode circuitry 405. For example, the decode circuitry 405 receives the instructions from fetch logic/circuitry. An instruction includes fields for an opcode, first and second sources, and a destination. In some embodiments, the sources and destination are registers, and in other embodiments one or more of the sources and destination are memory locations. In some embodiments, the opcode specifies which arithmetic operation is to be performed. More detailed embodiments of at least one instruction format will be detailed later. The decode circuitry 405 decodes the instruction into one or more operations. In some embodiments, the decoding includes generating a plurality of micro-operations to be executed by execution circuitry, such as execution circuitry 409. The decode circuitry 405 also decodes instruction prefixes. In some embodiments, register renaming, register allocation, and/or scheduling circuitry 407 provides functionality for one or more of the following: 1) renaming logical operand values to physical operand values (e.g., using a register alias table in some embodiments); 2) assigning status bits and flags to the decoded instruction; and 3) scheduling the decoded instruction (e.g., using reservation stations in some embodiments) for execution on execution circuitry out of an instruction pool. Registers (register file) and/or memory 408 store data as operands of the instruction to be operated on by the execution circuitry 409. Exemplary register types include packed data registers, general purpose registers, and floating point registers. The execution circuitry 409 executes the decoded instruction. In some embodiments, retirement/write-back circuitry 411 architecturally commits the resulting destination register into the registers or memory 408 and retires the instruction. An example of the format of an arithmetic recurrence instruction is VXBARARITH DSTREG, SRC1, SRC2. In some embodiments, VXBARARITH{B/W/D/Q} is the opcode mnemonic of the instruction. ARITH can be multiplication, addition, subtraction, division, etc. DSTREG is a field for the packed data destination register operand. SRC1 and SRC2 are fields for the sources, such as packed data registers and/or memory. The storage 403 may also store an OSLR configuration 413, which is used to program OSLR registers 415 in the decode circuitry 405 to represent the program's altered use of particular opcodes. For example, the decode circuitry 405 may be configured to perform a comparison of the opcode field of an instruction against an opcode stored in the OSLR registers 415 as part of a decode operation. OSLR's programmable interface allows specific ISA-level instructions (mnemonics) to be mapped to different specific opcodes on a program-by-program basis (and even at runtime, if dynamic OSLR remapping and repatching is done). The program's OSLR configuration 413 must be loaded into hardware (e.g., placed in the OSLR registers 415) for the program to run successfully, in much the same way that a program's context (e.g., register state) must be loaded or restored before the program can run. Referring to FIG. 5A, an embodiment of a process flow 500 illustrates how a wildcard opcode may be applied to an example RET opcode. Using x86 as an example, a return instruction (RET) can have the standard raw opcode 0xC3. When Program A is compiled by a standard compiler, the RET instruction may be represented as "0xC3" in the compiled code. At load time or runtime, the OSLR patching layer can apply an OSLR configuration to Program A.
The OSLR configuration may select the RET opcode as a wildcard opcode and randomly replace the RET opcode with another opcode value (e.g., within a limited set of random opcode values that do not conflict with non-variable opcode values). In the illustrated example, for instance #1 of Program A, the applied OSLR configuration X may change the opcode value of the RET instruction to 0x37. For instance #2 of Program A, the applied OSLR configuration Y may change the opcode value of the RET instruction to 0x06. At load time or runtime, the decoder can load the appropriate OSLR configuration (and, for example, other inputs) to correctly decode the changed opcode value. In another example, a CPU includes a decoder that allows restricted OSLR for four (4) sensitive instructions including: indirect CALL (call); indirect JMP (jump); return; and ENDBRANCH (end branch). Example OSLR configuration map settings may support programmable registers for holding the variable opcode selections, where each of the programmable registers is 64 bits, but the actual register contents are subject to the existing "fixed" opcode length for each variable instruction. The value held in a register is used at decode time to determine what byte pattern a program can use to communicate the use of the particular corresponding instruction. In the example x86 architecture, the raw opcodes for the selected instructions are as follows: INDIRECT_CALL_OPCODE = 0x9A; INDIRECT_JMP_OPCODE = 0xFF; RETURN_OPCODE = 0xC3; ENDBRANCH_OPCODE = 0xF30F1EFA. An example OSLR configuration (e.g., mutated to defend against an attacker assuming the above configuration) could change the opcodes for the selected instructions as follows: OSLR_CONFIG_INDIRECT_CALL_OPCODE = 0xD4; OSLR_CONFIG_INDIRECT_JMP_OPCODE = 0xD5; OSLR_CONFIG_RETURN_OPCODE = 0xCE; OSLR_CONFIG_ENDBRANCH_OPCODE = 0xF30F1EFB. These settings become part of the CPU context and are context-switched and persisted with the programs that utilize them. CPUs of embodiments that support OSLR may have these settings on a per-logical-processor basis, as programs running on different logical processors may utilize different OSLR configurations (e.g., in parallel). For example, the OSLR registers (which may be model-specific registers (MSRs), for example) become part of the "thread context block" managed by the operating system (OS) and/or virtual machine manager (VMM) and are automatically saved/restored. On load, for the current example, the program will mutate its existing binary to adopt the new opcode byte patterns according to the OSLR configuration. Mutation is a binary-to-binary transformation process involving instruction decoding and re-encoding to perform the transformation, and/or a patch map (e.g., metadata) that provides pointers (e.g., references or relocations) to all instructions that need patching. All patchable instructions can be mutated from their existing encodings to their corresponding mutated encodings using any suitable technique.
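For illustration, the sketch below applies the single-byte portion of the example configuration above as a binary-to-binary rewrite. Note that the per-byte loop naively treats every byte as an opcode, which a real patcher must not do (operand bytes can collide with opcode values), so a practical implementation would rely on instruction decoding or a patch map; the multi-byte ENDBRANCH pattern is omitted for simplicity.

# Single-byte raw opcodes and the example mutated values from the text.
FIXED = {"INDIRECT_CALL": 0x9A, "INDIRECT_JMP": 0xFF, "RETURN": 0xC3}
OSLR_CONFIG = {"INDIRECT_CALL": 0xD4, "INDIRECT_JMP": 0xD5, "RETURN": 0xCE}

def mutate(code, fixed, config):
    # Rewrite each original opcode byte to its altered value (naive sketch).
    original_to_altered = {fixed[name]: config[name] for name in fixed}
    return bytes(original_to_altered.get(b, b) for b in code)

program = bytes([0x90, 0xC3])                     # NOP; RET in the original encoding
print(mutate(program, FIXED, OSLR_CONFIG).hex())  # "90ce": RET is now encoded as 0xCE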
Furthermore, embodiments may be used with statically compiled code and/or dynamically generated compiled code (e.g., just-in-time (JIT) compiled code, etc.). Advantageously, embodiments of the OSLR techniques described herein allow copies of the same program to use different OSLR configuration mappings to increase the likelihood of failure of attacks/exploits seeking to tamper with the program, since the actual program representation is inconsistent with the attacker's assumptions. Furthermore, any attack that seeks to exploit standard-ISA-encoded program "gadgets" that involve indirect CALL, indirect JMP, return, and ENDBRANCH would be thwarted by the fact that the program representation does not match the attacker's expectations. Referring to FIG. 5B, an embodiment of a process flow 510 illustrates the process of establishing or generating a configuration (e.g., selecting random opcodes for up to N instructions). An entropy source (e.g., a random number generator) can be utilized to generate a valid set of random opcodes for a particular program invocation, subject to the rules and constraints of a particular architecture (e.g., where a constraint can be that only certain M-byte opcodes are possible). For a CPU architecture that allows up to N instructions that can support opcode parameterization, the process flow 510 can generate up to N instruction opcode substitutions that are then used to override the default opcode behavior for those particular instructions. The generated information can be considered an instance of an OSLR configuration and used to reprogram both the hardware (decoder) and the software (program binary). Referring to FIG. 5C, an embodiment of a process flow 520 illustrates the process of installing an OSLR configuration (e.g., setting the active configuration for a given CPU). As shown, the OSLR configuration is programmed into the hardware of a specific CPU by setting specific OSLR configuration registers in the decode logic of the CPU. For each instruction with a corresponding opcode to be overridden, a set of registers will be programmed to indicate to the hardware that: (a) this particular instruction will have its opcode overridden; and (b) the changed opcode for that instruction is a specific value (e.g., where that specific value comes from the OSLR configuration). After the OSLR configuration is installed, the OSLR-generated opcodes are interpreted by the CPU as the specific instructions being overridden. Referring to FIG. 5D, an embodiment of a process flow 530 illustrates the process of patching and generating a software binary with an OSLR configuration (e.g., enabling all necessary instructions to use the OSLR-specified opcodes). As shown, the process flow 530 takes the OSLR configuration and morphs a piece of software (Program A) in binary form to utilize the OSLR configuration. The morphing process is a patching process that rewrites the opcodes of all instructions whose opcodes are being overridden. Any suitable technique can be utilized for such patching. For example, two of the most straightforward techniques include: 1) a full binary scan, where all overridden instructions are found and then patched with the new opcodes; and 2) as shown, preparing a "map" of pointers, saved with the binary, to all instructions of potentially patchable types, and traversing the map for patching without traversing the entire binary.
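Tying process flows 510 and 530 together, the following sketch draws random substitute opcodes from an entropy source (avoiding an assumed set of non-variable patterns) and then patches a binary via an assumed patch-map layout of mnemonic-to-offset lists; all data layouts here are hypothetical simplifications, not a specified format.

import secrets

# Hypothetical set of reserved/non-variable single-byte patterns to avoid.
RESERVED = {0x0F, 0x66, 0x67, 0x90, 0x9A, 0xC3, 0xFF}

def generate_config(names, reserved):
    # Process flow 510: select a distinct random opcode for each instruction,
    # constrained to avoid conflicts with fixed/non-variable patterns.
    config, used = {}, set(reserved)
    for name in names:
        value = secrets.randbelow(256)
        while value in used:
            value = secrets.randbelow(256)
        used.add(value)
        config[name] = value
    return config

def patch_with_map(code, patch_map, config):
    # Process flow 530 (map variant): traverse only the recorded opcode
    # offsets instead of scanning the entire binary.
    out = bytearray(code)
    for name, offsets in patch_map.items():
        for off in offsets:
            out[off] = config[name]  # overwrite with the OSLR-chosen opcode
    return bytes(out)

config = generate_config(["RETURN", "INDIRECT_JMP"], RESERVED)
binary = bytes([0x90, 0xC3, 0x90, 0xFF])  # NOP; RET; NOP; indirect JMP
patched = patch_with_map(binary, {"RETURN": [1], "INDIRECT_JMP": [3]}, config)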
While the map-based mechanism may be more efficient in terms of the latency of the reconfiguration process, it places a slight burden on the compiler/code generator in producing the map itself.

The mutated program may pose a slight challenge to some CPU/software tooling infrastructure. For example, a legitimate program debugger may need to disassemble a running program. Debuggers typically use a fixed set of assumptions in their instruction decode logic. For OSLR, the debugger's decode logic must utilize an interface to discover the mutated instruction encodings. A legitimate or authorized debugger can do so by reading the OSLR configuration of the running program, which can be maintained as privileged information that is hidden from casual observers and attackers but made available to legitimate/authorized agents/tools. For example, opt-in/opt-out policies in software can be used to protect that information, similar to how privileged program state is not readily accessible to all agents of a particular program (e.g., MSR protection for sensitive information, etc.). Programs opted into debug mode allow qualified agents to view this information, while opted-out programs ensure that no agents are allowed to see the information.

For interoperability of mutated programs, where two or more programs interact but use alternate OSLR configurations, the programs can be configured to "swap" OSLR configurations when control transfers from one part of the program to another. In some embodiments, specialized instructions may be added to the architecture to facilitate this exchange. Additionally, restrictions can be set (e.g., in hardware, in software, or both) that prevent OSLR-mutated programs from directly interacting with non-mutated programs or with programs having different OSLR configurations (or, for example, from transferring control to them). Dynamically, interoperability can be created by re-patching programs to have compatible OSLR mappings (e.g., via metadata, full program scans, etc.).

Those skilled in the art will appreciate that various devices may benefit from the foregoing embodiments. The following exemplary core architectures, processors, and computer architectures are non-limiting examples of devices that may beneficially incorporate embodiments of the techniques described herein.

Exemplary Core Architecture, Processor and Computer Architecture

Processor cores may be implemented in different ways, in different processors, for different purposes. For example, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high-performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing.
Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor in the same package as the CPU but on a separate die; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special purpose logic or as special purpose cores, such as integrated graphics and/or scientific (throughput) logic); and 4) a system on a chip that may include the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality on the same die. An exemplary core architecture is described next, followed by exemplary processor and computer architectures.

Exemplary Core Architecture

In-order and out-of-order core diagrams

FIG. 6A is a block diagram illustrating an exemplary in-order pipeline and an exemplary register-renaming out-of-order issue/execution pipeline according to various embodiments of the invention. FIG. 6B is a block diagram illustrating an exemplary embodiment of an in-order architecture core and an exemplary register-renaming out-of-order issue/execution architecture core to be included in a processor according to various embodiments of the invention. The solid-line boxes in FIGS. 6A-6B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed-line boxes illustrates the register-renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 6A, a processor pipeline 900 includes a fetch stage 902, a length decode stage 904, a decode stage 906, an allocation stage 908, a renaming stage 910, a scheduling (also known as dispatch or issue) stage 912, a register read/memory read stage 914, an execute stage 916, a write back/memory write stage 918, an exception handling stage 922, and a commit stage 924.

FIG. 6B shows a processor core 990 including a front end unit 930 coupled to an execution engine unit 950, and both are coupled to a memory unit 970. The core 990 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 990 may be a special purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front end unit 930 includes a branch prediction unit 932 coupled to an instruction cache unit 934, which is coupled to an instruction translation lookaside buffer (TLB) 936, which is coupled to an instruction fetch unit 938, which is coupled to a decode unit 940. The decode unit 940 (or decoder) may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 940 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), and the like.
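For intuition only, the fragment below models in C how a decoder's lookup step could fold in the OSLR override check described earlier: before any ordinary decode-table lookup, the opcode bytes are compared against the mutated encodings held in the configuration registers and translated back to the original encoding. This is a behavioral sketch reusing the hypothetical oslr_config_t type from the earlier sketch; it is not a description of the hardware decode unit 940.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
/* Assumes oslr_entry_t and oslr_config_t from the earlier sketch. */

/* Behavioral model: translate a mutated opcode back to its original
 * encoding before ordinary decode. Returns true if an override matched,
 * in which case decode proceeds as if the original opcode appeared. */
static bool oslr_unmutate(const oslr_config_t *cfg, const uint8_t *bytes,
                          uint8_t out_original[4], uint8_t *out_len) {
    for (int i = 0; i < 4; i++) {
        const oslr_entry_t *e = &cfg->entries[i];
        if (memcmp(bytes, e->mutated, e->length) == 0) {
            memcpy(out_original, e->original, e->length);
            *out_len = e->length;
            return true;
        }
    }
    return false; /* not a variable instruction; decode normally */
}
```

A real decoder would also have to disambiguate overlapping encodings of different lengths; the linear scan above glosses over that for brevity.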
In one embodiment, the core 990 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 940 or otherwise within the front end unit 930). The decode unit 940 is coupled to a rename/allocator unit 952 in the execution engine unit 950.

The execution engine unit 950 includes the rename/allocator unit 952 coupled to a retirement unit 954 and a set 956 of one or more scheduler units. The scheduler unit(s) 956 represents any number of different schedulers, including reservation stations, a central instruction window, and the like. The scheduler unit(s) 956 is coupled to the physical register file unit(s) 958. Each of the physical register file unit(s) 958 represents one or more physical register files, of which different ones store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, and status (e.g., an instruction pointer that is the address of the next instruction to be executed). In one embodiment, the physical register file unit(s) 958 comprises a vector register unit, a write mask register unit, and a scalar register unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file unit(s) 958 is overlapped by the retirement unit 954 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 954 and the physical register file unit(s) 958 are coupled to the execution cluster(s) 960. The execution cluster(s) 960 includes a set 962 of one or more execution units and a set 964 of one or more memory access units. The execution units 962 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 956, physical register file unit(s) 958, and execution cluster(s) 960 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file unit(s), and/or execution cluster; and, in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 964). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set 964 of memory access units is coupled to the memory unit 970, which includes a data TLB unit 972 coupled to a data cache unit 974 coupled to a second level (L2) cache unit 976. In one exemplary embodiment, the memory access units 964 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 972 in the memory unit 970.
The instruction cache unit 934 is further coupled to the second level (L2) cache unit 976 in the memory unit 970. The L2 cache unit 976 is coupled to one or more other levels of cache and, eventually, to main memory.

By way of example, the exemplary register-renaming out-of-order issue/execution core architecture may implement the pipeline 900 as follows: 1) the instruction fetch 938 performs the fetch stage 902 and the length decode stage 904; 2) the decode unit 940 performs the decode stage 906; 3) the rename/allocator unit 952 performs the allocation stage 908 and the renaming stage 910; 4) the scheduler unit(s) 956 performs the scheduling stage 912; 5) the physical register file unit(s) 958 and the memory unit 970 perform the register read/memory read stage 914, and the execution cluster 960 performs the execute stage 916; 6) the memory unit 970 and the physical register file unit(s) 958 perform the write back/memory write stage 918; 7) various units may be involved in the exception handling stage 922; and 8) the retirement unit 954 and the physical register file unit(s) 958 perform the commit stage 924.

The core 990 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 990 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 934/974 and a shared L2 cache unit 976, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a first level (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

FIGS. 7A-7B illustrate block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip.
The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG. 7A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1002 and with its local subset 1004 of the second level (L2) cache, according to an embodiment of the invention. In one embodiment, an instruction decoder 1000 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1006 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1008 and a vector unit 1010 use separate register sets (respectively, scalar registers 1012 and vector registers 1014) and data transferred between them is written to memory and then read back in from the first level (L1) cache 1006, alternative embodiments of the invention may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset 1004 of the L2 cache is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 1004 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 1004 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1004 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction.

FIG. 7B is an expanded view of part of the processor core in FIG. 7A according to an embodiment of the invention. FIG. 7B includes an L1 data cache 1006A (part of the L1 cache 1006), as well as more detail regarding the vector unit 1010 and the vector registers 1014. Specifically, the vector unit 1010 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1028), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports mixing of the register inputs with a mixing unit 1020, numeric conversion with numeric conversion units 1022A-B, and replication of the memory input with a replication unit 1024. A write mask register 1026 allows masking of the resulting vector writes.

FIG. 8 is a block diagram of a processor 1100 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to embodiments of the invention.
The solid-line box in FIG. 8 illustrates a processor 1100 with a single core 1102A, a system agent 1110, and a set 1116 of one or more bus controller units, while the optional addition of the dashed-line boxes illustrates an alternative processor 1100 with multiple cores 1102A-N, a set 1114 of one or more integrated memory controller units in the system agent unit 1110, and special purpose logic 1108.

Thus, different implementations of the processor 1100 may include: 1) a CPU where the special purpose logic 1108 is integrated graphics and/or scientific (throughput) logic (which may include one or more cores) and the cores 1102A-N are one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor where the cores 1102A-N are a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor where the cores 1102A-N are a large number of general purpose in-order cores. Thus, the processor 1100 may be a general purpose processor, a coprocessor, or a special purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1100 may be part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of respective caches 1104A-N within the cores 1102A-N, a set 1106 of one or more shared cache units, and external memory (not shown) coupled to the set 1114 of integrated memory controller units. The set 1106 of shared cache units may include one or more mid-level caches, such as second level (L2), third level (L3), fourth level (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1112 interconnects the integrated graphics logic 1108, the set 1106 of shared cache units, and the system agent unit 1110/integrated memory controller unit(s) 1114, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1106 and the cores 1102A-N.

In some embodiments, one or more of the cores 1102A-N are capable of multithreading. The system agent 1110 includes those components coordinating and operating the cores 1102A-N. The system agent unit 1110 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 1102A-N and the integrated graphics logic 1108. The display unit is for driving one or more externally connected displays.

The cores 1102A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1102A-N may be capable of executing the same instruction set, while other cores may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

FIGS. 9-12 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 9, shown is a block diagram of a system 1200 in accordance with one embodiment of the present invention. The system 1200 may include one or more processors 1210, 1215, which are coupled to a controller hub 1220. In one embodiment, the controller hub 1220 includes a graphics memory controller hub (GMCH) 1290 and an input/output hub (IOH) 1250 (which may be on separate chips); the GMCH 1290 includes memory and graphics controllers to which are coupled memory 1240 and a coprocessor 1245; the IOH 1250 couples input/output (I/O) devices 1260 to the GMCH 1290. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1240 and the coprocessor 1245 are coupled directly to the processor 1210, and the controller hub 1220 is in a single chip with the IOH 1250.

The optional nature of the additional processors 1215 is denoted in FIG. 9 with broken lines. Each processor 1210, 1215 may include one or more of the processing cores described herein and may be some version of the processor 1100.

The memory 1240 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1220 communicates with the processor(s) 1210, 1215 via a multi-drop bus such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1295.

In one embodiment, the coprocessor 1245 is a special purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1220 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1210, 1215 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, and power consumption characteristics, and the like.

In one embodiment, the processor 1210 executes instructions that control data processing operations of a general type. Embedded within these instructions may be coprocessor instructions. The processor 1210 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1245. Accordingly, the processor 1210 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1245. The coprocessor(s) 1245 accepts and executes the received coprocessor instructions.

Referring now to FIG. 10, shown is a block diagram of a first more specific exemplary system 1300 in accordance with an embodiment of the present invention. As shown in FIG. 10, the multiprocessor system 1300 is a point-to-point interconnect system and includes a first processor 1370 and a second processor 1380 coupled via a point-to-point interconnect 1350.
Each of the processors 1370 and 1380 may be some version of the processor 1100. In one embodiment of the invention, the processors 1370 and 1380 are respectively the processors 1210 and 1215, while the coprocessor 1338 is the coprocessor 1245. In another embodiment, the processors 1370 and 1380 are respectively the processor 1210 and the coprocessor 1245.

The processors 1370 and 1380 are shown including integrated memory controller (IMC) units 1372 and 1382, respectively. The processor 1370 also includes, as part of its bus controller unit, point-to-point (P-P) interfaces 1376 and 1378; similarly, the second processor 1380 includes P-P interfaces 1386 and 1388. The processors 1370, 1380 may exchange information via a point-to-point (P-P) interface 1350 using P-P interface circuits 1378, 1388. As shown in FIG. 10, the IMCs 1372 and 1382 couple the processors to respective memories, namely a memory 1332 and a memory 1334, which may be portions of main memory locally attached to the respective processors.

The processors 1370, 1380 may each exchange information with a chipset 1390 via individual P-P interfaces 1352, 1354 using point-to-point interface circuits 1376, 1394, 1386, 1398. The chipset 1390 may optionally exchange information with the coprocessor 1338 via a high-performance interface 1339 and an interface 1392. In one embodiment, the coprocessor 1338 is a special purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 1390 may be coupled to a first bus 1316 via an interface 1396. In one embodiment, the first bus 1316 may be a Peripheral Component Interconnect (PCI) bus or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 10, various I/O devices 1314 may be coupled to the first bus 1316, along with a bus bridge 1318 that couples the first bus 1316 to a second bus 1320. In one embodiment, one or a plurality of additional processors 1315 are coupled to the first bus 1316. In one embodiment, the second bus 1320 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 1320 including, for example, a keyboard and/or mouse 1322, communication devices 1327, and a storage unit 1328 such as a disk drive or other mass storage device that may include instructions/code and data 1330. Further, an audio I/O 1324 may be coupled to the second bus 1320. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 10, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 11, shown is a block diagram of a second more specific exemplary system 1400 in accordance with an embodiment of the present invention. Like elements in FIGS. 10 and 11 bear like reference numerals, and certain aspects of FIG. 10 have been omitted from FIG. 11 in order to avoid obscuring other aspects of FIG. 11.

FIG. 11 illustrates that the processors 1370, 1380 may include integrated memory and I/O control logic ("CL") 1472 and 1482, respectively.
Thus, the CL 1472, 1482 include integrated memory controller units and include I/O control logic. FIG. 11 illustrates that not only are the memories 1332, 1334 coupled to the CL 1472, 1482, but also that I/O devices 1414 are coupled to the control logic 1472, 1482. Legacy I/O devices 1415 are coupled to the chipset 1390.

Referring now to FIG. 12, shown is a block diagram of an SoC 1500 in accordance with an embodiment of the present invention. Similar elements in FIG. 8 bear similar reference numerals. Also, dashed-line boxes are optional features on more advanced SoCs. In FIG. 12, the interconnect unit(s) 1502 is coupled to: an application processor 1510 that includes a set of one or more cores 1102A-N and shared cache unit(s) 1106; a system agent unit 1110; bus controller unit(s) 1116; integrated memory controller unit(s) 1114; a set 1520 of one or more coprocessors that may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1530; a direct memory access (DMA) unit 1532; and a display unit 1540 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1520 includes a special purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1330 illustrated in FIG. 10, may be applied to input instructions to perform the functions described herein and to generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or an interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions, stored on a machine-readable medium, which represent various logic in the processor that, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein.
Such representations, known as "IP cores," may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as a hardware description language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction into one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 13 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 13 shows that a program in a high-level language 1602 may be compiled using an x86 compiler 1604 to generate x86 binary code 1606 that may be natively executed by a processor 1616 with at least one x86 instruction set core. The processor 1616 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing: 1) a substantial portion of the instruction set of the Intel x86 instruction set core, or 2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
The x86 compiler 1604 represents a compiler operable to generate x86 binary code 1606 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1616 with at least one x86 instruction set core. Similarly, FIG. 13 shows that the program in the high-level language 1602 may be compiled using an alternative instruction set compiler 1608 to generate alternative instruction set binary code 1610 that may be natively executed by a processor 1614 without at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). An instruction converter 1612 is used to convert the x86 binary code 1606 into code that may be natively executed by the processor 1614 without an x86 instruction set core. This converted code is not likely to be the same as the alternative instruction set binary code 1610, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1612 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1606.

Techniques and architectures for instruction set architecture opcode parameterization are described herein. In the description above, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of certain embodiments. It will be apparent, however, to one skilled in the art that certain embodiments can be practiced without these specific details.
In other instances, structures and devices are shown in block diagram form in order to avoid obscuring the description.

Additional Notes and Examples

Example 1 includes an apparatus comprising: a memory for storing configuration information; an instruction decoder for decoding an instruction having one or more fields including an opcode field; and circuitry communicatively coupled to the instruction decoder and the memory for: determining whether an opcode value in the opcode field of the instruction corresponds to an altered opcode value in the stored configuration information that correlates one or more altered opcode values with corresponding original opcode values; and, if so determined, decoding the instruction based on one of the original opcode values that is correlated with the altered opcode value in the stored configuration information.

Example 2 includes the apparatus of Example 1, wherein the circuitry is further for: storing random opcode values for the one or more altered opcode values in the configuration information; and correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information.

Example 3 includes the apparatus of any of Examples 1-2, further comprising: software for altering the instruction-level encoded content of a program at load time to replace the relevant original opcode values with altered opcode values from the stored configuration information.

Example 4 includes the apparatus of Example 3, wherein the software is further for adding the configuration information to the context of the program to make the program and the configuration information portable.

Example 5 includes the apparatus of Example 3, wherein the software is further for: generating a first set of random opcode values in the configuration information for a first instance of the program; and generating, in the configuration information for a second instance of the program, a second set of random opcode values different from the first set of random opcode values.

Example 6 includes the apparatus of Example 5, wherein the software is further for limiting the random opcode values in the configuration information to values that avoid conflicts with predetermined non-variable instruction patterns.

Example 7 includes the apparatus of any of Examples 1-6, wherein the memory includes one or more programmable system registers for storing the configuration information.

Example 8 includes a method comprising: storing configuration information in a programmable register coupled to a decoder, the configuration information correlating one or more altered opcode values with corresponding original opcode values; fetching an instruction having an opcode field; determining whether an opcode value in the opcode field of the instruction corresponds to an altered opcode value in the stored configuration information; and, if so determined, decoding, by the decoder, the instruction based on one of the original opcode values associated with the altered opcode value in the stored configuration information.

Example 9 includes the method of Example 8, further comprising: storing random opcode values for the one or more altered opcode values in the configuration information; and correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information.

Example 10 includes the method of any of Examples 8-9, further comprising: altering the instruction-level encoded content of a program at load time to replace the relevant original opcode values with altered opcode values from the stored
configuration information.

Example 11 includes the method of Example 10, further comprising adding the configuration information to the context of the program to make the program and the configuration information portable.

Example 12 includes the method of Example 10, further comprising: generating, in the configuration information, a first set of random opcode values for a first instance of the program; and generating, in the configuration information for a second instance of the program, a second set of random opcode values different from the first set of random opcode values.

Example 13 includes the method of Example 12, further comprising limiting the random opcode values in the configuration information to values that avoid conflicts with predetermined non-variable instruction patterns.

Example 14 includes an apparatus comprising: storage circuitry for storing configuration information correlating one or more altered opcode values with corresponding original opcode values; fetch circuitry for fetching a single instruction, the single instruction to include an opcode field; an opcode change circuit, communicatively coupled to the storage circuitry and the fetch circuitry, for determining whether an opcode value in the opcode field of the single instruction corresponds to an altered opcode value in the stored configuration information and, if so determined, replacing the altered opcode value with one of the original opcode values associated with the altered opcode value in the stored configuration information; a decoder circuit, communicatively coupled to the opcode change circuit, for decoding the single instruction based on the opcode value provided from the opcode change circuit; and an execution circuit, communicatively coupled to the decoder circuit, for executing the decoded instruction.

Example 15 includes the apparatus of Example 14, further comprising a configuration circuit communicatively coupled to the storage circuitry for: storing random opcode values for the one or more altered opcode values in the configuration information; and correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information.

Example 16 includes the apparatus of any of Examples 14-15, further comprising operating system code for: altering the instruction-level encoded content of a program at load time to replace the relevant original opcode values with altered opcode values from the configuration information.

Example 17 includes the apparatus of Example 16, wherein the operating system code is further for adding the configuration information to the context of the program to make the program and the configuration information portable.

Example 18 includes the apparatus of Example 16, wherein the operating system code is further for: generating a first set of random opcode values in the configuration information for a first instance of the program; and generating, in the configuration information for a second instance of the program, a second set of random opcode values that differ from the first set of random opcode values.

Example 19 includes the apparatus of Example 18, wherein the operating system code is further for limiting the random opcode values in the configuration information to values that avoid conflicts with predetermined non-variable instruction patterns.

Example 20 includes the apparatus of any of Examples 14-19, wherein the single instruction further includes a field for an identifier of a first source operand for identifying
one of a vector register and a memory location.

Example 21 includes an apparatus comprising: means for storing configuration information in a programmable register coupled to a decoder, the configuration information correlating one or more altered opcode values with corresponding original opcode values; means for fetching an instruction having an opcode field; means for determining whether an opcode value in the opcode field of the instruction corresponds to an altered opcode value in the stored configuration information; and means for decoding, if so determined, the instruction by the decoder based on one of the original opcode values associated with the altered opcode value in the stored configuration information.

Example 22 includes the apparatus of Example 21, further comprising: means for storing random opcode values for the one or more altered opcode values in the configuration information; and means for correlating a corresponding original opcode value with each of the random opcode values in the stored configuration information.

Example 23 includes the apparatus of any of Examples 21-22, further comprising: means for altering the instruction-level encoded content of a program at load time to replace the relevant original opcode values with altered opcode values from the stored configuration information.

Example 24 includes the apparatus of Example 23, further comprising: means for adding the configuration information to the context of the program to make the program and the configuration information portable.

Example 25 includes the apparatus of Example 23, further comprising: means for generating, in the configuration information, a first set of random opcode values for a first instance of the program; and means for generating, in the configuration information for a second instance of the program, a second set of random opcode values that differ from the first set of random opcode values.

Example 26 includes the apparatus of Example 25, further comprising: means for limiting the random opcode values in the configuration information to values that avoid conflicts with predetermined non-variable instruction patterns.

Example 27 includes at least one non-transitory machine-readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to: store configuration information in a programmable register coupled to a decoder, the configuration information correlating one or more altered opcode values with corresponding original opcode values; fetch an instruction having an opcode field; determine whether an opcode value in the opcode field of the instruction corresponds to an altered opcode value in the stored configuration information; and, if so determined, decode the instruction by the decoder based on one of the original opcode values associated with the altered opcode value in the stored configuration information.

Example 28 includes the at least one non-transitory machine-readable medium of Example 27, comprising a plurality of further instructions that, in response to being executed on the computing device, cause the computing device to: store random opcode values for the one or more altered opcode values in the configuration information; and correlate a corresponding original opcode value with each of the random opcode values in the stored configuration information.

Example 29 includes the at least one non-transitory machine-readable medium of any of Examples 27-28, comprising a plurality of
further instructions that, in response to being executed on the computing device, cause the computing device to: alter the instruction-level encoded content of a program at load time to replace the relevant original opcode values with altered opcode values from the stored configuration information.

Example 30 includes the at least one non-transitory machine-readable medium of Example 29, comprising a plurality of further instructions that, in response to being executed on the computing device, cause the computing device to: add the configuration information to the context of the program to make the program and the configuration information portable.

Example 31 includes the at least one non-transitory machine-readable medium of Example 29, comprising a plurality of further instructions that, in response to being executed on the computing device, cause the computing device to: generate a first set of random opcode values in the configuration information for a first instance of the program; and generate, in the configuration information for a second instance of the program, a second set of random opcode values different from the first set of random opcode values.

Example 32 includes the at least one non-transitory machine-readable medium of Example 31, comprising a plurality of further instructions that, in response to being executed on the computing device, cause the computing device to: limit the random opcode values in the configuration information to values that avoid conflicts with predetermined non-variable instruction patterns.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those of ordinary skill in the computing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here generally understood as a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless expressly stated otherwise, as will be apparent from the discussion herein, it will be understood that throughout the description, discussions refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the registers and memory of the computer system into other data similarly represented as physical quantities within the computer system memory or registers or other such information storage, transmission, or display devices.

Certain embodiments also relate to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs) such as dynamic RAM (DRAM), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description herein. Furthermore, certain embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of such embodiments as described herein.

In addition to what is described herein, various modifications may be made to the disclosed embodiments and implementations thereof without departing from their scope. Accordingly, the descriptions and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the present invention should be defined only by reference to the appended claims. |
Software instructions are executed on a processor within a computer system to configure a streaming engine with stream parameters (4700) to define a multidimensional array. The stream parameters define a size for each dimension of the multidimensional array and a pad value indicator. Data is fetched from a memory (4702) coupled to the streaming engine responsive to the stream parameters. A stream of vectors is formed (4703) for the multidimensional array responsive to the stream parameters from the data fetched from memory. A padded stream vector is formed (4706) that includes a specified pad value without accessing the pad value from system memory. |
CLAIMS
What is claimed is:
1. A method of operating a streaming engine in a computer system, the method comprising:
receiving stream parameters into control logic of the streaming engine to define a multidimensional array, wherein the stream parameters define a size for each dimension of the multidimensional array and a pad value indicator;
fetching data from a memory coupled to the streaming engine responsive to the stream parameters;
forming a stream of vectors for the multidimensional array responsive to the stream parameters from the data fetched from memory; and
forming a padded stream vector for the stream of vectors that includes a pad value specified by the pad value indicator without fetching respective pad data from the memory.
2. The method of claim 1, in which the stream parameters define a selected dimension of the multidimensional array, and in which the padded stream vector is formed in the selected dimension of the stream of vectors.
3. The method of claim 1, in which the pad value indicator specifies one of a minimum value, a maximum value, or zero.
4. The method of claim 1, in which the stream parameters include an element size of the array, a number of elements to include in each vector of a stream, and a number of vectors to include in the stream for each dimension of the array.
5. The method of claim 4, in which the pad value forms a last element of the stream of vectors.
6. The method of claim 1, further comprising suppressing access to an address lookaside buffer while forming the padded stream vector.
7. A system comprising:
a system memory;
a streaming engine coupled to the system memory, wherein the streaming engine comprises:
a stream template register to hold stream parameters, wherein the stream parameters define a size for each dimension of a multidimensional array and a pad value indicator;
address generation logic coupled to the stream template register;
memory interface logic coupled to receive a stream of addresses from the address generation logic and coupled to access the system memory; and
control logic coupled to the memory interface logic to form a padded stream vector for the stream of vectors that includes a pad value specified by the pad value indicator without fetching respective pad data from the memory.
8. The system of claim 7, wherein the control logic is operable to suppress a respective access to the system memory responsive to the stream parameters while forming the padded stream vector in the stream of vectors.
9. The system of claim 7, further comprising a translation lookaside buffer coupled to the address generation logic, wherein the control logic is operable to suppress an access to the translation lookaside buffer by the address generation logic for forming the padded stream vector for the stream of vectors.
10. The system of claim 7, in which all elements in the padded stream vector have a value of zero.
11. The system of claim 7, in which an element in a padded stream vector has a pad value that is specified by the stream parameters to be one of a minimum value or a maximum value.
12. The system of claim 7, further comprising:
a register file with inputs coupled to receive a vector fetched from the system memory; and
an alignment network with inputs coupled to outputs of the register file, the alignment network coupled to the control logic for control, the alignment network having outputs coupled to outputs of the streaming engine to provide the padded stream vector responsive to the pad value indicator.
13. The system of claim 7 being a system on a chip (SoC), further comprising a processing unit coupled to the outputs of the streaming engine to receive the stream vectors.
14. A system comprising:
a system memory;
a streaming engine coupled to access the system memory to form a stream of stream vectors, wherein the streaming engine is operable to insert a padded stream vector having a specified pad value into the stream of stream vectors; and
a processing unit coupled to outputs of the streaming engine to receive the stream of
stream vectors.
15. The system of claim 14, in which the streaming engine comprises:
a stream template register to hold stream parameters, wherein the stream parameters define a size for each dimension of a multidimensional array and a pad value indicator; and
control logic coupled to receive the stream parameters from the stream template register, the control logic being operable to insert the specified pad value into a padded stream vector.
16. The system of claim 15, further comprising:
address generation logic coupled to the stream template register; and
memory interface logic coupled to receive a stream of addresses from the address generation logic and coupled to access the system memory;
wherein the control logic is operable to suppress an access to the system memory responsive to the stream parameters while forming the padded stream vector.
17. The system of claim 16, further comprising a translation lookaside buffer coupled to the address generation logic, wherein the control logic is operable to suppress an access to the translation lookaside buffer by the address generation logic while forming the padded stream vector.
18. The system of claim 14, in which all elements in the padded stream vector have a value of zero.
19. The system of claim 14, in which elements in a padded stream vector have a pad value that is one of a minimum value and a maximum value.
20. The system of claim 14, further comprising:
a register file with inputs coupled to receive a vector fetched from the system memory; and
an alignment network with inputs coupled to outputs of the register file, the alignment network coupled to the control logic for control, the alignment network having outputs coupled to outputs of the streaming engine to provide the padded stream vector responsive to the stream parameters. |
INSERTING PREDEFINED PAD VALUES INTO A STREAM OF VECTORS[0001] This relates to using a streaming engine to insert pad values into a stream of vectors. BACKGROUND[0002] Digital signal processors (DSP) are optimized for processing streams of data that may be derived from various input signals, such as sensor data, a video stream, a voice channel, radar signals, biomedical signals, etc. Digital signal processors operating on real-time data may receive an input data stream, perform a filter function on the data stream (such as encoding or decoding) and output a transformed data stream. The system is called real-time because the application fails if the transformed data stream is not available for output when scheduled. Some video encoding requires a predictable but non-sequential input data pattern. Such applications may require memory accesses to load data registers in a data register file and then supply data from the data registers to functional units which perform the data processing.[0003] One or more DSP processing cores can be combined with various peripheral circuits, blocks of memory, etc. on a single integrated circuit (IC) die to form a system on chip (SoC). These systems can include multiple interconnected processors that share the use of on-chip and off-chip memory. A processor can include some combination of instruction cache (ICache) and data cache (DCache) to improve processing. Furthermore, multiple processors with shared memory can be incorporated in a single embedded system. The processors can physically share the same memory without accessing data or executing code located in the same memory locations or can use some portion of the shared memory as common shared memory.SUMMARY[0004] Methods and apparatus are provided for software instructions to be executed on a processor within a computer system to configure a streaming engine with stream parameters to define a multidimensional array. The stream parameters define a size for each dimension of the multidimensional array and a specified pad value indicator. Data is fetched from a memory coupled to the streaming engine responsive to the stream parameters. A stream of vectors is formed for the multidimensional array responsive to the stream parameters from the data fetched from memory. A padded stream vector is formed that includes a specified pad value without
accessing the pad value from system memory.BRIEF DESCRIPTION OF THE DRAWINGS[0005] FIG. 1 illustrates an example dual scalar/vector data path processor.[0006] FIG. 2 illustrates the registers and functional units in the dual scalar/vector data path processor illustrated in FIG. 1.[0007] FIG. 3 illustrates a global scalar register file.[0008] FIG. 4 illustrates a local scalar register file shared by arithmetic functional units.[0009] FIG. 5 illustrates a local scalar register file shared by multiply functional units.[0010] FIG. 6 illustrates a local scalar register file shared by load/store units.[0011] FIG. 7 illustrates a global vector register file.[0012] FIG. 8 illustrates a predicate register file.[0013] FIG. 9 illustrates a local vector register file shared by arithmetic functional units.[0014] FIG. 10 illustrates a local vector register file shared by multiply and correlation functional units.[0015] FIG. 11 illustrates pipeline phases of a processing unit.[0016] FIG. 12 illustrates sixteen instructions of a single fetch packet.[0017] FIG. 13 illustrates an example of the instruction coding of instructions.[0018] FIG. 14 illustrates bit coding of a condition code extension slot 0.[0019] FIG. 15 illustrates bit coding of a condition code extension slot 1.[0020] FIG. 16 illustrates bit coding of a constant extension slot 0.[0021] FIG. 17 is a partial block diagram illustrating constant extension.[0022] FIG. 18 illustrates carry control for SIMD operations.[0023] FIG. 19 illustrates a conceptual view of streaming engines.[0024] FIG. 20 illustrates a sequence of formatting operations.[0025] FIG. 21 illustrates an example of lane allocation in a vector.[0026] FIG. 22 illustrates an example of lane allocation in a vector.[0027] FIG. 23 illustrates a basic two-dimensional (2D) stream.[0028] FIG. 24 illustrates the order of elements within the example stream of FIG. 23.[0029] FIG. 25 illustrates extracting a smaller rectangle from a larger rectangle.[0030] FIG. 26 illustrates how an example streaming engine fetches a stream with a transposition granularity of 4 bytes.
[0031] FIG. 27 illustrates how an example streaming engine fetches a stream with a transposition granularity of 8 bytes.[0032] FIG. 28 illustrates the details of an example streaming engine.[0033] FIG. 29 illustrates an example stream template register.[0034] FIG. 30 illustrates sub-field definitions of the flags field of the example stream template register of FIG. 29.[0035] FIG. 31 illustrates an example of a vector length masking/group duplication block.[0036] FIG. 32 is a partial schematic diagram of an example of the generation of the streaming engine valid or invalid indication.[0037] FIG. 33 is a partial schematic diagram of a streaming engine address generator illustrating generation of the loop address and loop count.[0038] FIG. 34 illustrates a partial schematic diagram showing the streaming engine supply of data of this example.[0039] FIG. 35 illustrates a partial schematic diagram showing the streaming engine supply of valid data to the predicate unit.[0040] FIG. 36 is a block diagram of a system that includes a matrix multiplication accelerator and the streaming engine of FIG. 28.[0041] FIG. 37 illustrates an example of matrix multiplication.[0042] FIG. 38 is a more detailed block diagram of a portion of the streaming engine of FIG. 28.[0043] FIGS. 39, 40, 41, 42, and 43 illustrate example linear stream transfers by the streaming engine of FIG. 28.[0044] FIGS. 44A, 44B together illustrate how a partial matrix is augmented with null vectors by the streaming engine of FIG. 28 for matrix multiplication.[0045] FIG. 45 illustrates adding null vectors to a stream.[0046] FIGS. 46-47 illustrate formation of a stream by inserting null or predefined data vectors by the streaming engine of FIG. 28.[0047] FIG. 48 is a block diagram of a multiprocessor system that includes the streaming engine of FIG. 28.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS[0048] In the drawings, like elements are denoted by like reference numerals for consistency.
[0049] Digital signal processors (DSP) are optimized for processing streams of data that may be derived from various input signals, such as sensor data, a video stream, a voice channel, radar signals, biomedical signals, etc. Memory bandwidth and scheduling are concerns for digital signal processors operating on real-time data. An example DSP includes a streaming engine to improve memory bandwidth and data scheduling.[0050] One or more DSPs can be combined with various peripheral circuits, blocks of memory, etc. on a single integrated circuit (IC) die to form a system on chip (SoC). See, for example, “66AK2Hx Multicore Keystone™ DSP+ARM® System-on-Chip,” 2013, which is incorporated by reference herein.[0051] In some example processors, an autonomous streaming engine (SE) is coupled to the DSP. In this example, the streaming engine includes two closely coupled streaming engines that can manage two data streams simultaneously. In another example, the streaming engine is capable of managing only a single stream, while in other examples the streaming engine is capable of handling more than two streams. In each case, for each stream, the streaming engine includes an address generation stage, a data formatting stage, and some storage for formatted data waiting for consumption by the processor. In the examples described herein, addresses are derived from algorithms that can involve multi-dimensional loops, each dimension maintaining an iteration count. In one example, the streaming engine supports six levels of nested iteration. In other examples, more or fewer levels of iteration are supported.[0052] In some example processors, control logic and an alignment network enable the streaming engine to form null vectors or padded vectors and to insert them into a stream without accessing the null data or pad data from system memory.[0053] Several examples of forming null vectors and padded vectors are described in more detail with regard to FIGS. 36-48.[0054] An example DSP processor is described with reference to FIGS. 1-18. An example streaming engine capable of managing two data streams using six-dimensional nested loops is described with reference to FIGS. 19-35.[0055] FIG. 1 illustrates an example processor 100 that includes dual scalar/vector data paths 115, 117. As used herein, the term “vector” refers to a one-dimensional array of data elements that can be accessed and operated on as a unit. Processor 100 includes a streaming engine 125 that is described in more detail herein. Processor 100 includes separate level one instruction cache (L1I) 121 and level one data cache (L1D) 123. Processor 100 includes a level 2 (L2) combined instruction/data cache 130 that holds both instructions and data. FIG. 1 illustrates the connection between L1I cache 121 and L2 combined instruction/data cache 130 via 512-bit bus 142. FIG. 1 illustrates the connection between L1D cache 123 and L2 combined instruction/data cache 130 via 512-bit bus 145. In the example processor 100, L2 combined instruction/data cache 130 stores both instructions to back up L1I cache 121 and data to back up L1D cache 123. In this example, L2 combined instruction/data cache 130 is further connected to higher level cache and/or main memory using known or later developed memory system techniques not illustrated in FIG. 1. As used herein, the term “higher level” memory or cache refers to a next level in a memory hierarchy that is more distant from the processor, while the term “lower level” memory or cache refers to a level in the memory hierarchy that is closer to the processor. L1I cache 121, L1D cache 123, and L2 cache 130 may be implemented in different sizes in various examples. In this example, L1I cache 121 and L1D cache 123 are each 32K bytes, and L2 cache 130 is 1024K bytes. In the example processor 100, L1I cache 121, L1D cache 123 and L2 combined instruction/data cache 130 are formed on a single integrated circuit. This single integrated circuit optionally includes other circuits.[0056] Processing unit core 110 fetches instructions from L1I cache 121 as controlled by instruction fetch unit 111. Instruction fetch unit 111 determines the next instructions to be executed and recalls a fetch packet sized set of such instructions. The nature and size of fetch packets are further detailed below. Instructions are directly fetched from L1I cache 121 upon a cache hit if the instructions are stored in L1I cache 121. Upon a cache miss occurring when the specified instructions are not stored in L1I cache 121, the instructions are sought in L2 combined cache 130. In this example, the size of a cache line in L1I cache 121 equals the size of a fetch packet which is 512 bits. The memory locations of these instructions are either a hit in L2 combined cache 130 or a miss. A hit is serviced from L2 combined cache 130. A miss is serviced from a higher level of cache (not illustrated) or from main memory (not illustrated). In this example, the requested instruction is simultaneously supplied to both L1I cache 121 and processing unit core 110 to speed use.[0057] In this example, processing unit core 110 includes multiple functional units to perform instruction specified data processing tasks. Instruction dispatch unit 112 determines the target functional unit of each fetched instruction. In this example, processing unit 110 operates as a
very long instruction word (VLIW) processor capable of operating on multiple instructions in corresponding functional units simultaneously. A compiler organizes instructions in execute packets that are executed together. Instruction dispatch unit 112 directs each instruction to its target functional unit. The functional unit assigned to an instruction is completely specified by the instruction produced by the compiler. The hardware of processing unit core 110 has no part in the functional unit assignment. In this example, instruction dispatch unit 112 operates on several instructions in parallel. The number of such parallel instructions is set by the size of the execute packet. This is further described herein.[0058] One part of the dispatch task of instruction dispatch unit 112 is determining whether the instruction is to execute on a functional unit in scalar data path side A 115 or vector data path side B 116. An instruction bit within each instruction called the s bit determines which data path the instruction controls. This is further described herein.[0059] Instruction decode unit 113 decodes each instruction in a current execute packet. Decoding includes identification of the functional unit performing the instruction, identification of registers used to supply data for the corresponding data processing operation from among possible register files, and identification of the register destination of the results of the corresponding data processing operation. As further described below, instructions can include a constant field in place of one register number operand field. The results of this decoding are signals for control of the target functional unit to perform the data processing operation specified by the corresponding instruction on the specified data.[0060] Processing unit core 110 includes control registers 114. Control registers 114 store information for control of the functional units in scalar data path side A 115 and vector data path side B 116. This information may include mode information or the like.[0061] The decoded instructions from instruction decode unit 113 and information stored in control registers 114 are supplied to scalar data path side A 115 and vector data path side B 116. As a result, functional units within scalar data path side A 115 and vector data path side B 116 perform instruction specified data processing operations upon instruction specified data and store the results in an instruction specified data register or registers. Each of scalar data path side A 115 and vector data path side B 116 includes multiple functional units that operate in parallel. These are further described below in conjunction with FIG. 2. There is a data path 117 between scalar data path side A 115 and vector data path side B 116 permitting data exchange.
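For illustration, a minimal C sketch of the s-bit routing decision described in paragraph [0058]; the bit position (bit 1 of the 32-bit instruction) follows the instruction coding of FIG. 13 discussed later, and the type and function names are hypothetical rather than part of the design:

    #include <stdint.h>

    /* Hypothetical model of the s-bit dispatch decision: bit 1 of each
       32-bit instruction selects the data path side the instruction controls. */
    typedef enum { SIDE_A_SCALAR = 0, SIDE_B_VECTOR = 1 } DatapathSide;

    static DatapathSide dispatch_side(uint32_t insn)
    {
        /* s = 0 selects scalar data path side A 115;
           s = 1 selects vector data path side B 116. */
        return (DatapathSide)((insn >> 1) & 1u);
    }

Note that this sketch only steers an instruction to a side; the target functional unit itself is fixed by the compiler-produced instruction, not chosen by hardware.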
[0062] Processing unit core 110 includes further non-instruction-based modules. Emulation unit 118 permits determination of the machine state of processing unit core 110 in response to instructions. This capability can be employed for algorithmic development. Interrupts/exceptions unit 119 enables processing unit core 110 to be responsive to external, asynchronous events (interrupts) and to respond to attempts to perform improper operations (exceptions).[0063] Processor 100 includes streaming engine 125. Streaming engine 125 supplies two data streams from predetermined addresses cached in L2 combined cache 130 to register files of vector data path side B of processing unit core 110. This provides controlled data movement from memory (as cached in L2 combined cache 130) directly to functional unit operand inputs. This is further described herein.[0064] FIG. 1 illustrates example data widths of busses between various parts. L1I cache 121 supplies instructions to instruction fetch unit 111 via bus 141. Bus 141 is a 512-bit bus in this example. Bus 141 is unidirectional from L1I cache 121 to processing unit 110. L2 combined cache 130 supplies instructions to L1I cache 121 via bus 142. Bus 142 is a 512-bit bus in this example. Bus 142 is unidirectional from L2 combined cache 130 to L1I cache 121.[0065] L1D cache 123 exchanges data with register files in scalar data path side A 115 via bus 143. Bus 143 is a 64-bit bus in this example. L1D cache 123 exchanges data with register files in vector data path side B 116 via bus 144. Bus 144 is a 512-bit bus in this example. Busses 143 and 144 are illustrated as bidirectional supporting both processing unit core 110 data reads and data writes. L1D cache 123 exchanges data with L2 combined cache 130 via bus 145. Bus 145 is a 512-bit bus in this example. Bus 145 is illustrated as bidirectional supporting cache service for both processing unit core 110 data reads and data writes.[0066] Processor data requests are directly fetched from L1D cache 123 upon a cache hit (if the requested data is stored in L1D cache 123). Upon a cache miss (the specified data is not stored in L1D cache 123), the data is sought in L2 combined cache 130. The memory locations of the requested data are either a hit in L2 combined cache 130 or a miss. A hit is serviced from L2 combined cache 130. A miss is serviced from another level of cache (not illustrated) or from main memory (not illustrated). The requested data may be simultaneously supplied to both L1D cache 123 and processing unit core 110 to speed use.[0067] L2 combined cache 130 supplies data of a first data stream to streaming engine 125 via bus 146. Bus 146 is a 512-bit bus in this example. Streaming engine 125 supplies data of the first data stream to functional units of vector data path side B 116 via bus 147. Bus 147 is a 512-bit bus in this example. L2 combined cache 130 supplies data of a second data stream to streaming engine 125 via bus 148. Bus 148 is a 512-bit bus in this example. Streaming engine 125 supplies data of this second data stream to functional units of vector data path side B 116 via bus 149, which is a 512-bit bus in this example. Busses 146, 147, 148 and 149 are illustrated as unidirectional from L2 combined cache 130 to streaming engine 125 and to vector data path side B 116 in this example.[0068] Streaming engine data requests are directly fetched from L2 combined cache 130 upon a cache hit (if the requested data is stored in L2 combined cache 130). Upon a cache miss (the specified data is not stored in L2 combined cache 130), the data is sought from another level of cache (not illustrated) or from main memory (not illustrated). It is technically feasible in some examples for L1D cache 123 to cache data not stored in L2 combined cache 130. If such operation is supported, then upon a streaming engine data request that is a miss in L2 combined cache 130, L2 combined cache 130 snoops L1D cache 123 for the streaming engine requested data. If L1D cache 123 stores the data, the snoop response includes the data, which is then supplied to service the streaming engine request. If L1D cache 123 does not store the data, the snoop response indicates this and L2 combined cache 130 services the streaming engine request from another level of cache (not illustrated) or from main memory (not illustrated).[0069] In this example, both L1D cache 123 and L2 combined cache 130 can be configured as selected amounts of cache or directly addressable memory in accordance with U.S. Patent No. 6,606,686 entitled UNIFIED MEMORY SYSTEM ARCHITECTURE INCLUDING CACHE AND DIRECTLY ADDRESSABLE STATIC RANDOM ACCESS MEMORY, which is incorporated by reference herein.[0070] In this example, processor 100 is fabricated on an integrated chip (IC) that is mounted on a ball grid array (BGA) substrate. A BGA substrate and IC die together may be referred to as “BGA package,” “IC package,” “integrated circuit,” “IC,” “chip,” “microelectronic device,” or similar terminology. The BGA package may include encapsulation material to cover and protect the IC die from damage. In another example, other types of known or later developed packaging techniques may be used with processor 100.[0071] FIG. 2 illustrates further details of functional units and register files within scalar data
path side A 115 and vector data path side B 116. Scalar data path side A 115 includes L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226. Scalar data path side A 115 includes global scalar register file 211, L1/S1 local register file 212, M1/N1 local register file 213 and D1/D2 local register file 214. Vector data path side B 116 includes L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246. Vector data path side B 116 includes global vector register file 231, L2/S2 local register file 232, M2/N2/C local register file 233 and predicate register file 234. Which functional units can read from or write to which register files is described in more detail herein.[0072] Scalar data path side A 115 includes L1 unit 221. L1 unit 221 generally accepts two 64-bit operands and produces one 64-bit result. The two operands are each recalled from an instruction specified register in either global scalar register file 211 or L1/S1 local register file 212. L1 unit 221 performs the following instruction selected operations: 64-bit add/subtract operations; 32-bit min/max operations; 8-bit Single Instruction Multiple Data (SIMD) instructions such as sum of absolute value, minimum and maximum determinations; circular min/max operations; and various move operations between register files. The result is written into an instruction specified register of global scalar register file 211, L1/S1 local register file 212, M1/N1 local register file 213 or D1/D2 local register file 214.[0073] Scalar data path side A 115 includes S1 unit 222. S1 unit 222 generally accepts two 64-bit operands and produces one 64-bit result. The two operands are each recalled from an instruction specified register in either global scalar register file 211 or L1/S1 local register file 212. In this example, S1 unit 222 performs the same type operations as L1 unit 221. In another example, there may be slight variations between the data processing operations supported by L1 unit 221 and S1 unit 222. The result is written into an instruction specified register of global scalar register file 211, L1/S1 local register file 212, M1/N1 local register file 213 or D1/D2 local register file 214.[0074] Scalar data path side A 115 includes M1 unit 223. M1 unit 223 generally accepts two 64-bit operands and produces one 64-bit result. The two operands are each recalled from an instruction specified register in either global scalar register file 211 or M1/N1 local register file 213. In this example, M1 unit 223 performs the following instruction selected operations: 8-bit multiply operations; complex dot product operations; 32-bit bit count operations; complex conjugate multiply operations; and bit-wise logical operations, moves, adds and subtracts. The result is written into an instruction specified register of global scalar register file 211, L1/S1 local register file 212, M1/N1 local register file 213 or D1/D2 local register file 214.[0075] Scalar data path side A 115 includes N1 unit 224. N1 unit 224 generally accepts two 64-bit operands and produces one 64-bit result. The two operands are each recalled from an instruction specified register in either global scalar register file 211 or M1/N1 local register file 213. In this example, N1 unit 224 performs the same type operations as M1 unit 223. There are also double operations (called dual issued instructions) that employ both the M1 unit 223 and the N1 unit 224 together. The result is written into an instruction specified register of global scalar register file 211, L1/S1 local register file 212, M1/N1 local register file 213 or D1/D2 local register file 214.[0076] Scalar data path side A 115 includes D1 unit 225 and D2 unit 226. D1 unit 225 and D2 unit 226 generally each accept two 64-bit operands and each produce one 64-bit result. D1 unit 225 and D2 unit 226 generally perform address calculations and corresponding load and store operations. D1 unit 225 is used for scalar loads and stores of 64 bits. D2 unit 226 is used for vector loads and stores of 512 bits. In this example, D1 unit 225 and D2 unit 226 also perform: swapping, pack and unpack on the load and store data; 64-bit SIMD arithmetic operations; and 64-bit bit-wise logical operations. D1/D2 local register file 214 stores base and offset addresses used in address calculations for the corresponding loads and stores. The two operands are each recalled from an instruction specified register in either global scalar register file 211 or D1/D2 local register file 214. The calculated result is written into an instruction specified register of global scalar register file 211, L1/S1 local register file 212, M1/N1 local register file 213 or D1/D2 local register file 214.[0077] Vector data path side B 116 includes L2 unit 241. L2 unit 241 generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file 231, L2/S2 local register file 232 or predicate register file 234. In this example, L2 unit 241 performs instructions similar to L1 unit 221 except on wider 512-bit data. The result may be written into an instruction specified register of global vector register file 231, L2/S2 local register file 232, M2/N2/C local register file 233 or predicate register file 234.[0078] Vector data path side B 116 includes S2 unit 242. S2 unit 242 generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file 231, L2/S2 local register file 232 or predicate register file 234. In this example, S2 unit 242 performs instructions similar to S1 unit 222. The result is written into an instruction specified register of global vector register file 231, L2/S2 local register file 232, M2/N2/C local register file 233 or predicate register file 234.[0079] Vector data path side B 116 includes M2 unit 243. M2 unit 243 generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file 231 or M2/N2/C local register file 233. In this example, M2 unit 243 performs instructions similar to M1 unit 223 except on wider 512-bit data. The result is written into an instruction specified register of global vector register file 231, L2/S2 local register file 232 or M2/N2/C local register file 233.[0080] Vector data path side B 116 includes N2 unit 244. N2 unit 244 generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file 231 or M2/N2/C local register file 233. In this example, N2 unit 244 performs the same type operations as M2 unit 243. There are also double operations (called dual issued instructions) that employ both M2 unit 243 and the N2 unit 244 together. The result is written into an instruction specified register of global vector register file 231, L2/S2 local register file 232 or M2/N2/C local register file 233.[0081] Vector data path side B 116 includes correlation (C) unit 245. C unit 245 generally accepts two 512-bit operands and produces one 512-bit result. The two operands are each recalled from an instruction specified register in either global vector register file 231 or M2/N2/C local register file 233. In this example, C unit 245 performs "Rake" and "Search" instructions that are used for WCDMA (wideband code division multiple access) encoding/decoding. In this example, C unit 245 can perform up to 512 multiplies per clock cycle of a 2-bit PN (pseudorandom number) and 8-bit I/Q (complex number), 8-bit and 16-bit Sum-of-Absolute-Difference (SAD) calculations, up to 512 SADs per clock cycle, horizontal add and horizontal min/max instructions, and vector permute instructions. C unit 245 also contains 4 vector control registers (CUCR0 to CUCR3) used to control certain operations of C unit 245 instructions. Control registers CUCR0 to CUCR3 are used as operands in certain C unit 245 operations. In some examples, control registers CUCR0 to CUCR3 are used in control of a general permutation instruction (VPERM), and as masks for SIMD multiple DOT product operations (DOTPM) and SIMD multiple Sum-of-Absolute-Difference (SAD) operations. In
further examples, control register CUCR0 is used to store the polynomials for Galois Field Multiply operations (GFMPY) and control register CUCR1 is used to store the Galois field polynomial generator function.[0082] Vector data path side B 116 includes P unit 246. Vector predicate (P) unit 246 performs basic logic operations on registers of local predicate register file 234. P unit 246 has direct access to read from and write to predicate register file 234. The logic operations include single register unary operations such as NEG (negate) which inverts each bit of the single register, BITCNT (bit count) which returns a count of the number of bits in the single register having a predetermined digital state (1 or 0), RMBD (right most bit detect) which returns a number of bit positions from the least significant bit position (right most) to a first bit position having a predetermined digital state (1 or 0), DECIMATE which selects every instruction specified Nth (1, 2, 4, etc.) bit to output, and EXPAND which replicates each bit an instruction specified N times (2, 4, etc.). The logic operations also include two register binary operations such as AND which is a bitwise AND of data of the two registers, NAND which is a bitwise AND and negate of data of the two registers, OR which is a bitwise OR of data of the two registers, NOR which is a bitwise OR and negate of data of the two registers, and XOR which is exclusive OR of data of the two registers. The logic operations include transfer of data from a predicate register of predicate register file 234 to another specified predicate register or to a specified data register in global vector register file 231. One use of P unit 246 is manipulation of the SIMD vector comparison results for use in control of a further SIMD vector operation. The BITCNT instruction can be used to count the number of 1's in a predicate register to determine the number of valid data elements from a predicate register.
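For illustration, minimal C models of two of the P unit 246 operations described above, BITCNT and RMBD, operating on a 64-bit predicate register value; these are sketches of the described behavior, not the hardware implementation, and the function names are hypothetical:

    #include <stdint.h>

    /* BITCNT: count of bits in the register having the given state (1 or 0). */
    static int bitcnt(uint64_t reg, int state)
    {
        uint64_t v = state ? reg : ~reg;
        int n = 0;
        while (v) { v &= v - 1; n++; }   /* clear the lowest set bit */
        return n;
    }

    /* RMBD: number of bit positions from the least significant bit to the
       first bit having the given state. The return value for a register
       with no such bit is an assumption of this model. */
    static int rmbd(uint64_t reg, int state)
    {
        uint64_t v = state ? reg : ~reg;
        if (v == 0) return 64;
        int pos = 0;
        while (((v >> pos) & 1u) == 0) pos++;
        return pos;
    }

As the text notes, BITCNT applied to a predicate register yields the number of valid data elements indicated by that register.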
[0083] FIG. 3 illustrates global scalar register file 211. There are 16 independent 64-bit wide scalar registers designated A0 to A15. Each register of global scalar register file 211 can be read from or written to as 64-bits of scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226) can read or write to global scalar register file 211. Global scalar register file 211 can be read from as 32-bits or as 64-bits and written to as 64-bits. The instruction executing determines the read data size. Vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246) can read from global scalar register file 211 via cross path 117 under restrictions that are described below.[0084] FIG. 4 illustrates D1/D2 local register file 214. There are sixteen independent 64-bit wide scalar registers designated D0 to D15. Each register of D1/D2 local register file 214 is read from or written to as 64-bits of scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226) can write to D1/D2 local scalar register file 214. Only D1 unit 225 and D2 unit 226 can read from D1/D2 local scalar register file 214. Data stored in D1/D2 local scalar register file 214 can include base addresses and offset addresses used in address calculation.[0085] FIG. 5 illustrates L1/S1 local register file 212. In this example, L1/S1 local register file 212 includes eight independent 64-bit wide scalar registers designated AL0 to AL7. In this example, the instruction coding permits L1/S1 local register file 212 to include up to 16 registers. In this example, eight registers are implemented to reduce circuit size and complexity. Each register of L1/S1 local register file 212 can be read from or written to as 64-bits of scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226) can write to L1/S1 local scalar register file 212. L1 unit 221 and S1 unit 222 can read from L1/S1 local scalar register file 212.[0086] FIG. 6 illustrates M1/N1 local register file 213. In this example, eight independent 64-bit wide scalar registers designated AM0 to AM7 are implemented. In this example, the instruction coding permits M1/N1 local register file 213 to include up to 16 registers. In this example, eight registers are implemented to reduce circuit size and complexity. Each register of M1/N1 local register file 213 can be read from or written to as 64-bits of scalar data. All scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226) can write to M1/N1 local scalar register file 213. M1 unit 223 and N1 unit 224 can read from M1/N1 local scalar register file 213.[0087] FIG. 7 illustrates global vector register file 231. There are sixteen independent 512-bit wide vector registers. Each register of global vector register file 231 can be read from or written to as 64-bits of scalar data designated B0 to B15. Each register of global vector register file 231 can be read from or written to as 512-bits of vector data designated VB0 to VB15. The instruction type determines the data size. All vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246) can read or write to global vector register file 231. Scalar data path side A 115 functional units (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226) can read from global vector
register file 231 via cross path 117 under restrictions that are described below.[0088] FIG. 8 illustrates predicate (P) local register file 234. There are eight independent 64-bit wide registers designated P0 to P7. Each register of P local register file 234 can be read from or written to as 64-bits of scalar data. Vector data path side B 116 functional units L2 unit 241, S2 unit 242, C unit 245 and P unit 246 can write to P local register file 234. L2 unit 241, S2 unit 242 and P unit 246 can read from P local scalar register file 234. One use of P local register file 234 is writing one-bit SIMD vector comparison results from L2 unit 241, S2 unit 242 or C unit 245, manipulation of the SIMD vector comparison results by P unit 246, and use of the manipulated results in control of a further SIMD vector operation.[0089] FIG. 9 illustrates L2/S2 local register file 232. In this example, eight independent 512-bit wide vector registers are implemented. In this example, the instruction coding permits L2/S2 local register file 232 to include up to sixteen registers. In this example, eight registers are implemented to reduce circuit size and complexity. Each register of L2/S2 local vector register file 232 can be read from or written to as 64-bits of scalar data designated BL0 to BL7. Each register of L2/S2 local vector register file 232 can be read from or written to as 512-bits of vector data designated VBL0 to VBL7. The instruction type determines the data size. All vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246) can write to L2/S2 local vector register file 232. L2 unit 241 and S2 unit 242 can read from L2/S2 local vector register file 232.[0090] FIG. 10 illustrates M2/N2/C local register file 233. In this example, eight independent 512-bit wide vector registers are implemented. In this example, the instruction coding permits M2/N2/C local register file 233 to include up to sixteen registers. In this example, eight registers are implemented to reduce circuit size and complexity. Each register of M2/N2/C local vector register file 233 can be read from or written to as 64-bits of scalar data designated BM0 to BM7. Each register of M2/N2/C local vector register file 233 can be read from or written to as 512-bits of vector data designated VBM0 to VBM7. All vector data path side B 116 functional units (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246) can write to M2/N2/C local vector register file 233. M2 unit 243, N2 unit 244 and C unit 245 can read from M2/N2/C local vector register file 233.
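For illustration, the read and write permissions of FIGS. 3-10 can be summarized as a small access table. The following C sketch encodes the rules stated in the preceding paragraphs; the names and the bitmask representation are hypothetical, and reads through cross path 117 are omitted:

    #include <stdint.h>

    enum {
        U_L1 = 1 << 0, U_S1 = 1 << 1, U_M1 = 1 << 2, U_N1 = 1 << 3,
        U_D1 = 1 << 4, U_D2 = 1 << 5,                       /* side A units */
        U_L2 = 1 << 6, U_S2 = 1 << 7, U_M2 = 1 << 8, U_N2 = 1 << 9,
        U_C  = 1 << 10, U_P = 1 << 11                       /* side B units */
    };
    #define SIDE_A (U_L1 | U_S1 | U_M1 | U_N1 | U_D1 | U_D2)
    #define SIDE_B (U_L2 | U_S2 | U_M2 | U_N2 | U_C | U_P)

    typedef struct { const char *name; uint16_t readers, writers; } RegFileAccess;

    static const RegFileAccess access_table[] = {
        { "global scalar 211",  SIDE_A,             SIDE_A },
        { "global vector 231",  SIDE_B,             SIDE_B },
        { "L1/S1 local 212",    U_L1 | U_S1,        SIDE_A },
        { "M1/N1 local 213",    U_M1 | U_N1,        SIDE_A },
        { "D1/D2 local 214",    U_D1 | U_D2,        SIDE_A },
        { "L2/S2 local 232",    U_L2 | U_S2,        SIDE_B },
        { "M2/N2/C local 233",  U_M2 | U_N2 | U_C,  SIDE_B },
        { "predicate 234",      U_L2 | U_S2 | U_P,  U_L2 | U_S2 | U_C | U_P },
    };

    static int can_read(const RegFileAccess *rf, uint16_t unit)
    {
        return (rf->readers & unit) != 0;
    }

For example, can_read(&access_table[4], U_M1) evaluates to 0, matching the statement that only D1 unit 225 and D2 unit 226 can read from D1/D2 local scalar register file 214.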
[0091] The provision of global register files accessible by all functional units of a side and local register files accessible by some of the functional units of a side is a design choice. In another example, a different accessibility provision could be made, such as employing one type of register file corresponding to the global register files described herein.[0092] Cross path 117 permits limited exchange of data between scalar data path side A 115 and vector data path side B 116. During each operational cycle one 64-bit data word can be recalled from global scalar register file 211 for use as an operand by one or more functional units of vector data path side B 116 and one 64-bit data word can be recalled from global vector register file 231 for use as an operand by one or more functional units of scalar data path side A 115. Any scalar data path side A 115 functional unit (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226) can read a 64-bit operand from global vector register file 231. This 64-bit operand is the least significant bits of the 512-bit data in the accessed register of global vector register file 231. Multiple scalar data path side A 115 functional units can employ the same 64-bit cross path data as an operand during the same operational cycle. However, a single 64-bit operand is transferred from vector data path side B 116 to scalar data path side A 115 in a single operational cycle. Any vector data path side B 116 functional unit (L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246) can read a 64-bit operand from global scalar register file 211. If the corresponding instruction is a scalar instruction, the cross-path operand data is treated as a 64-bit operand. If the corresponding instruction is a vector instruction, the upper 448 bits of the operand are zero filled. Multiple vector data path side B 116 functional units can employ the same 64-bit cross path data as an operand during the same operational cycle. In one example, a single 64-bit operand is transferred from scalar data path side A 115 to vector data path side B 116 in a single operational cycle.
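For illustration, a minimal C sketch of the cross-path operand handling just described, in which a 64-bit word read across cross path 117 becomes the least significant 64 bits of a 512-bit vector operand with the upper 448 bits zero filled; the vector representation is a hypothetical model:

    #include <stdint.h>
    #include <string.h>

    typedef struct { uint64_t lane[8]; } Vec512;   /* 8 x 64 = 512 bits */

    static Vec512 cross_path_vector_operand(uint64_t word64)
    {
        Vec512 v;
        memset(&v, 0, sizeof v);   /* zero fill the upper 448 bits */
        v.lane[0] = word64;        /* least significant 64 bits    */
        return v;
    }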
[0093] Streaming engine 125 (FIG. 1) transfers data in certain restricted circumstances. Streaming engine 125 controls two data streams. A stream includes a sequence of elements of a particular type. Programs that operate on streams read the data sequentially, operating on each element in turn. Every stream has the following basic properties: the stream data have a well-defined beginning and ending in time; the stream data have fixed element size and type throughout the stream; and the stream data have a fixed sequence of elements. Once a stream is opened, streaming engine 125 performs the following operations: calculates the address; fetches the defined data type from L2 unified cache 130 (which may require cache service from a higher level memory, e.g., in the event of a cache miss in L2); performs data type manipulation such as zero extension, sign extension, data element sorting/swapping such as matrix transposition; and delivers the data directly to the programmed data register file within processor core 110. Streaming engine 125 is thus useful for real-time digital filtering operations on well-behaved data. Streaming engine 125 frees the corresponding processor from these memory fetch tasks, thus enabling other processing functions.[0094] Streaming engine 125 provides several benefits. For example, streaming engine 125 permits multi-dimensional memory accesses, increases the available bandwidth to the functional units, minimizes the number of cache miss stalls since the stream buffer bypasses L1D cache 123, and reduces the number of scalar operations required to maintain a loop. Streaming engine 125 also manages address pointers and handles address generation which frees up the address generation instruction slots and D1 unit 225 and D2 unit 226 for other computations.
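For illustration, the multi-dimensional, iteration-counted address generation described above for streaming engine 125 can be modeled as an odometer over nested loops. This C sketch is a simplified model under stated assumptions: the field names (icnt, dim) are illustrative and do not reproduce the stream template layout of FIG. 29, and dim[0] would typically equal the element size for contiguous innermost elements:

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_DIMS 6   /* six levels of nested iteration per [0051] */

    typedef struct {
        const uint8_t *base;      /* starting address of the stream       */
        size_t elem_bytes;        /* fixed element size throughout [0093] */
        uint32_t icnt[MAX_DIMS];  /* iteration count per dimension        */
        int32_t  dim[MAX_DIMS];   /* signed byte stride per dimension     */
        int ndims;
    } StreamTemplate;

    /* Visit every element in stream order, innermost dimension first. */
    static void walk_stream(const StreamTemplate *t,
                            void (*consume)(const uint8_t *elem))
    {
        uint32_t idx[MAX_DIMS] = { 0 };
        for (;;) {
            const uint8_t *addr = t->base;
            for (int d = 0; d < t->ndims; d++)
                addr += (int64_t)idx[d] * t->dim[d];
            consume(addr);
            int d = 0;   /* odometer-style carry across the nested loops */
            while (d < t->ndims && ++idx[d] == t->icnt[d]) {
                idx[d] = 0;
                d++;
            }
            if (d == t->ndims)
                return;  /* all dimensions exhausted: the stream ends */
        }
    }

In the actual streaming engine this address stream feeds the fetch and formatting stages; the model above only captures the element ordering.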
packets aligned on 512-bit boundaries coupled with a fixed instruction packet fetch. Conversely, variable length instructions require an initial step of locating each instruction boundary before decoding. A fixed length instruction set generally permits more regular layout of instruction fields which simplifies the construction of each decoder which is an advantage for a wide issue VLIW processor.[0099] The execution of the individual instructions is partially controlled by a p bit in each instruction. In this example, the p bit is bit 0 of the 32-bit wide slot. The p bit determines whether an instruction executes in parallel with the next instruction. In this example, instructions are scanned from lower to higher address. If the p bit of an instruction is 1, then the next following instruction (higher memory address) is executed in parallel with (in the same cycle as) that instruction. If the p bit of an instruction is 0, then the next following instruction is executed in the cycle after the instruction.[00100] Processor core 110 (FIG. 1) and L1I cache 121 pipelines (FIG. 1) are de-coupled from each other. Fetch packet returns from L1I cache can take a different number of clock cycles, depending on external circumstances such as whether there is a hit in L II cache 121 or a hit in L2 combined cache 130. Therefore, program access stage 1112 can take several clock cycles instead of one clock cycle as in the other stages.[00101] The instructions executing in parallel constitute an execute packet. In this example, an execute packet can contain up to sixteen 32-bit wide slots for sixteen instructions. No two instructions in an execute packet can use the same functional unit. A slot is one of five types: 1) a self-contained instruction executed on one of the functional units of processor core 110 (LI unit 221, SI unit 222, Ml unit 223, N1 unit 224, D1 unit 225, D2 unit 226, L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C unit 245 and P unit 246); 2) a unitless instruction such as a NOP (no operation) instruction or multiple NOP instructions; 3) a branch instruction; 4) a constant field extension; and 5) a conditional code extension. Some of these slot types are further described herein.[00102] Dispatch and decode phases 1120 (FIG. 11) include instruction dispatch to appropriate execution unit (DS) stage 1121, instruction pre-decode (DC1) stage 1122, and instruction decode, operand read (DC2) stage 1123. During instruction dispatch to appropriate execution unit stage 1121, the fetch packets are split into execute packets and assigned to the appropriate functional units. During the instruction pre-decode stage 1122, the source registers,
destination registers, and associated paths are decoded for the execution of the instructions in the functional units. During the instruction decode, operand read stage 1123, more detailed unit decodes are performed and operands are read from the register files.[00103] Execution phase 1130 includes execution (El to E5) stages 1131 to 1135. Different types of instructions require different numbers of such stages to complete execution. The execution stages of the pipeline play an important role in understanding the device state at processor cycle boundaries.[00104] During El stage 1131, the conditions for the instructions are evaluated and operands are operated on. As illustrated in FIG. 11, El stage 1131 can receive operands from a stream buffer 1141 and one of the register files shown schematically as 1142. For load and store instructions, address generation is performed, and address modifications are written to a register file. For branch instructions, the branch fetch packet in PG phase is affected. As illustrated in FIG. 11, load and store instructions access memory here shown schematically as memory 1151. For single-cycle instructions, results are written to a destination register file when any conditions for the instructions are evaluated as true. If a condition is evaluated as false, the instruction does not write any results or have any pipeline operation after El stage 1131.[00105] During E2 stage 1132, load instructions send the address to memory. Store instructions send the address and data to memory. Single-cycle instructions that saturate results set the SAT bit in the control status register (CSR) if saturation occurs. For 2-cycle instructions, results are written to a destination register file.[00106] During E3 stage 1133, data memory accesses are performed. Any multiply instructions that saturate results set the SAT bit in the control status register (CSR) if saturation occurs. For 3-cycle instructions, results are written to a destination register file.[00107] During E4 stage 1134, load instructions bring data to the processor boundary. For 4-cycle instructions, results are written to a destination register file.[00108] During E5 stage 1135, load instructions write data into a register as illustrated schematically in FIG. 11 with input from memory 1151 to E5 stage 1135.[00109] FIG. 13 illustrates an example of the instruction coding 1300 of functional unit instructions used by this example. Each instruction includes 32 bits and controls the operation of one of the individually controllable functional units (LI unit 221, SI unit 222, Ml unit 223, N1 unit 224, D1 unit 225, D2 unit 226, L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, C
unit 245 and P unit 246).[00110] The creg field 1301 (bits 29 to 31) and the z bit 1302 (bit 28) are optional fields used in conditional instructions. The bits are used for conditional instructions to identify the predicate register and the condition. The z bit 1302 (bit 28) indicates whether the predication is based upon zero or not zero in the predicate register. If z = 1, the test is for equality with zero. If z = 0, the test is for nonzero. The case of creg = 0 and z = 0 is treated as true to allow unconditional instruction execution. The creg field 1301 and the z field 1302 are encoded in the instruction as shown in Table 1.Table 1[00111] Execution of a conditional instruction is conditional upon the value stored in the specified data register. The data register is in the global scalar register file 211 for all functional units. Note that "z" in the z bit column refers to the zero/not zero comparison selection noted above and "x" is a don't care state. This coding specifies a subset of the sixteen global registers as predicate registers which preserves bits in the instruction coding. Note that unconditional instructions do not have the optional bits. For unconditional instructions, the bits in fields 1301 and 1302 (28 to 31) are used as additional opcode bits.[00112] The dst field 1303 (bits 23 to 27) specifies a register in a corresponding register file as the destination of the instruction results.[00113] The src2/cst field 1304 (bits 18 to 22) has several meanings depending on the instruction opcode field (bits 3 to 12 for all instructions and also bits 28 to 31 for unconditional
instructions). One meaning specifies a register of a corresponding register file as the second operand. Another meaning is an immediate constant. Depending on the instruction type, the field 1304 is treated as an unsigned integer and zero extended to a specified data length or is treated as a signed integer and sign extended to the specified data length.[00114] The srcl field 1305 (bits 13 to 17) specifies a register in a corresponding register file as the first source operand.[00115] The opcode field 1306 (bits 3 to 12) for all instructions (and also bits 28 to 31 for unconditional instructions) specifies the type of instruction and designates appropriate instruction options including unambiguous designation of the functional unit used and operation performed. A detailed explanation of the opcode is beyond the scope of this description except for the instruction options described below.[00116] The e bit 1307 (bit 2) is used for immediate constant instructions where the constant can be extended. If e = 1, then the immediate constant is extended in a manner described below. If e = 0, then the immediate constant is not extended and the immediate constant is specified by the src2/cst field 1304 (bits 18 to 22). Note that the e bit 1307 is used for some instructions. Accordingly, with proper coding, the e bit 1307 can be omitted from some instructions and the bit can be used as an additional opcode bit.[00117] The s bit 1308 (bit 1) designates scalar data path side A 115 or vector data path side B 116. If s = 0, then scalar data path side A 115 is selected which limits the functional unit to LI unit 221, SI unit 222, Ml unit 223, N1 unit 224, D1 unit 225 and D2 unit 226 and the corresponding register files illustrated in FIG. 2. Similarly, s = 1 selects vector data path side B 116 which limits the functional unit to L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244, P unit 246 and the corresponding register file illustrated in FIG. 2.[00118] The p bit 1309 (bit 0) marks the execute packets. The p-bit determines whether the instruction executes in parallel with the following instruction. The p-bits are scanned from lower to higher address. If p = 1 for the current instruction, then the next instruction executes in parallel with the current instruction. If p = 0 for the current instruction, then the next instruction executes in the cycle after the current instruction. All instructions executing in parallel constitute an execute packet. An execute packet can contain up to sixteen instructions. Each instruction in an execute packet uses a different functional unit.[00119] There are two different condition code extension slots. Each execute packet can
contain one each of these unique 32-bit condition code extension slots which contains the 4-bit creg/z fields for the instructions in the same execute packet. FIG. 14 illustrates the coding for condition code extension slot 0 and FIG. 15 illustrates the coding for condition code extension slot 1.[00120] FIG. 14 illustrates the coding for condition code extension slot 0 having 32 bits. Field 1401 (bits 28 to 31) specifies 4 creg/z bits assigned to the LI unit 221 instruction in the same execute packet. Field 1402 (bits 27 to 24) specifies four creg/z bits assigned to the L2 unit 241 instruction in the same execute packet. Field 1403 (bits 20 to 23) specifies four creg/z bits assigned to the SI unit 222 instruction in the same execute packet. Field 1404 (bits 16 to 19) specifies four creg/z bits assigned to the S2 unit 242 instruction in the same execute packet. Field 1405 (bits 12 to 15) specifies four creg/z bits assigned to the D1 unit 225 instruction in the same execute packet. Field 1406 (bits 8 to 11) specifies four creg/z bits assigned to the D2 unit 226 instruction in the same execute packet. Field 1407 (bits 6 and 7) is unused/reserved. Field 1408 (bits 0 to 5) is coded as a set of unique bits (CCEX0) to identify the condition code extension slot 0. Once the unique ID of condition code extension slot 0 is detected, the corresponding creg/z bits are employed to control conditional execution of any LI unit 221, L2 unit 241, SI unit 222, S2 unit 242, D1 unit 225 and D2 unit 226 instruction in the same execution packet. The creg/z bits are interpreted as shown in Table 1. If the corresponding instruction is conditional (includes creg/z bits), the corresponding bits in the condition code extension slot 0 override the condition code bits in the instruction. Setting the creg/z bits equal to "0000" makes the instruction unconditional. Thus, a properly coded condition code extension slot 0 can make some corresponding instructions conditional and some unconditional.[00121] FIG. 15 illustrates the coding for condition code extension slot 1 having 32 bits. Field 1501 (bits 28 to 31) specifies four creg/z bits assigned to the Ml unit 223 instruction in the same execute packet. Field 1502 (bits 27 to 24) specifies four creg/z bits assigned to the M2 unit 243 instruction in the same execute packet. Field 1503 (bits 19 to 23) specifies four creg/z bits assigned to the C unit 245 instruction in the same execute packet. Field 1504 (bits 16 to 19) specifies four creg/z bits assigned to the N1 unit 224 instruction in the same execute packet. Field 1505 (bits 12 to 15) specifies four creg/z bits assigned to the N2 unit 244 instruction in the same execute packet. Field 1506 (bits 6 to 11) is unused/reserved. Field 1507 (bits 0 to 5) is coded as a set of unique bits (CCEX1) to identify the condition code extension slot 1. Once the
unique ID of condition code extension slot 1 is detected, the corresponding creg/z bits are employed to control conditional execution of any Ml unit 223, M2 unit 243, C unit 245, N1 unit 224 and N2 unit 244 instruction in the same execution packet. These creg/z bits are interpreted as shown in Table 1. If the corresponding instruction is conditional (includes creg/z bits), the corresponding bits in the condition code extension slot 1 override the condition code bits in the instruction. Setting the creg/z bits equal to "0000" makes the instruction unconditional. Thus, a properly coded condition code extension slot 1 can make some instructions conditional and some unconditional.[00122] Both condition code extension slot 0 and condition code extension slot 1 can include a p bit to define an execute packet as described above in conjunction with FIG. 13. In this example, as illustrated in FIGS. 14 and 15, code extension slot 0 and condition code extension slot 1 have bit 0 (p bit) encoded as 1. Thus, neither condition code extension slot 0 nor condition code extension slot 1 can be in the last instruction slot of an execute packet.[00123] There are two different 32-bit constant extension slots. Each execute packet can contain one each of the unique constant extension slots which contains 27 bits to be concatenated as high order bits with the 5-bit constant field 1305 to form a 32-bit constant. As noted in the instruction coding description above, some instructions define the src2/cst field 1304 as a constant rather than a source register identifier. At least some of such instructions can employ a constant extension slot to extend the constant to 32 bits.[00124] FIG. 16 illustrates the fields of constant extension slot 0. Each execute packet can include one instance of constant extension slot 0 and one instance of constant extension slot 1. FIG. 16 illustrates that constant extension slot 0 1600 includes two fields. Field 1601 (bits 5 to 31) constitutes the most significant 27 bits of an extended 32-bit constant including the target instruction scr2/cst field 1304 as the five least significant bits. Field 1602 (bits 0 to 4) is coded as a set of unique bits (CSTX0) to identify the constant extension slot 0. In this example, constant extension slot 0 1600 can be used to extend the constant of one of an LI unit 221 instruction, data in a D1 unit 225 instruction, an S2 unit 242 instruction, an offset in a D2 unit 226 instruction, an M2 unit 243 instruction, an N2 unit 244 instruction, a branch instruction, or a C unit 245 instruction in the same execute packet. Constant extension slot 1 is similar to constant extension slot 0 except that bits 0 to 4 are coded as a set of unique bits (CSTX1) to identify the constant extension slot 1. In this example, constant extension slot 1 can be used to
extend the constant of one of an L2 unit 241 instruction, data in a D2 unit 226 instruction, an S1 unit 222 instruction, an offset in a D1 unit 225 instruction, an M1 unit 223 instruction or an N1 unit 224 instruction in the same execute packet.

[00125] Constant extension slot 0 and constant extension slot 1 are used as follows. The target instruction is of the type permitting constant specification. In this example, the extension is implemented by replacing one input operand register specification field with the least significant bits of the constant as described above with respect to src2/cst field 1304. Instruction decoder 113 determines this case, known as an immediate field, from the instruction opcode bits. The target instruction also includes one constant extension bit (e bit 1307) dedicated to signaling whether the specified constant is not extended (constant extension bit=0) or extended (constant extension bit=1). If instruction decoder 113 detects a constant extension slot 0 or a constant extension slot 1, instruction decoder 113 further checks the other instructions within the execute packet for an instruction corresponding to the detected constant extension slot. A constant extension is made if one corresponding instruction has a constant extension bit (e bit 1307) equal to 1.

[00126] FIG. 17 is a partial block diagram 1700 illustrating constant extension. FIG. 17 assumes that instruction decoder 113 (FIG. 1) detects a constant extension slot and a corresponding instruction in the same execute packet. Instruction decoder 113 supplies the twenty-seven extension bits from the constant extension slot (bit field 1601) and the five constant bits (bit field 1305) from the corresponding instruction to concatenator 1701. Concatenator 1701 forms a single 32-bit word from these two parts. In this example, the twenty-seven extension bits from the constant extension slot (bit field 1601) are the most significant bits and the five constant bits (bit field 1305) are the least significant bits. The combined 32-bit word is supplied to one input of multiplexer 1702. The five constant bits from the corresponding instruction field 1305 supply a second input to multiplexer 1702. Selection of multiplexer 1702 is controlled by the status of the constant extension bit. If the constant extension bit (e bit 1307) is 1 (extended), multiplexer 1702 selects the concatenated 32-bit input. If the constant extension bit is 0 (not extended), multiplexer 1702 selects the five constant bits from the corresponding instruction field 1305. The output of multiplexer 1702 supplies an input of sign extension unit 1703.

[00127] Sign extension unit 1703 forms the final operand value from the input from multiplexer 1702. Sign extension unit 1703 receives control inputs Scalar/Vector and Data Size. The Scalar/Vector input indicates whether the corresponding instruction is a scalar instruction or a vector instruction. The functional units of data path side A 115 (L1 unit 221, S1 unit 222, M1 unit 223, N1 unit 224, D1 unit 225 and D2 unit 226) perform scalar instructions. Any instruction directed to one of these functional units is a scalar instruction. Data path side B functional units L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244 and C unit 245 can perform scalar instructions or vector instructions. Instruction decoder 113 determines whether the instruction is a scalar instruction or a vector instruction from the opcode bits. P unit 246 may perform scalar instructions. The Data Size can be eight bits (byte B), sixteen bits (half-word H), 32 bits (word W), or 64 bits (double word D).

[00128] Table 2 lists the operation of sign extension unit 1703 for the various options.

Table 2

[00129] Both constant extension slot 0 and constant extension slot 1 can include a p bit to define an execute packet as described above in conjunction with FIG. 13. In this example, as in the case of the condition code extension slots, constant extension slot 0 and constant extension slot 1 have bit 0 (p bit) encoded as 1. Thus, neither constant extension slot 0 nor constant extension slot 1 can be in the last instruction slot of an execute packet.

[00130] An execute packet can include a constant extension slot 0 or 1 and more than one corresponding instruction marked constant extended (e bit=1). For such an occurrence, for constant extension slot 0, more than one of an L1 unit 221 instruction, data in a D1 unit 225 instruction, an S2 unit 242 instruction, an offset in a D2 unit 226 instruction, an M2 unit 243 instruction or an N2 unit 244 instruction in an execute packet can have an e bit of 1. For such an
occurrence, for constant extension slot 1, more than one of an L2 unit 241 instruction, data in a D2 unit 226 instruction, an S1 unit 222 instruction, an offset in a D1 unit 225 instruction, an M1 unit 223 instruction or an N1 unit 224 instruction in an execute packet can have an e bit of 1. In one example, instruction decoder 113 determines that such an occurrence is an invalid operation and not supported. Alternately, the combination can be supported with extension bits of the constant extension slot applied to each corresponding functional unit instruction marked constant extended.

[00131] L1 unit 221, S1 unit 222, L2 unit 241, S2 unit 242 and C unit 245 often operate in a single instruction multiple data (SIMD) mode. In this SIMD mode, the same instruction is applied to packed data from the two operands. Each operand holds multiple data elements disposed in predetermined slots. SIMD operation is enabled by carry control at the data boundaries. Such carry control enables operations on varying data widths.

[00132] FIG. 18 illustrates the carry control logic. AND gate 1801 receives the carry output of bit N within the operand wide arithmetic logic unit (64 bits for scalar data path side A 115 functional units and 512 bits for vector data path side B 116 functional units). AND gate 1801 also receives a carry control signal which is further described below. The output of AND gate 1801 is supplied to the carry input of bit N+1 of the operand wide arithmetic logic unit. AND gates such as AND gate 1801 are disposed between every pair of bits at a possible data boundary. For example, for 8-bit data such an AND gate will be between bits 7 and 8, bits 15 and 16, bits 23 and 24, etc. Each such AND gate receives a corresponding carry control signal. If the data size is the minimum size, each carry control signal is 0, effectively blocking carry transmission between the adjacent bits. The corresponding carry control signal is 1 if the selected data size requires both arithmetic logic unit sections. Table 3 below shows example carry control signals for the case of a 512-bit wide operand as used by vector data path side B 116 functional units which can be divided into sections of 8 bits, 16 bits, 32 bits, 64 bits, 128 bits or 256 bits. In Table 3, the upper 32 bits control the upper bits (bits 128 to 511) carries and the lower 32 bits control the lower bits (bits 0 to 127) carries. No control of the carry output of the most significant bit is needed, thus only 63 carry control signals are required.

Table 3

[00133] Operation on data sizes that are integral powers of 2 (2^N) is common. However, the carry control technique is not limited to integral powers of 2 and can be applied to other data sizes and operand widths.

[00134] In this example, at least L2 unit 241 and S2 unit 242 employ two types of SIMD instructions using registers in predicate register file 234. In this example, the SIMD vector predicate instructions operate on an instruction specified data size. The data sizes include byte (8 bit) data, half word (16 bit) data, word (32 bit) data, double word (64 bit) data, quad word (128 bit) data and half vector (256 bit) data. In the first of these instruction types, the functional unit (L2 unit 241 or S2 unit 242) performs a SIMD comparison on packed data in two general data registers and supplies results to a predicate data register. The instruction specifies a data size, the two general data register operands, and the destination predicate register. In this example, each predicate data register includes one bit corresponding to each minimal data size portion of the general data registers. In the current example, the general data registers are 512 bits (64 bytes) and the predicate data registers are 64 bits (8 bytes). Each bit of a predicate data register corresponds to eight bits of a general data register. The comparison is performed on a specified data size (8, 16, 32, 64, 128 or 256 bits). If the comparison is true, then the functional unit supplies 1's to all predicate register bits corresponding to that data size portion. If the comparison is false, the functional unit supplies zeroes to the predicate register bits corresponding to that data size portion. In this example, the enabled comparison operations include: less than, greater than, and equal to.
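For illustration only, the following C sketch models this first instruction type: a packed comparison of two 512-bit general data registers that writes one predicate bit per byte of register data. The function name, the choice of an unsigned greater-than comparison, and the restriction to element sizes of one to eight bytes are assumptions made here for brevity, not part of the instruction set specification.

```c
#include <stdint.h>

/* Illustrative sketch (not TI's implementation) of the first SIMD
 * predicate instruction type: compare two 512-bit registers at a given
 * element size and set one predicate bit per byte, 1's where the
 * comparison is true and 0's where it is false. Element sizes of 1 to 8
 * bytes are handled here; the hardware also supports 16 and 32. */
uint64_t simd_cmp_gt(const uint8_t a[64], const uint8_t b[64], int elem_bytes)
{
    uint64_t pred = 0;
    for (int off = 0; off < 64; off += elem_bytes) {
        uint64_t x = 0, y = 0;
        /* Assemble one little-endian element from each operand. */
        for (int i = elem_bytes - 1; i >= 0; i--) {
            x = (x << 8) | a[off + i];
            y = (y << 8) | b[off + i];
        }
        /* True: set every predicate bit covering this element's bytes. */
        if (x > y)
            for (int i = 0; i < elem_bytes; i++)
                pred |= 1ull << (off + i);
    }
    return pred;
}
```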
[00135] In the second of the instruction types, the functional unit (L2 unit 241 or S2 unit 242) separately performs a first SIMD operation or a second SIMD operation on packed data in general data registers based upon the state of data in a predicate data register. The instruction specifies a data size, one or two general data register operands, a controlling predicate register, and a general data register destination. For example, a functional unit can select, for each data sized portion of two vector operands, a first data element of a first operand or a second data element of a second operand dependent upon the 1/0 state of corresponding bits in the predicate data register to store in the destination register. In another example, the data elements of a single vector operand can be saved to memory or not saved dependent upon the data of the corresponding bits of the predicate register.

[00136] The operations of P unit 246 permit a variety of compound vector SIMD operations based upon more than one vector comparison. For example, a range determination can be made using two comparisons. In a SIMD operation, a candidate vector is compared with a vector reference having the minimum of the range packed within a data register. The greater than result is scalar data with bits corresponding to the SIMD data width set to 0 or 1 depending upon the SIMD comparison and is stored in a predicate data register. Another SIMD comparison of the candidate vector with another reference vector, having the maximum of the range packed within a different data register, produces another scalar with less than results stored in another predicate register. The P unit then ANDs the two predicate registers. The AND result indicates whether each SIMD data part of the candidate vector is within range or out of range. A P unit BITCNT instruction of the AND result can produce a count of the data elements within the comparison range. The P unit NEG function can be used to convert various expressions, such as: a less than comparison result to a greater than or equal comparison result; a greater than comparison result to a less than or equal to comparison result; or, an equal to comparison result to a not equal to comparison result.

STREAMING ENGINE

[00137] FIG. 19 is a conceptual view of the streaming engine 125 of the example processor 100 of FIG. 1. FIG. 19 illustrates the processing of a single stream representative of the two streams controlled by streaming engine 125. Streaming engine 1900 includes stream address generator 1901. Stream address generator 1901 sequentially generates addresses of the elements of the stream and supplies these element addresses to system memory 1910. Memory 1910
recalls data stored at the element addresses (data elements) and supplies these data elements to data first-in-first-out (FIFO) buffer 1902. Data FIFO buffer 1902 provides buffering between memory 1910 and processor 1920. Data formatter 1903 receives the data elements from data FIFO memory 1902 and provides data formatting according to the stream definition. This process is described in more detail herein. Streaming engine 1900 supplies the formatted data elements from data formatter 1903 to the processor 1920. A program executing on processor 1920 consumes the data and generates an output.

[00138] Stream elements may reside in system memory. The memory imposes no particular structure upon the stream. Programs define streams and thereby impose structure by specifying the stream attributes such as address of the first element of the stream, size and type of the elements in the stream, formatting for data in the stream, and the address sequence associated with the stream.

[00139] The streaming engine defines an address sequence for elements of the stream in terms of a pointer walking through memory. A multiple-level nested loop controls the path the pointer takes. An iteration count for a loop level indicates the number of times the level repeats. A dimension gives the distance between pointer positions of the loop level.

[00140] In a basic forward stream, the innermost loop consumes physically contiguous elements from memory as the implicit dimension of the innermost loop is one element. The pointer moves from element to element in consecutive, increasing order. In each level outside the inner loop, that loop moves the pointer to a new location based on the size of the dimension of the loop level.

[00141] This form of addressing allows programs to specify regular paths through memory using a small number of parameters. Table 4 lists the addressing parameters of a basic stream.

Table 4

[00142] In this example, ELEM BYTES ranges from 1 to 64 bytes as shown in Table 5.

Table 5

[00143] The definition above maps consecutive elements of the stream to increasing addresses in memory which is appropriate for many algorithms. Some algorithms are better served by reading elements in decreasing memory address order or reverse stream addressing. For example, a discrete convolution computes vector dot-products, as illustrated by expression (1):

(f*g)[t] = Σ_x f[x] g[t-x]    (1)

[00144] In expression (1), f[] and g[] represent arrays in memory. For each output, the algorithm reads f[] in the forward direction and reads g[] in the reverse direction. Practical filters limit the range of indices for [x] and [t-x] to a finite number of elements. To support this pattern, the streaming engine supports reading elements in decreasing address order.

[00145] Matrix multiplication presents a unique problem to the streaming engine. Each element in the matrix product is a vector dot product between a row from the first matrix and a column from the second. Programs may store matrices in row-major or column-major order. Row-major order stores all the elements of a single row contiguously in memory. Column-major order stores all elements of a single column contiguously in memory. Matrices may be stored in the same order as the default array order for the language. As a result, only one of the two matrices in a matrix multiplication maps onto the 2-dimensional stream definition of the streaming engine. In an example, an index steps through columns of one array and rows of the other array.
The streaming engine supports implicit matrix transposition with transposed streams. Transposed streams avoid the cost of explicitly transforming the data in memory. Instead of accessing data in strictly consecutive-element order, the streaming engine effectively interchanges the inner two loop dimensions of the traversal order, fetching elements along the second dimension into contiguous vector lanes.

[00146] This algorithm works but is impractical to implement for small element sizes. Some algorithms work on matrix tiles which are multiple columns and rows together. Therefore, the streaming engine defines a separate transposition granularity. The hardware imposes a minimum granularity. The transpose granularity needs to be at least as large as the element size. Transposition granularity causes the streaming engine to fetch one or more consecutive elements from dimension 0 before moving along dimension 1. When the granularity equals the element size, a single column from a row-major array is fetched. Otherwise, the granularity specifies fetching two, four or more columns at a time from a row-major array. This is also applicable for column-major layout by exchanging row and column in the description. A parameter GRANULE indicates the transposition granularity in bytes.

[00147] Another common matrix multiplication technique exchanges the innermost two loops of the matrix multiply. The resulting inner loop no longer reads down the column of one matrix while reading across the row of another. For example, the algorithm may hoist one term outside the inner loop, replacing it with the scalar value. The innermost loop can be implemented with a single scalar by vector multiply followed by a vector add. Or, the scalar value can be duplicated across the length of the vector and a vector by vector multiply used. The streaming engine of this example directly supports the latter case and related use models with an element duplication mode. In this mode, the streaming engine reads a granule smaller than the full vector size and replicates that granule to fill the next vector output.

[00148] The streaming engine treats each complex number as a single element with two sub-elements that give the real and imaginary (rectangular) or magnitude and angle (polar) portions of the complex number. Not all programs or peripherals agree on what order these sub-elements should appear in memory. Therefore, the streaming engine offers the ability to swap the two sub-elements of a complex number with no cost. The feature swaps the halves of an element without interpreting the contents of the element and can be used to swap pairs of sub-elements of any type, not just complex numbers.

[00149] Algorithms generally prefer to work at high precision, but high precision values require more storage and bandwidth than lower precision values. Commonly, programs store data in memory at low precision, promote those values to a higher precision for calculation, and then demote the values to lower precision for storage. The streaming engine supports such operations directly by allowing algorithms to specify one level of type promotion. In this example, every sub-element can be promoted to a larger type size with either sign or zero extension for integer types. In some examples, the streaming engine supports floating point promotion, promoting 16-bit and 32-bit floating point values to 32-bit and 64-bit formats, respectively.

[00150] While the streaming engine defines a stream as a discrete sequence of data elements, the processing unit core 110 consumes data elements packed contiguously in vectors. The vectors resemble streams as the vectors contain multiple homogeneous elements with some implicit sequence. Because the streaming engine reads streams, but the processing unit core 110 consumes vectors, the streaming engine maps streams onto vectors in a consistent way.

[00151] Vectors include equal-sized lanes, each lane containing a sub-element. The processing unit core 110 designates the rightmost lane of the vector as lane 0, regardless of current endian mode. Lane numbers increase right-to-left. The actual number of lanes within a vector varies depending on the length of the vector and the data size of the sub-element.
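As a concrete illustration of the one level of integer type promotion described above, the short C sketch below widens 16-bit sub-elements to 32-bit lanes with either zero or sign extension. The function name and the 16-to-32-bit case are illustrative assumptions; the streaming engine applies the same principle across its other supported sub-element sizes.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of one level of type promotion: each 16-bit sub-element is
 * widened to a 32-bit lane. Unsigned sub-elements are zero extended;
 * signed sub-elements are sign extended. */
void promote_16_to_32(const uint16_t *in, uint32_t *out, size_t n,
                      int is_signed)
{
    for (size_t i = 0; i < n; i++) {
        if (is_signed)
            out[i] = (uint32_t)(int32_t)(int16_t)in[i]; /* sign extend */
        else
            out[i] = (uint32_t)in[i];                   /* zero extend */
    }
}
```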
[00152] FIG. 20 illustrates the sequence of the formatting operations of formatter 1903. Formatter 1903 includes three sections: input section 2010, formatting section 2020, and output section 2030. Input section 2010 receives the data recalled from system memory 1910 as accessed by stream address generator 1901. The data can be via linear fetch stream 2011 or transposed fetch stream 2012.

[00153] Formatting section 2020 includes various formatting blocks. The formatting performed within formatter 1903 by the blocks is further described below. Complex swap block 2021 optionally swaps two sub-elements forming a complex number element. Type promotion block 2022 optionally promotes each data element into a larger data size. Promotion includes zero extension for unsigned integers and sign extension for signed integers. Decimation block 2023 optionally decimates the data elements. In this example, decimation can be 2:1 retaining every other data element or 4:1 retaining every fourth data element. Element duplication block 2024 optionally duplicates individual data elements. In this example, the data element duplication is an integer power of 2 (2^N, where N is an integer) including 2x, 4x, 8x, 16x, 32x and 64x. In this example, data duplication can extend over multiple destination vectors. Vector length masking/group duplication block 2025 has two primary functions. An independently specified vector length VECLEN controls the data elements supplied to each output data vector. When group duplication is off, excess lanes in the output data vector are zero filled and these lanes are marked invalid. When group duplication is on, input data elements of the specified vector length are duplicated to fill the output data vector.

[00154] Output section 2030 holds the data for output to the corresponding functional units. Register and buffer for processor 2031 stores a formatted vector of data to be used as an operand by the functional units of processing unit core 110 (FIG. 1).

[00155] FIG. 21 illustrates an example of lane allocation in a vector. Vector 2100 is divided into eight 64-bit lanes (8x64 bits = 512 bits, the vector length). Lane 0 includes bits 0 to 63, lane 1 includes bits 64 to 127, lane 2 includes bits 128 to 191, lane 3 includes bits 192 to 255, lane 4 includes bits 256 to 319, lane 5 includes bits 320 to 383, lane 6 includes bits 384 to 447, and lane 7 includes bits 448 to 511.

[00156] FIG. 22 illustrates another example of lane allocation in a vector. Vector 2210 is divided into sixteen 32-bit lanes (16x32 bits = 512 bits, the vector length). Lane 0 includes bits 0 to 31, lane 1 includes bits 32 to 63, lane 2 includes bits 64 to 95, lane 3 includes bits 96 to 127,
lane 4 includes bits 128 to 159, lane 5 includes bits 160 to 191, lane 6 includes bits 192 to 223, lane 7 includes bits 224 to 255, lane 8 includes bits 256 to 287, lane 9 includes bits 288 to 319, lane 10 includes bits 320 to 351, lane 11 includes bits 352 to 383, lane 12 includes bits 384 to 415, lane 13 includes bits 416 to 447, lane 14 includes bits 448 to 479, and lane 15 includes bits 480 to 511.

[00157] The streaming engine maps the innermost stream dimension directly to vector lanes. The streaming engine maps earlier elements within the innermost stream dimension to lower lane numbers and later elements to higher lane numbers, regardless of whether the stream advances in increasing or decreasing address order. Whatever order the stream defines, the streaming engine deposits elements in vectors in increasing-lane order. For non-complex data, the streaming engine places the first element in lane 0 of the vector that processing unit core 110 (FIG. 1) fetches, the second in lane 1, and so on. For complex data, the streaming engine places the first element in lanes 0 and 1, the second element in lanes 2 and 3, and so on. Sub-elements within an element retain the same relative ordering regardless of the stream direction. For non-swapped complex elements, the sub-elements with the lower address of each pair are placed in the even numbered lanes, and the sub-elements with the higher address of each pair are placed in the odd numbered lanes. For swapped complex elements, the placement is reversed.

[00158] The streaming engine fills each vector that processing unit core 110 fetches with as many elements as possible from the innermost stream dimension. If the innermost dimension is not a multiple of the vector length, the streaming engine zero pads the dimension to a multiple of the vector length. As noted below, the streaming engine also marks the lanes invalid. Thus, for higher-dimension streams, the first element from each iteration of an outer dimension arrives in lane 0 of a vector. The streaming engine maps the innermost dimension to consecutive lanes in a vector. For transposed streams, the innermost dimension includes groups of sub-elements along dimension 1, not dimension 0, as transposition exchanges these two dimensions.

[00159] Two-dimensional (2D) streams exhibit greater variety as compared to one dimensional streams. A basic 2D stream extracts a smaller rectangle from a larger rectangle. A transposed 2D stream reads a rectangle column-wise instead of row-wise. A looping stream, where the second dimension overlaps the first, executes finite impulse response (FIR) filter taps, looping repeatedly over FIR filter samples to provide a sliding window of input samples.

[00160] FIG. 23 illustrates a region of memory that can be accessed using a basic two-
dimensional stream. The inner two dimensions, represented by ELEM BYTES, ICNT0, DIM1 and ICNT1 (refer to Table 4), give sufficient flexibility to describe extracting a smaller rectangle 2320 having dimensions 2321 and 2322 from a larger rectangle 2310 having dimensions 2311 and 2312. In this example, rectangle 2320 is a 9 by 13 rectangle of 64-bit values and rectangle 2310 is a larger 11 by 19 rectangle. The following stream parameters define this stream: ICNT0 = 9, ELEM BYTES = 8, ICNT1 = 13, and DIM1 = 88 (11 times 8).

[00161] Thus, the iteration count in the 0-dimension 2321 is nine and the iteration count in the 1-dimension 2322 is thirteen. Note that the ELEM BYTES scales the innermost dimension. The first dimension has ICNT0 elements of size ELEM BYTES. The stream address generator does not scale the outer dimensions. Therefore, DIM1 = 88, which is eleven elements scaled by eight bytes per element.

[00162] FIG. 24 illustrates the order of elements within the example stream of FIG. 23. The streaming engine fetches elements for the stream in the order illustrated in order 2400. The first nine elements come from the first row of rectangle 2320, left-to-right in hops 1 to 8. The 10th through 24th elements come from the second row, and so on. When the stream moves from the 9th element to the 10th element (hop 9 in FIG. 24), the streaming engine computes the new location based on the position of the pointer at the start of the inner loop, not the position of the pointer at the end of the first dimension. Thus, DIM1 is independent of ELEM BYTES and ICNT0. DIM1 represents the distance between the first bytes of each consecutive row.

[00163] Transposed streams are accessed along dimension 1 before dimension 0. The following examples illustrate transposed streams with varying transposition granularity. FIG. 25 illustrates extracting a smaller rectangle 2520 (12x8) having dimensions 2521 and 2522 from a larger rectangle 2510 (14x13) having dimensions 2511 and 2512. In FIG. 25, ELEM BYTES equals 2.

[00164] FIG. 26 illustrates how the streaming engine fetches the stream of the example stream of FIG. 25 with a transposition granularity of four bytes. Fetch pattern 2600 fetches pairs of elements from each row (because the granularity of four is twice the ELEM BYTES of two), but otherwise moves down the columns. Once the streaming engine reaches the bottom of a pair of columns, the streaming engine repeats the pattern with the next pair of columns.

[00165] FIG. 27 illustrates how the streaming engine fetches the stream of the example stream of FIG. 25 with a transposition granularity of eight bytes. The overall structure remains the
same. The streaming engine fetches four elements from each row (because the granularity of eight is four times the ELEM BYTES of two) before moving to the next row in the column as shown in fetch pattern 2700.

[00166] The streams examined so far read each element from memory exactly once. A stream can read a given element from memory multiple times, in effect looping over a portion of memory. FIR filters exhibit two common looping patterns: re-reading the same filter taps for each output and reading input samples from a sliding window. Two consecutive outputs need inputs from two overlapping windows.

[00167] FIG. 28 illustrates the details of streaming engine 125 of FIG. 1. Streaming engine 125 contains three major sections: Stream 0 engine 2810; Stream 1 engine 2820; and Shared L2 Interfaces 2830. Stream 0 engine 2810 and Stream 1 engine 2820 both contain identical hardware that operates in parallel. Stream 0 engine 2810 and Stream 1 engine 2820 both share L2 interfaces 2830. Each stream 0 engine 2810 and stream 1 engine 2820 provides processing unit core 110 (FIG. 1) data at a rate of up to 512 bits/cycle, every cycle, which is enabled by the dedicated stream paths and shared dual L2 interfaces.

[00168] Each streaming engine 125 includes a respective dedicated 6-dimensional (6D) stream address generator 2811/2821 that can each generate one new non-aligned request per cycle. As is further described herein, address generators 2811/2821 output 512-bit aligned addresses that overlap the elements in the sequence defined by the stream parameters.

[00169] Each address generator 2811/2821 connects to a respective dedicated micro table look-aside buffer (μTLB) 2812/2822. The μTLB 2812/2822 converts a single 48-bit virtual address to a 44-bit physical address each cycle. Each μTLB 2812/2822 has 8 entries, covering a minimum of 32kB with 4kB pages or a maximum of 16MB with 2MB pages. Each address generator 2811/2821 generates 2 addresses per cycle. The μTLB 2812/2822 only translates one address per cycle. To maintain throughput, streaming engine 125 operates under the assumption that most stream references are within the same 4 kB page. Thus, the address translation does not modify bits 0 to 11 of the address. If aout0 and aout1 lie in the same 4 kB page (aout0[47:12] is the same as aout1[47:12]), then the μTLB 2812/2822 only translates aout0 and reuses the translation for the upper bits of both addresses.

[00170] Translated addresses are queued in respective command queue 2813/2823. These addresses are aligned with information from the respective corresponding Storage Allocation and
Tracking block 2814/2824. Streaming engine 125 does not explicitly manage μTLB 2812/2822. The system memory management unit (MMU) invalidates μTLBs as necessary during context switches.

[00171] Storage Allocation and Tracking 2814/2824 manages the internal storage of the stream, discovering data reuse and tracking the lifetime of each piece of data. The block accepts two virtual addresses per cycle and binds those addresses to slots in the internal storage. The data store is organized as an array of slots. The streaming engine maintains the following metadata to track the contents and lifetime of the data in each slot: 49-bit virtual address associated with the slot, valid bit indicating valid address, ready bit indicating data has arrived for the address, active bit indicating if there are any references outstanding to this data, and a last reference value indicating the most recent reference to this slot in the reference queue. The storage allocation and tracking are further described herein.

[00172] Respective reference queue 2815/2825 stores the sequence of references generated by the respective corresponding address generator 2811/2821. The reference sequence enables the data formatting network to present data to processing unit core 110 in the correct order. Each entry in respective reference queue 2815/2825 contains the information necessary to read data out of the data store and align the data for processing unit core 110. Respective reference queue 2815/2825 maintains the information listed in Table 6 in each slot.

Table 6

[00173] Storage allocation and tracking 2814/2824 inserts references in reference queue 2815/2825 as address generator 2811/2821 generates new addresses. Storage allocation and tracking 2814/2824 removes references from reference queue 2815/2825 when the data becomes available and there is room in the stream head registers. As storage allocation and tracking 2814/2824 removes slot references from reference queue 2815/2825 and formats data, the references are checked for the last reference to the corresponding slots. Storage allocation and tracking 2814/2824 compares the reference queue 2815/2825 removal pointer against the recorded last reference of the slot. If the pointer and the recorded last reference match, then storage
allocation and tracking 2814/2824 marks the slot inactive once the data is no longer needed.

[00174] Streaming engine 125 has respective data storage 2816/2826 for a selected number of elements. Deep buffering allows the streaming engine to fetch far ahead in the stream, hiding memory system latency. Each data storage 2816/2826 accommodates two simultaneous read operations and two simultaneous write operations per cycle and each is therefore referred to as a two-read, two-write (2r2w) data storage. In other examples, the amount of buffering can be different. In the current example, streaming engine 125 dedicates 32 slots to each stream with each slot tagged by a virtual address. Each slot holds 64 bytes of data in eight banks of eight bytes.

[00175] Data storage 2816/2826 and the respective storage allocation/tracking logic 2814/2824 and reference queues 2815/2825 implement the data FIFO 1902 described with reference to FIG. 19.

[00176] Respective butterfly network 2817/2827 includes a seven-stage butterfly network that implements the formatter 1903 (FIG. 19, FIG. 20). Butterfly network 2817/2827 receives 128 bytes of input and generates 64 bytes of output. The first stage of the butterfly is actually a half-stage that collects bytes from both slots that match a non-aligned fetch and merges the collected bytes into a single, rotated 64-byte array. The remaining six stages form a standard butterfly network. Respective butterfly network 2817/2827 performs the following operations: rotates the next element down to byte lane 0; promotes data types by a power of two, if requested; swaps real and imaginary components of complex numbers, if requested; and converts big endian to little endian if processing unit core 110 is presently in big endian mode. The user specifies element size, type promotion, and real/imaginary swap as part of the parameters of the stream.

[00177] Streaming engine 125 attempts to fetch and format data ahead of processing unit core 110's demand in order to maintain full throughput. Respective stream head registers 2818/2828 provide a small amount of buffering so that the process remains fully pipelined. Respective stream head registers 2818/2828 are not directly architecturally visible. Each stream also has a respective stream valid register 2819/2829. Valid registers 2819/2829 indicate which elements in the corresponding stream head registers 2818/2828 are valid. The outputs of stream head registers 2818/2828 and valid registers 2819/2829 are provided to processing unit core 110 via buses 2840/2841.
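The per-slot bookkeeping of paragraph [00171] and the slot geometry of paragraph [00174] can be pictured with the C sketch below. The struct names and field types are our illustration; only the field meanings and widths come from the description above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of one data-store slot: a 49-bit virtual address,
 * valid/ready/active bits, a last reference value, and 64 bytes of data
 * held in eight banks of eight bytes. */
struct se_slot {
    uint64_t vaddr;    /* 49-bit virtual address associated with the slot */
    bool     valid;    /* the address is valid */
    bool     ready;    /* data has arrived for the address */
    bool     active;   /* references to this data are still outstanding */
    uint32_t last_ref; /* most recent reference to this slot in the
                          reference queue */
    uint8_t  data[64]; /* 64 bytes of data per slot */
};

/* 32 slots are dedicated to each stream in the current example. */
struct se_data_store {
    struct se_slot slots[32];
};
```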
[00178] The two streams 2810/2820 share a pair of independent L2 interfaces 2830: L2 Interface A (IFA) 2833 and L2 Interface B (IFB) 2834. Each L2 interface provides 512 bits/cycle throughput direct to the L2 controller 130 (FIG. 1) via respective buses 147/149 for an aggregate bandwidth of 1024 bits/cycle. The L2 interfaces use the credit-based multicore bus architecture (MBA) protocol. The MBA protocol is described in more detail in U.S. Patent 9,904,645, "Multicore Bus Architecture with Non-Blocking High Performance Transaction Credit System," which is incorporated by reference herein. The L2 controller assigns a pool of command credits to each interface. The pool has sufficient credits so that each interface can send sufficient requests to achieve full read-return bandwidth when reading L2 RAM, L2 cache and multicore shared memory controller (MSMC) memory, as described in more detail herein.

[00179] To maximize performance, in this example both streams can use both L2 interfaces, allowing a single stream to send a peak command rate of two requests per cycle. Each interface prefers one stream over the other, but this preference changes dynamically from request to request. IFA 2833 and IFB 2834 prefer opposite streams: when IFA 2833 prefers Stream 0, IFB 2834 prefers Stream 1, and vice versa.

[00180] Respective arbiter 2831/2832 ahead of each respective interface 2833/2834 applies the following basic protocol on every cycle having credits available. Arbiter 2831/2832 checks if the preferred stream has a command ready to send. If so, arbiter 2831/2832 chooses that command. Arbiter 2831/2832 next checks if the alternate stream has at least two requests ready to send, or one command and no credits. If so, arbiter 2831/2832 pulls a command from the alternate stream. If either interface issues a command, the notion of preferred and alternate streams swap for the next request. Using this algorithm, the two interfaces dispatch requests as quickly as possible while retaining fairness between the two streams. The first rule ensures that each stream can send a request on every cycle that has available credits. The second rule provides a mechanism for one stream to borrow the interface of the other when the second interface is idle. The third rule spreads the bandwidth demand for each stream across both interfaces, ensuring neither interface becomes a bottleneck.

[00181] Respective coarse grain rotator 2835/2836 enables streaming engine 125 to support a transposed matrix addressing mode. In this mode, streaming engine 125 interchanges the two innermost dimensions of the multidimensional loop to access an array column-wise rather than row-wise. Respective rotators 2835/2836 are not architecturally visible.
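The arbitration rules of paragraph [00180] can be summarized for one interface in the C sketch below. The data structure, function name, and return convention are hypothetical, and the sketch assumes the arbiter is only consulted on cycles with credits available.

```c
/* Hypothetical per-stream command state seen by one L2 interface arbiter. */
struct stream_state {
    int cmds_ready; /* commands ready to send on this stream */
    int credits;    /* command credits this stream holds elsewhere */
};

/* Returns the stream index (0 or 1) granted this cycle, or -1 if neither
 * stream issues. *preferred is this interface's currently preferred
 * stream; preferred and alternate swap whenever a command issues. */
int arbitrate(const struct stream_state s[2], int *preferred)
{
    int pref = *preferred, alt = 1 - pref;

    /* Rule 1: the preferred stream issues if it has a command ready. */
    if (s[pref].cmds_ready > 0) {
        *preferred = alt; /* swap preferred and alternate after an issue */
        return pref;
    }
    /* Rule 2: the alternate stream may borrow the interface if it has at
     * least two requests ready, or one command and no credits. */
    if (s[alt].cmds_ready >= 2 ||
        (s[alt].cmds_ready == 1 && s[alt].credits == 0)) {
        *preferred = alt;
        return alt;
    }
    return -1; /* no command issued this cycle */
}
```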
[00182] FIG. 29 illustrates an example stream template register 2900. The stream definition template provides the full structure of a stream that contains data. The iteration counts and dimensions provide most of the structure, while the various flags provide the rest of the details. In this example, a single stream template 2900 is defined for all data-containing streams. All stream types supported by the streaming engine are covered by the template 2900. The streaming engine supports a six-level loop nest for addressing elements within the stream. Most of the fields in the stream template 2900 map directly to the parameters in that algorithm. The numbers above the fields are bit numbers within a 256-bit vector. Table 7 shows the stream field definitions of a stream template.

Table 7

[00183] Loop 0 is the innermost loop and loop 5 is the outermost loop. In the current example, DIM0 is equal to ELEM BYTES defining physically contiguous data. Thus, the stream template register 2900 does not define DIM0. Streaming engine 125 interprets iteration counts as unsigned integers and dimensions as unscaled signed integers. An iteration count of zero at any level (ICNT0, ICNT1, ICNT2, ICNT3, ICNT4 or ICNT5) indicates an empty stream. Each iteration count must be at least one to define a valid stream. The template above specifies
the type of elements, length and dimensions of the stream. The stream instructions separately specify a start address, e.g., by specification of a scalar register in scalar register file 211 which stores the start address. Thus, a program can open multiple streams using the same template but different registers storing the start address.

[00184] FIG. 30 illustrates an example of sub-field definitions of the flags field 2921 shown in FIG. 29. As shown in FIG. 30, the flags field 2921 is 6 bytes or 48 bits. FIG. 30 shows bit numbers of the fields. Table 8 shows the definition of these fields.

Table 8

[00185] The Element Type (ELTYPE) field 3001 defines the data type of the elements in the stream. The coding of the four bits of the ELTYPE field 3001 is defined as shown in Table 9.

Table 9

[00186] Real/Complex Type determines whether the streaming engine treats each element as a
real number or two parts (real/imaginary or magnitude/angle) of a complex number and also specifies whether to swap the two parts of complex numbers. Complex types have a total element size twice the sub-element size. Otherwise, the sub-element size equals the total element size.

[00187] Sub-Element Size determines the type for purposes of type promotion and vector lane width. For example, 16-bit sub-elements get promoted to 32-bit sub-elements or 64-bit sub-elements when a stream requests type promotion. The vector lane width matters when processing unit core 110 (FIG. 1) operates in big endian mode, as the core 110 lays out vectors in little endian order.

[00188] Total Element Size specifies the minimal granularity of the stream which determines the number of bytes the stream fetches for each iteration of the innermost loop. Streams read whole elements, either in increasing or decreasing order. Therefore, the innermost dimension of a stream spans ICNT0 × total-element-size bytes.

[00189] The TRANSPOSE field 3002 determines whether the streaming engine accesses the stream in a transposed order. The transposed order exchanges the inner two addressing levels. The TRANSPOSE field 3002 also indicates the granularity for transposing the stream. The coding of the three bits of the TRANSPOSE field 3002 is defined as shown in Table 10 for normal 2D operations.

Table 10

[00190] Streaming engine 125 can transpose data elements at a different granularity than the element size thus allowing programs to fetch multiple columns of elements from each row. The transpose granularity cannot be smaller than the element size. The TRANSPOSE field 3002
interacts with the DIMFMT field 3009 in a manner further described below.

[00191] The PROMOTE field 3003 controls whether the streaming engine promotes sub-elements in the stream and the type of promotion. When enabled, streaming engine 125 promotes types by powers-of-2 sizes. The coding of the three bits of the PROMOTE field 3003 is defined as shown in Table 11.

Table 11

[00192] When PROMOTE is 000, corresponding to a 1x promotion, each sub-element is unchanged and occupies a vector lane equal in width to the size specified by ELTYPE. When PROMOTE is 001, corresponding to a 2x promotion and zero extend, each sub-element is treated as an unsigned integer and zero extended to a vector lane twice the width specified by ELTYPE. A 2x promotion is invalid for an initial sub-element size of 64 bits. When PROMOTE is 010, corresponding to a 4x promotion and zero extend, each sub-element is treated as an unsigned integer and zero extended to a vector lane four times the width specified by ELTYPE. A 4x promotion is invalid for an initial sub-element size of 32 or 64 bits. When PROMOTE is 011, corresponding to an 8x promotion and zero extend, each sub-element is treated as an unsigned integer and zero extended to a vector lane eight times the width specified by ELTYPE. An 8x promotion is invalid for an initial sub-element size of 16, 32 or 64 bits. When PROMOTE is 101, corresponding to a 2x promotion and sign extend, each sub-element is treated as a signed
integer and sign extended to a vector lane twice the width specified by ELTYPE. A 2x promotion is invalid for an initial sub-element size of 64 bits. When PROMOTE is 110, corresponding to a 4x promotion and sign extend, each sub-element is treated as a signed integer and sign extended to a vector lane four times the width specified by ELTYPE. A 4x promotion is invalid for an initial sub-element size of 32 or 64 bits. When PROMOTE is 111, corresponding to an 8x promotion and sign extend, each sub-element is treated as a signed integer and sign extended to a vector lane eight times the width specified by ELTYPE. An 8x promotion is invalid for an initial sub-element size of 16, 32 or 64 bits.

[00193] The VECLEN field 3004 defines the stream vector length for the stream in bytes. Streaming engine 125 breaks the stream into groups of elements that are VECLEN bytes long. The coding of the three bits of the VECLEN field 3004 is defined as shown in Table 12.

Table 12

[00194] VECLEN cannot be less than the product of the element size in bytes and the duplication factor. As shown in Table 12, the maximum VECLEN of 64 bytes equals the preferred vector size of vector data path side B 116. When VECLEN is shorter than the native vector width of processing unit core 110, streaming engine 125 pads the extra lanes in the vector provided to processing unit core 110. The GRDUP field 3006 determines the type of padding. The VECLEN field 3004 interacts with ELDUP field 3005 and GRDUP field 3006 in a manner detailed below.

[00195] The ELDUP field 3005 specifies the number of times to duplicate each element. The element size multiplied with the element duplication amount cannot exceed 64 bytes. The coding of the three bits of the ELDUP field 3005 is defined as shown in Table 13.
Table 13

[00196] The ELDUP field 3005 interacts with VECLEN field 3004 and GRDUP field 3006 in a manner detailed below. The nature of the relationship between the permitted element size, the element duplication factor, and the destination vector length requires that a duplicated element that overflows the first destination register fills an integer number of destination registers upon completion of duplication. The data of the additional destination registers eventually supplies the respective stream head register 2818/2828. Upon completion of duplication of a first data element, the next data element is rotated down to the least significant bits of source register 3100, discarding the first data element. The process then repeats for the new data element.

[00197] The GRDUP bit 3006 determines whether group duplication is enabled. If GRDUP bit 3006 is 0, then group duplication is disabled. If the GRDUP bit 3006 is 1, then group duplication is enabled. When enabled by GRDUP bit 3006, streaming engine 125 duplicates a group of elements to fill the vector width. VECLEN field 3004 defines the length of the group to replicate. When VECLEN field 3004 is less than the vector length of processing unit core 110 and GRDUP bit 3006 enables group duplication, streaming engine 125 fills the extra lanes (see FIGS. 21 and 22) with additional copies of the stream vector. Because stream vector length and vector length of processing unit core 110 are integral powers of two, group duplication produces an integral number of duplicate copies. Note GRDUP and VECLEN do not specify the number of duplications. The number of duplications performed is based upon the ratio of VECLEN to the native vector length, which is 64 bytes/512 bits in this example.

[00198] The GRDUP field 3006 specifies how streaming engine 125 pads stream vectors for bits following the VECLEN length to the vector length of processing unit core 110. When
GRDUP bit 3006 is 0, streaming engine 125 fills the extra lanes with zeros and marks the extra vector lanes invalid. When GRDUP bit 3006 is 1, streaming engine 125 fills extra lanes with copies of the group of elements in each stream vector. Setting GRDUP bit 3006 to 1 has no effect when VECLEN is set to the native vector width of processing unit core 110. VECLEN must be at least as large as the product of ELEM BYTES and the element duplication factor ELDUP. Accordingly, an element or the duplication factor number of elements cannot be separated using VECLEN.

[00199] Group duplication operates to the destination vector size. Group duplication does not change the data supplied when the product of the element size ELEM BYTES and element duplication factor ELDUP equals or exceeds the destination vector width. Under such conditions, the states of the GRDUP bit 3006 and the VECLEN field 3004 have no effect on the supplied data.

[00200] The set of examples below illustrates the interaction between VECLEN and GRDUP. Each of the following examples shows how the streaming engine maps a stream onto vectors across different stream vector lengths and the vector size of vector data path side B 116. The stream of this example includes twenty-nine elements (E0 to E28) of 64 bits/8 bytes. The stream can be a linear stream of twenty-nine elements or an inner loop of 29 elements. The tables illustrate eight byte lanes such as shown in FIG. 21. Each illustrated vector is stored in the respective stream head register 2818/2828 in turn.

[00201] Table 14 illustrates how the example stream maps onto bits within the 64-byte processor vectors when VECLEN is 64 bytes.

Table 14

[00202] As shown in Table 14, the stream extends over four vectors. As previously described, the lanes within vector 4 that extend beyond the stream are zero filled. When VECLEN has a size equal to the native vector length, the value of GRDUP does not matter as no duplication can
take place with such a VECLEN.

[00203] Table 15 shows the same parameters as shown in Table 14, except with VECLEN of 32 bytes. Group duplicate is disabled (GRDUP = 0).

Table 15

[00204] The twenty-nine elements of the stream are distributed over lanes 0 to 3 in eight vectors. Extra lanes 4 to 7 in vectors 1-7 are zero filled. In vector 8, lane 0 has a stream element (E28) and the other lanes are zero filled.

[00205] Table 16 shows the same parameters as shown in Table 14, except with VECLEN of sixteen bytes. Group duplicate is disabled (GRDUP = 0).

Table 16

[00206] The twenty-nine elements of the stream are distributed over lane 0 and lane 1 in fifteen vectors. Extra lanes 2 to 7 in vectors 1-14 are zero filled. In vector 15, lane 0 has a stream element (E28) and the other lanes are zero filled.

[00207] Table 17 shows the same parameters as shown in Table 14, except with VECLEN of eight bytes. Group duplicate is disabled (GRDUP = 0).

Table 17

[00208] The twenty-nine elements of the stream appear in lane 0 in twenty-nine vectors. Extra lanes 1-7 in vectors 1-29 are zero filled.

[00209] Table 18 shows the same parameters as shown in Table 15, except with VECLEN of thirty-two bytes and group duplicate enabled (GRDUP = 1).

Table 18

[00210] The twenty-nine elements of the stream are distributed over lanes 0-7 in eight vectors. Each vector 1-7 includes four elements duplicated. The duplication factor (2) results because
VECLEN (32 bytes) is half the native vector length of 64 bytes. In vector 8, lane 0 has a stream element (E28) and lanes 1-3 are zero filled. Lanes 4-7 of vector 8 duplicate this pattern.

[00211] Table 19 shows the same parameters as shown in Table 16, except with VECLEN of sixteen bytes. Group duplicate is enabled (GRDUP = 1).

Table 19

[00212] The twenty-nine elements of the stream are distributed over lanes 0-7 in fifteen vectors. Each vector 1-7 includes two elements duplicated four times. The duplication factor (4) results because VECLEN (16 bytes) is one quarter the native vector length of 64 bytes. In vector 15, lane 0 has a stream element (E28) and lane 1 is zero filled. This pattern is duplicated in lanes 2 and 3, lanes 4 and 5, and lanes 6 and 7 of vector 15.

[00213] Table 20 shows the same parameters as shown in Table 17, except with VECLEN of eight bytes. Group duplicate is enabled (GRDUP = 1).

Table 20

[00214] The twenty-nine elements of the stream all appear on lanes 0 to 7 in twenty-nine vectors. Each vector includes one element duplicated eight times. The duplication factor (8) results because VECLEN (8 bytes) is one eighth the native vector length of 64 bytes. Thus, each lane is the same in vectors 1-29.

[00215] FIG. 31 illustrates an example of vector length masking/group duplication block 2025 (see FIG. 20) that is included within formatter block 1903 of FIG. 19. Input register 3100 receives a vector input from element duplication block 2024 shown in FIG. 20. Input register 3100 includes 64 bytes arranged in 64 1-byte blocks byte0 to byte63. Note that bytes byte0 to byte63 are each equal in length to the minimum of ELEM BYTES. A set of multiplexers 3101 to 3163 couple input bytes from source register 3100 to output register 3170. Each respective multiplexer 3101 to 3163 supplies an input to a respective byte1 to byte63 of output register 3170. Not all input bytes byte0 to byte63 of input register 3100 are coupled to every multiplexer 3101 to 3163. Note there is no multiplexer supplying byte0 of output register 3170. In this example, byte0 of output register 3170 is supplied by byte0 of input register 3100.

[00216] Multiplexers 3101 to 3163 are controlled by multiplexer control encoder 3180. Multiplexer control encoder 3180 receives ELEM BYTES, VECLEN and GRDUP input signals and generates respective control signals for multiplexers 3101 to 3163. ELEM BYTES and ELDUP are supplied to multiplexer control encoder 3180 to check to see that VECLEN is at least as great as the product of ELEM BYTES and ELDUP. In operation, multiplexer control encoder 3180 controls multiplexers 3101 to 3163 to transfer least significant bits equal in number to VECLEN from input register 3100 to output register 3170. If GRDUP = 0 indicating group duplication disabled, then multiplexer control encoder 3180 controls the remaining multiplexers 3101 to 3163 to transfer zeros to all bits in the remaining most significant lanes of output register 3170. If GRDUP = 1 indicating group duplication enabled, then multiplexer control encoder 3180 controls the remaining multiplexers 3101 to 3163 to duplicate the VECLEN number of least significant bits of input register 3100 into the most significant lanes of output register 3170. This control is similar to the element duplication control described above and fills the output register 3170 with the first vector. For the next vector, data within input register 3100 is rotated down by VECLEN, discarding the previous VECLEN least significant bits.
The rate of data movement in formatter 1903 (FIG. 19) is set by the rate of consumption of data by processing unit core 110 (FIG. 1) via stream read and advance instructions described below. The group duplication formatting repeats as long as the stream includes additional data elements.

[00217] Element duplication (ELDUP) and group duplication (GRDUP) are independent. Note these features include independent specification and parameter setting. Thus, element duplication and group duplication can be used together or separately. Because of how these are specified, element duplication permits overflow to the next vector while group duplication does not.

[00218] Referring again to FIG. 30, the DECIM field 3007 controls data element decimation of the corresponding stream. Streaming engine 125 deletes data elements from the stream upon storage in respective stream head registers 2818/2828 for presentation to the requesting functional unit. Decimation removes whole data elements, not sub-elements. The DECIM field 3007 is defined as listed in Table 21.

Table 21

[00219] If DECIM field 3007 equals 00, then no decimation occurs. The data elements are passed to the corresponding stream head registers 2818/2828 without change. If DECIM field 3007 equals 01, then 2:1 decimation occurs. Streaming engine 125 removes odd number elements from the data stream upon storage in the stream head registers 2818/2828. Limitations in the formatting network require 2:1 decimation to be employed with data promotion by at least 2x (PROMOTE cannot be 000), ICNT0 must be a multiple of 2, and the total vector length (VECLEN) must be large enough to hold a single promoted, duplicated element. For transposed streams (TRANSPOSE ≠ 0), the transpose granule must be at least twice the element size in bytes before promotion. If DECIM field 3007 equals 10, then 4:1 decimation occurs. Streaming engine 125 retains every fourth data element removing three elements from the data stream upon storage in the stream head registers 2818/2828. Limitations in the formatting network require 4:1 decimation to be employed with data promotion by at least 4x (PROMOTE cannot be 000, 001 or 101), ICNT0 must be a multiple of 4 and the total vector length (VECLEN) must be large enough to hold a single promoted, duplicated element. For transposed streams (TRANSPOSE ≠ 0), in one example, decimation removes columns, and does not remove rows. Thus, in such cases, the transpose granule must be at least twice the element size in bytes before promotion for 2:1 decimation (GRANULE ≥ 2 × ELEM BYTES) and at least four times the element size in bytes before promotion for 4:1 decimation (GRANULE ≥ 4 × ELEM BYTES).

[00220] The THROTTLE field 3008 controls how aggressively the streaming engine fetches ahead of processing unit core 110. The coding of the two bits of this field is defined as shown in Table 22.

Table 22

[00221] THROTTLE does not change the meaning of the stream and serves only as a hint. The streaming engine can ignore this field. Programs should not rely on the specific throttle behavior for program correctness, because the architecture does not specify the precise throttle behavior. THROTTLE allows programmers to provide hints to the hardware about the program behavior. By default, the streaming engine attempts to get as far ahead of processing unit core 110 as possible to hide as much latency as possible (equivalent to THROTTLE = 11), while providing full stream throughput to processing unit core 110. While some applications need this level of throughput, such throughput can cause bad system level behavior for others. For example, the streaming engine discards all fetched data across context switches. Therefore, aggressive fetch-ahead can lead to wasted bandwidth in a system with large numbers of context switches.

[00222] The DIMFMT field 3009 defines which of the loop count fields ICNT0 2901, ICNT1 2902, ICNT2 2903, ICNT3 2904, ICNT4 2905 and ICNT5 2906, of the loop dimension fields DIM1 2911, DIM2 2912, DIM3 2913, DIM4 2914 and DIM5 2915 and of the addressing mode fields AM0 3013, AM1 3014, AM2 3015, AM3 3016, AM4 3017 and AM5 3018 (part of FLAGS field 2921) of the stream template register 2900 are active for the particular stream.
Table 23 lists the active loops for various values of the DIMFMT field 3009. Each active loop count must be at least 1 and the outer active loop count must be greater than 1.

Table 23

[00223] The DIR bit 3010 determines the direction of fetch of the inner loop (Loop0). If the DIR bit 3010 is 0, Loop0 fetches are in the forward direction toward increasing addresses. If the DIR bit 3010 is 1, Loop0 fetches are in the backward direction toward decreasing addresses. The fetch direction of other loops is determined by the sign of the corresponding loop dimension DIM1, DIM2, DIM3, DIM4 and DIM5.

[00224] The CBK0 field 3011 and the CBK1 field 3012 control the circular block size upon selection of circular addressing. The manner of determining the circular block size is described herein.

[00225] The AM0 field 3013, AM1 field 3014, AM2 field 3015, AM3 field 3016, AM4 field 3017 and AM5 field 3018 control the addressing mode of a corresponding loop, thus permitting the addressing mode to be independently specified for each loop. Each of AM0 field 3013, AM1 field 3014, AM2 field 3015, AM3 field 3016, AM4 field 3017 and AM5 field 3018 is three bits and is decoded as listed in Table 24.

Table 24

[00226] In linear addressing, the address advances according to the address arithmetic, whether
forward or reverse. In circular addressing, the address remains within a defined address block. Upon reaching the end of the circular address block, the address wraps around to the beginning of the block. Circular addressing blocks are limited to 2^N addresses, where N is an integer. Circular address arithmetic can operate by cutting the carry chain between bits and not allowing a selected number of most significant bits to change. Thus, arithmetic beyond the end of the circular block changes only the least significant bits. The block size is set as listed in Table 25.

Table 25

[00227] In this example, the circular block size is set by the number encoded by CBK0 (first circular address mode 01) or the number encoded by CBK0+CBK1+1 (second circular address mode 10). For example, in the first circular address mode, the circular address block size can range from 512 bytes to 16 Mbytes. For the second circular address mode, the circular address block size can range from 1 Kbytes to 64 Gbytes. Thus, the encoded block size is 2^(B+9) bytes, where B is the encoded block number, which is CBK0 for the first block size (AMx of 01) and CBK0+CBK1+1 for the second block size (AMx of 10).
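By way of illustration only, the carry-chain cutting just described can be modeled with the following C sketch; the function name circular_advance and its argument layout are assumptions for exposition, not part of the described hardware.

    #include <stdint.h>

    /* Model of circular address arithmetic: the block size is 2^(B+9)
       bytes, so the low (B+9) bits advance and wrap while the selected
       most significant bits are held fixed. */
    uint64_t circular_advance(uint64_t addr, int64_t step, unsigned b)
    {
        uint64_t mask = (1ULL << (b + 9)) - 1;               /* block size - 1 */
        return (addr & ~mask) | ((addr + (uint64_t)step) & mask);
    }

For B = 0 the block is 512 bytes; an address stepped past the end of the block wraps to its beginning while the upper address bits are unchanged.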
[00228] The processing unit 110 (FIG. 1) exposes the streaming engine 125 (FIG. 28) to programs through a small number of instructions and specialized registers. Programs start and end streams with SEOPEN and SECLOSE. SEOPEN opens a new stream and the stream remains open until terminated explicitly by SECLOSE or replaced by a new stream with SEOPEN. The SEOPEN instruction specifies a stream number indicating opening stream 0 or stream 1. The SEOPEN instruction specifies a data register storing the start address of the stream. The SEOPEN instruction also specifies a stream template register that stores the stream template as described above. The arguments of the SEOPEN instruction are listed in Table 26.

Table 26

[00229] The stream start address register is a register in general scalar register file 211 (FIG. 2) in this example. The SEOPEN instruction can specify the stream start address register via src1 field 1305 (FIG. 13) of example instruction coding 1300 (FIG. 13). The SEOPEN instruction specifies stream 0 or stream 1 in the opcode. The stream template register is a vector register in general vector register file 221 in this example. The SEOPEN instruction can specify the stream template register via src2/cst field 1304 (FIG. 13). If the specified stream is active, the SEOPEN instruction closes the prior stream and replaces the stream with the specified stream.

[00230] SECLOSE explicitly marks a stream inactive, flushing any outstanding activity. Any further references to the stream trigger exceptions. SECLOSE also allows a program to prematurely terminate one or both streams.

[00231] An SESAVE instruction saves the state of a stream by capturing sufficient state information of a specified stream to restart that stream in the future. An SERSTR instruction restores a previously saved stream. An SESAVE instruction saves the stream metadata and does not save any of the stream data. The stream re-fetches stream data in response to an SERSTR instruction.

[00232] Each stream can be in one of three states: inactive, active, or frozen. After reset, both streams are in the inactive state. Opening a stream moves the stream to the active state. Closing the stream returns the stream to the inactive state. In the absence of interrupts and exceptions, streams ordinarily do not make other state transitions. To account for interrupts, the streaming engine adds a third state: frozen. The frozen state represents an interrupted active stream.

[00233] In this example, four bits, two bits per stream, define the state of both streams. One bit per stream resides within the streaming engine, and the other bit resides within the processor core 110. The streaming engine internally tracks whether each stream holds a parameter set associated with an active stream. This bit distinguishes an inactive stream from a not-inactive (active or frozen) stream. The processor core 110 separately tracks the state of each stream with a dedicated bit per stream in the Task State Register (TSR): TSR.SE0 for stream 0, and TSR.SE1 for stream 1. These bits distinguish between active and inactive streams.
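For exposition, one plausible reading of this two-bit-per-stream encoding is sketched below in C; the names StreamState and stream_state, and the exact mapping of the frozen case, are assumptions rather than architecturally specified.

    #include <stdbool.h>

    /* Hypothetical model: one bit inside the streaming engine records
       whether the stream holds a parameter set; the TSR.SEx bit records
       whether the stream is active from the processor's view. */
    typedef enum { STREAM_INACTIVE, STREAM_ACTIVE, STREAM_FROZEN } StreamState;

    StreamState stream_state(bool se_holds_params, bool tsr_se_active)
    {
        if (!se_holds_params)
            return STREAM_INACTIVE;  /* no parameter set held */
        /* Parameters held: the TSR bit separates a running stream from
           one that was interrupted and left frozen. */
        return tsr_se_active ? STREAM_ACTIVE : STREAM_FROZEN;
    }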
[00234] Opening a stream moves the stream to the active state. Closing a stream moves the stream to the inactive state. If a program opens a new stream over a frozen stream, the new stream replaces the old stream and the streaming engine discards the contents of the previous stream. The streaming engine supports opening a new stream on a currently active stream. The streaming engine discards the contents of the previous stream, flushes the pipeline, and starts fetching data for the newly opened stream. Data is presented to the processor once the data has returned. If a program closes an already closed stream, nothing happens. If a program closes an open or frozen stream, the streaming engine discards all state related to the stream, clears the internal stream-active bit, and clears the counter, tag and address registers. Closing a stream serves two purposes. Closing an active stream allows a program to specifically state that the stream and the resources associated with the stream are no longer needed. Closing a frozen stream also allows context switching code to clear the state of the frozen stream, so that other tasks do not see it.

[00235] As noted above, there are circumstances when some data within a stream holding register 2818 or 2828 is not valid. As described above, such a state can occur at the end of an inner loop when the number of stream elements is less than the respective stream holding register 2818/2828 size, or at the end of an inner loop when the number of stream elements remaining is less than the lanes defined by VECLEN. For times not at the end of an inner loop, if VECLEN is less than the width of stream holding register 2818/2828 and GRDUP is disabled, then lanes in stream holding register 2818/2828 in excess of VECLEN are invalid.

[00236] Referring again to FIG. 28, in this example streaming engine 125 further includes valid registers 2819 and 2829. Valid register 2819 indicates the valid lanes in stream head register 2818. Valid register 2829 indicates the valid lanes in stream head register 2828. Respective valid registers 2819/2829 include one bit for each minimum ELEM BYTES lane within the corresponding stream head register 2818/2828. In this example, the minimum ELEM BYTES is 1 byte. The preferred data path width of processor 100 and the data length of stream head registers 2818/2828 is 64 bytes (512 bits). Valid registers 2819/2829 accordingly have a data width of 64 bits. Each bit in valid registers 2819/2829 indicates whether a corresponding byte in stream head registers 2818/2828 is valid. In this example, a 0 indicates the corresponding byte within the stream head register is invalid, and a 1 indicates the corresponding byte is valid.
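As a concrete illustration of this per-byte encoding, the following C sketch builds such a 64-bit valid mask; valid_byte_mask is a hypothetical helper, not an instruction of the architecture.

    #include <stdint.h>

    /* Build the per-byte valid mask for a 64-byte stream head register:
       bit i is 1 when byte i holds valid stream data. Valid lanes are
       allocated starting from the least significant bytes. */
    uint64_t valid_byte_mask(unsigned valid_bytes)
    {
        if (valid_bytes >= 64)
            return ~0ULL;                  /* all 64 byte lanes valid */
        return (1ULL << valid_bytes) - 1;  /* low lanes valid, rest 0 */
    }

For example, valid_byte_mask(40) marks the 40 least significant bytes valid, as for a vector whose final 24 bytes are invalid.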
[00237] In this example, upon reading a respective one of the stream head registers 2818/2828 and transferring of data to the requesting functional unit, the invalid/valid data in the respective valid register 2819/2829 is automatically transferred to a data register within predicate register file 234 (FIG. 2) corresponding to the particular stream. In this example, the valid data for stream 0 is stored in predicate register P0 and the valid data for stream 1 is stored in predicate register P1.

[00238] The valid data stored in the predicate register file 234 can be used in a variety of ways. The functional unit can combine the vector stream data with another set of vectors and then store the combined data to memory using the valid data indications as a mask. This enables the same process to be used for the end-of-loop data as is used for cases where all the lanes are valid, which avoids storing invalid data. The valid indication stored in predicate register file 234 can be used as a mask or an operand in other processes. P unit 246 (FIG. 2) can have an instruction to count the number of 1's in a predicate register (BITCNT), which can be used to determine the count of valid data elements from a predicate register.

[00239] FIG. 32 illustrates example hardware 3200 to produce the valid/invalid indications stored in the valid register 2819 (FIG. 28). FIG. 32 illustrates hardware for stream 0; stream 1 includes corresponding hardware. Hardware 3200 operates to generate one valid word each time data is updated in stream head register 2818 (FIG. 28). A first input ELTYPE is supplied to decoder 3201. Decoder 3201 produces an output TOTAL ELEMENT SIZE corresponding to the minimum data size based upon the element size ELEM BYTES and whether the elements are real numbers or complex numbers. The meanings of various codings of ELTYPE are shown in Table 9. Table 27 shows an example output of decoder 3201 in bytes for the various ELTYPE codings. Note Table 9 lists bits and Table 27 lists bytes. As shown in Table 27, TOTAL ELEMENT SIZE is 1, 2, 4 or 8 bytes if the element is real and 2, 4, 8 or 16 bytes if the element is complex.

Table 27

[00240] A second input PROMOTE is supplied to decoder 3202. Decoder 3202 produces an output promotion factor corresponding to the PROMOTE input. The meanings of various codings of PROMOTE are shown in Table 28, which shows an example output of decoder 3202 in bytes for the various PROMOTE codings. The difference in extension type (zero extension or sign extension) is not relevant to decoder 3202.

Table 28

[00241] The outputs of decoders 3201 and 3202 are supplied to multiplier 3203. The product produced by multiplier 3203 is the lane size corresponding to the TOTAL ELEMENT SIZE and the promotion factor. Because the promotion factor is an integral power of 2 (2^N), the multiplication can be achieved by corresponding shifting of the TOTAL ELEMENT SIZE, e.g., no shift for a promotion factor of 1, a one-bit shift for a promotion factor of 2, a two-bit shift for a promotion factor of 4, and a three-bit shift for a promotion factor of 8.

[00242] NUMBER OF LANES unit 3204 receives the vector length VECLEN and the LANE SIZE and generates the NUMBER OF LANES. Table 29 shows an example decoding of the number of lanes for lane size in bytes and the vector length VECLEN.

Table 29

[00243] As previously stated, VECLEN must be greater than or equal to the product of the element size and the duplication factor.
As shown in Table 29, VECLEN must also be greater than or equal to the product of the element size and the promotion factor. This means that VECLEN must be large enough to guarantee that an element cannot be separated from its extension produced by type promotion block 2022 (FIG. 20). The cells below the diagonal in Table 29 are marked to indicate an unpermitted combination of parameters.

[00244] The NUMBER OF LANES output of unit 3204 serves as one input to LANE/REMAINING ELEMENTS CONTROL WORD unit 3211. A second input comes from multiplexer 3212. Multiplexer 3212 receives a Loop0 input and a Loop1 input. The Loop0 input and the Loop1 input represent the number of remaining elements in the current iteration of the corresponding loop.
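The arithmetic performed by decoders 3201 and 3202, multiplier 3203 and NUMBER OF LANES unit 3204 can be sketched in C as follows; the function names are illustrative, and all sizes are in bytes per Tables 27-29.

    /* LANE SIZE = TOTAL ELEMENT SIZE x promotion factor. Because the
       promotion factor is a power of two (1x, 2x, 4x, 8x), the multiply
       reduces to a left shift by log2 of the factor. */
    unsigned lane_size_bytes(unsigned total_element_size, unsigned promote_log2)
    {
        return total_element_size << promote_log2;
    }

    /* NUMBER OF LANES = how many promoted lanes fit within the VECLEN
       width (all quantities in bytes). */
    unsigned number_of_lanes(unsigned veclen_bytes, unsigned lane_bytes)
    {
        return veclen_bytes / lane_bytes;
    }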
[00245] FIG. 33 illustrates a partial schematic view of address generator 2811 shown in FIG. 28. Address generator 2811 forms an address for fetching the next element in the defined stream of the corresponding streaming engine. Start address register 3301 stores a start address of the data stream. As previously described above, in this example, start address register 3301 is a scalar register in global scalar register file 211 designated by the SEOPEN instruction that opened the corresponding stream. The start address can be copied from the specified scalar register and stored locally at the respective address generator 2811/2821 by control logic included with address generator 2811. The first loop of the stream employs Loop0 count register 3311, adder 3312, multiplier 3313 and comparator 3314. Loop0 count register 3311 stores the working copy of the iteration count of the first loop (Loop0). For each iteration of Loop0, adder 3312, as triggered by the Next Address signal, adds 1 to the loop count, which is stored back in Loop0 count register 3311. Multiplier 3313 multiplies the current loop count by the quantity ELEM BYTES. ELEM BYTES is the size of each data element in Loop0 in bytes. Loop0 traverses data elements physically contiguous in memory with an iteration step size of ELEM BYTES.

[00246] Comparator 3314 compares the count stored in Loop0 count register 3311 (after incrementing by adder 3312) with the value of ICNT0 2901 (FIG. 29) from the corresponding stream template register 2900 (FIG. 29). When the output of adder 3312 equals the value of ICNT0 2901 of the stream template register 2900, an iteration of Loop0 is complete. Comparator 3314 generates an active Loop0 End signal. Loop0 count register 3311 is reset to 0 and an iteration of the next higher loop, in this case Loop1, is triggered.

[00247] Circuits for the higher loops (Loop1, Loop2, Loop3, Loop4 and Loop5) are similar to that illustrated in FIG. 33. Each loop includes a respective working loop count register, adder, multiplier and comparator. The adder of each loop is triggered by the loop end signal of the prior loop. The second input to each multiplier is the corresponding dimension DIM1, DIM2, DIM3, DIM4 and DIM5 from the corresponding stream template. The comparator of each loop compares the working loop register count with the corresponding iteration value ICNT1, ICNT2, ICNT3, ICNT4 and ICNT5 of the corresponding stream template register 2900. A loop end signal generates an iteration of the next higher loop. A loop end signal from Loop5 ends the stream.

[00248] FIG. 33 also illustrates the generation of Loop0 count. Loop0 count equals the
updated data stored in the corresponding working count register 3311. Loop0 count is updated on each change of working Loop0 count register 3311. The loop counts for the higher loops (Loop1, Loop2, Loop3, Loop4 and Loop5) are similarly generated.

[00249] FIG. 33 also illustrates the generation of Loop0 address. Loop0 address equals the data output from multiplier 3313. Loop0 address is updated on each change of working Loop0 count register 3311. Similar circuits for Loop1, Loop2, Loop3, Loop4 and Loop5 produce corresponding loop addresses. In this example, Loop0 count register 3311 and the other loop count registers are implemented as count-up registers. In another example, initialization and comparisons operate as count-down circuits.

[00250] Referring again to FIG. 32, the value of the loop down count, such as Loop0↓, is given by expression (2):

Loopx↓ = ICNTx - Loopx (2)

Accordingly, the loop down count is the difference between the initial iteration count specified in the stream template register and the loop up count produced as illustrated in FIG. 33.

[00251] LANE/REMAINING ELEMENTS CONTROL WORD unit 3211 (FIG. 32) generates a control word 3213 based upon the number of lanes from NUMBER OF LANES unit 3204 and the loop down count selected by multiplexer 3212. The control input to multiplexer 3212 is the TRANSPOSE signal from field 3002 of FIG. 30. If TRANSPOSE is disabled ("000"), multiplexer 3212 selects the Loop0 down count Loop0↓. For all other legal values of TRANSPOSE ("001", "010", "011", "100", "101" and "110"), multiplexer 3212 selects the Loop1 down count Loop1↓. The streaming engine maps the innermost dimension to consecutive lanes in a vector. For normal streams this is Loop0. For transposed streams, this is Loop1, because transposition exchanges the two dimensions.

[00252] LANE/REMAINING ELEMENTS CONTROL WORD unit 3211 generates control word 3213 as follows. Control word 3213 has a number of bits equal to the number of lanes from unit 3204. If the remaining count of elements of the selected loop is greater than or equal to the number of lanes, then all lanes are valid. For this case, control word 3213 is all ones, indicating that all lanes within the vector length VECLEN are valid. If the remaining count of elements of the selected loop is nonzero and less than the number of lanes, then some lanes are valid and some are invalid. According to the lane allocation described above in conjunction with FIGS. 21 and 22, stream elements are allocated lanes starting with the least significant lanes.
Under these circumstances, control word 3213 includes a number of least significant bits set to one equal to the value of the selected loop down count. All other bits of control word 3213 are set to zero. In the example illustrated in FIG. 32, the number of lanes equals eight and there are five valid (1) least significant bits followed by three invalid (0) most significant bits, which corresponds to a loop having five elements remaining in the final iteration.

[00253] Control word expansion unit 3214 expands the control word 3213 based upon the magnitude of LANE SIZE. The expanded control word includes one bit for each minimally sized lane. In this example, the minimum stream element size, and thus the minimum lane size, is one byte (8 bits). In this example, the size of holding registers 2818/2828 equals the vector size of 64 bytes (512 bits). Thus, the expanded control word has 64 bits, one bit for each byte of stream holding registers 2818/2828. This expanded control word fills the least significant bits of the corresponding valid register 2819 and 2829 (FIG. 28).

[00254] For the case when VECLEN equals the vector length, the description is complete. The expanded control word includes bits for all places within respective valid register 2819/2829. There are some additional considerations when VECLEN does not equal the vector length. When VECLEN does not equal the vector length, the expanded control word does not have enough bits to fill the corresponding valid register 2819/2829. As illustrated in FIG. 32, the expanded control word fills the least significant bits of the corresponding valid register 2819/2829, thus providing the valid/invalid bits for lanes within the VECLEN width. Another mechanism is provided for lanes beyond the VECLEN width up to the data width of stream head register 2818.

[00255] Referring still to FIG. 32, multiplexer 3215 and group duplicate unit 3216 are illustrated to provide the needed additional valid/invalid bits. Referring to the description of VECLEN, if group duplication is not enabled (GRDUP=0), then the excess lanes are not valid. A first input of multiplexer 3215 is an INVALID 0 signal that includes multiple bits equal in number to VECLEN. When GRDUP=0, multiplexer 3215 selects this input. Group duplicate unit 3216 duplicates this input to all excess lanes of stream head register 2818. Thus, the most significant bits of valid register 2819 are set to zero, indicating the corresponding bytes of stream head register 2818 are invalid. This occurs for vectors 1-8 of the example shown in Table 15, vectors 1-15 of the example shown in Table 16, and vectors 1-29 of the example shown in Table 17.
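The combined behavior of control word unit 3211 and expansion unit 3214, together with the group duplication fill discussed below, can be approximated by the following C sketch; the helper names are illustrative, and the sketch assumes a register width of at most 64 bytes.

    #include <stdint.h>

    /* Control word 3213: one bit per lane, with min(remaining, lanes)
       least significant bits set to one. */
    uint64_t control_word(unsigned remaining, unsigned lanes)
    {
        unsigned valid = remaining < lanes ? remaining : lanes;
        return valid >= 64 ? ~0ULL : (1ULL << valid) - 1;
    }

    /* Expand each lane bit to lane_bytes byte bits (unit 3214). When
       GRDUP=1 the VECLEN-wide pattern is replicated into the excess
       lanes; when GRDUP=0 the excess lanes stay zero (invalid). */
    uint64_t expanded_valid(uint64_t cw, unsigned lanes, unsigned lane_bytes,
                            unsigned reg_bytes, int grdup)
    {
        uint64_t pattern = 0;
        unsigned veclen_bytes = lanes * lane_bytes;
        for (unsigned lane = 0; lane < lanes; lane++)
            if (cw & (1ULL << lane))
                pattern |= ((1ULL << lane_bytes) - 1) << (lane * lane_bytes);
        uint64_t out = pattern;
        if (grdup)
            for (unsigned off = veclen_bytes; off < reg_bytes; off += veclen_bytes)
                out |= pattern << off;  /* copy pattern into excess lanes */
        return out;
    }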
[00256] In another example, mux 3215 and group duplicate block 3216 are replaced with group duplicate logic that is similar to the group duplicate logic 2025 illustrated in FIG. 31.

[00257] As previously described, if group duplication is enabled (GRDUP=1), then the excess lanes of stream head register 2818 (FIG. 28) are filled with copies of the least significant bits. A second input of multiplexer 3215 is the expanded control word from control word expansion unit 3214. When GRDUP=1, multiplexer 3215 selects this input. Group duplicate unit 3216 duplicates this input to all excess lanes of stream head register 2818.

[00258] There are two possible outcomes. In the first outcome, in most cases, all the lanes within VECLEN are valid and the bits from control word expansion unit 3214 are all ones. This occurs for vectors 1-7 of the group duplication example shown in Table 18 and vectors 1-14 of the group duplication example shown in Table 19. Under these conditions, all bits of the expanded control word from control word expansion unit 3214 are one and all lanes of stream head register 2818 are valid. Group duplicate unit 3216 thus fills all the excess lanes with ones. In the other outcome, the number of remaining stream data elements is less than the number of lanes within VECLEN. This occurs for vector 8 in the group duplication example shown in Table 18 and vector 15 in the group duplication example shown in Table 19. Under these conditions, some lanes within VECLEN are valid and some are invalid. Group duplicate unit 3216 fills the excess lanes with bits having the same pattern as the expanded control word bits. In either case, the excess lanes are filled corresponding to the expanded control bits.

[00259] Referring still to FIG. 32, a boundary 3217 is illustrated between the least significant bits and the most significant bits. The location of this boundary is set by the size of VECLEN relative to the size of stream head register 2818.

[00260] FIG. 34 is a partial schematic diagram 3400 illustrating the stream input operand coding described above. FIG. 34 illustrates a portion of instruction decoder 113 (see FIG. 1) decoding src1 field 1305 of one instruction to control the corresponding src1 input of functional unit 3420. These same or similar circuits are duplicated for src2/cst field 1304 of an instruction controlling functional unit 3420. In addition, these circuits are duplicated for each instruction within an execute packet capable of employing stream data as an operand that are dispatched simultaneously.

[00261] Instruction decoder 113 receives bits 13-17 of src1 field 1305 of an instruction. The opcode field (bits 3-12 for all instructions and also bits 28-31 for unconditional instructions)
unambiguously specifies a corresponding functional unit 3420 and the function to be performed. In this example, functional unit 3420 can be L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244 or C unit 245. The relevant part of instruction decoder 113 illustrated in FIG. 34 decodes src1 bit field 1305. Sub-decoder 3411 determines whether src1 bit field 1305 is in the range from 00000 to 01111. If this is the case, sub-decoder 3411 supplies a corresponding register number to global vector register file 231. In this example, the register number is the four least significant bits of src1 bit field 1305. Global vector register file 231 recalls data stored in the register corresponding to the register number and supplies the data to the src1 input of functional unit 3420.

[00262] Sub-decoder 3412 determines whether src1 bit field 1305 is in the range from 10000 to 10111. If this is the case, sub-decoder 3412 supplies a corresponding register number to the corresponding local vector register file. If the instruction is directed to L2 unit 241 or S2 unit 242, the corresponding local vector register file is local vector register file 232. If the instruction is directed to M2 unit 243, N2 unit 244 or C unit 245, the corresponding local vector register file is local vector register file 233. In this example, the register number is the three least significant bits of src1 bit field 1305. The corresponding local vector register file 232/233 recalls data stored in the register corresponding to the register number and supplies the data to the src1 input of functional unit 3420.

[00263] Sub-decoder 3413 determines whether src1 bit field 1305 is 11100. If this is the case, sub-decoder 3413 supplies a stream 0 read signal to streaming engine 125. Streaming engine 125 then supplies stream 0 data stored in holding register 2818 to the src1 input of functional unit 3420.

[00264] Sub-decoder 3414 determines whether src1 bit field 1305 is 11101. If this is the case, sub-decoder 3414 supplies a stream 0 read signal to streaming engine 125. Streaming engine 125 then supplies stream 0 data stored in holding register 2818 to the src1 input of functional unit 3420. Sub-decoder 3414 also supplies an advance signal to stream 0. As previously described, streaming engine 125 advances to store the next sequential vector of data elements of stream 0 in holding register 2818.

[00265] Supply of a stream 0 read signal to streaming engine 125 by either sub-decoder 3413 or sub-decoder 3414 triggers another data movement. Upon such a stream 0 read signal, streaming engine 125 supplies the data stored in valid register 2819 to predicate register file 234
for storage. In this example, this is a predetermined data register within predicate register file 234. In this example, data register P0 corresponds to stream 0.

[00266] Sub-decoder 3415 determines whether src1 bit field 1305 is 11110. If this is the case, sub-decoder 3415 supplies a stream 1 read signal to streaming engine 125. Streaming engine 125 then supplies stream 1 data stored in holding register 2828 to the src1 input of functional unit 3420.

[00267] Sub-decoder 3416 determines whether src1 bit field 1305 is 11111. If this is the case, sub-decoder 3416 supplies a stream 1 read signal to streaming engine 125. Streaming engine 125 then supplies stream 1 data stored in holding register 2828 to the src1 input of functional unit 3420. Sub-decoder 3416 also supplies an advance signal to stream 1. As previously described, streaming engine 125 advances to store the next sequential vector of data elements of stream 1 in holding register 2828.

[00268] Supply of a stream 1 read signal to streaming engine 125 by either sub-decoder 3415 or sub-decoder 3416 triggers another data movement. Upon such a stream 1 read signal, streaming engine 125 supplies the data stored in valid register 2829 to predicate register file 234 for storage. In this example, this is a predetermined data register within predicate register file 234. In this example, data register P1 corresponds to stream 1.

[00269] Similar circuits are used to select data supplied to the src2 input of functional unit 3420 in response to the bit coding of src2/cst field 1304. The src2 input of functional unit 3420 can be supplied with a constant input in a manner described above. If instruction decoder 113 generates a read signal for stream 0 from either src1 field 1305 or src2/cst field 1304, streaming engine 125 supplies the data stored in valid register 2819 to predicate register P0 of predicate register file 234 for storage. If instruction decoder 113 generates a read signal for stream 1 from either src1 field 1305 or src2/cst field 1304, streaming engine 125 supplies the data stored in valid register 2829 to predicate register P1 of predicate register file 234 for storage.

[00270] The exact number of instruction bits devoted to operand specification and the number of data registers and streams are design choices. In particular, the specification of a single global vector register file and omission of local vector register files is feasible. This example employs a bit coding of an input operand selection field to designate a stream read and another bit coding to designate a stream read and advancing the stream.
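The sub-decoder behavior of FIG. 34 amounts to a decode of the 5-bit src1 field, sketched below in C; the enum and function names are illustrative, and treating the remaining codings as reserved is an assumption.

    /* Decode the 5-bit src1 field (bits 13-17) per FIG. 34. */
    typedef enum {
        SRC_GLOBAL_VREG,   /* 00000-01111: global vector register file 231 */
        SRC_LOCAL_VREG,    /* 10000-10111: local vector register file 232/233 */
        SRC_STREAM0_READ,  /* 11100: read stream 0 holding register 2818 */
        SRC_STREAM0_ADV,   /* 11101: read stream 0, then advance */
        SRC_STREAM1_READ,  /* 11110: read stream 1 holding register 2828 */
        SRC_STREAM1_ADV,   /* 11111: read stream 1, then advance */
        SRC_RESERVED       /* remaining codings (assumed reserved) */
    } Src1Select;

    Src1Select decode_src1(unsigned field5)
    {
        if (field5 <= 0x0F) return SRC_GLOBAL_VREG;   /* reg = field5 & 0xF */
        if (field5 <= 0x17) return SRC_LOCAL_VREG;    /* reg = field5 & 0x7 */
        switch (field5) {
        case 0x1C: return SRC_STREAM0_READ;
        case 0x1D: return SRC_STREAM0_ADV;
        case 0x1E: return SRC_STREAM1_READ;
        case 0x1F: return SRC_STREAM1_ADV;
        default:   return SRC_RESERVED;
        }
    }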
[00271] The process illustrated in FIG. 34 automatically transfers valid data into predicate register file 234 each time stream data is read. The transferred valid data can then be used by P unit 246 for further calculation of metadata. The transferred valid data can also be used as a mask or as an operand for other operations by one or more of the vector data path side B 116 functional units, including L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244 and C unit 245. There are numerous feasible compound logic operations employing this stream valid data.

[00272] FIG. 35 is a partial schematic diagram 3500 illustrating another example configuration for selecting operand sources. In this example, the respective stream valid register 2819/2829 need not be automatically loaded to a predetermined register in predicate register file 234. Instead, an explicit instruction to P unit 246 is used to move the data. FIG. 35 illustrates a portion of instruction decoder 113 (see FIG. 1) decoding src1 field 1305 of one instruction to control a corresponding src1 input of P unit 246. These same or similar circuits can be duplicated for src2/cst field 1304 (FIG. 13) of an instruction controlling P unit 246.

[00273] Instruction decoder 113 receives bits 13-17 of src1 field 1305 of an instruction. The opcode field (bits 3-12 for all instructions and also bits 28-31 for unconditional instructions) unambiguously specifies P unit 246 and the function to be performed. The relevant part of instruction decoder 113 illustrated in FIG. 35 decodes src1 bit field 1305. Sub-decoder 3511 determines whether src1 bit field 1305 is in the range 00000 to 01111. If this is the case, sub-decoder 3511 supplies a corresponding register number to global vector register file 231. In this example, the register number is the four least significant bits of src1 bit field 1305. Global vector register file 231 recalls data stored in the register corresponding to the register number and supplies the data to the src1 input of P unit 246.

[00274] Sub-decoder 3512 determines whether src1 bit field 1305 is in the range 10000 to 10111. If this is the case, sub-decoder 3512 supplies a decoded register number to the predicate register file 234. In this example, the register number is the three least significant bits of src1 bit field 1305. The predicate register file 234 recalls data stored in the register corresponding to the register number and supplies the data to the src1 input of P unit 246.

[00275] Sub-decoder 3513 determines whether src1 bit field 1305 is 11100. If this is the case, sub-decoder 3513 supplies a stream 0 valid read signal to streaming engine 125. Streaming engine 125 then supplies valid data stored in valid register 2819 to the src1 input of P unit 246.

[00276] Sub-decoder 3514 determines whether src1 bit field 1305 is 11101. If this is the case, sub-decoder 3514 supplies a stream 1 valid read signal to streaming engine 125. Streaming
engine 125 then supplies stream 1 valid data stored in valid register 2829 to the src1 input of P unit 246.

[00277] The P unit 246 instruction employing the stream valid register 2819/2829 as an operand can be any P unit instruction previously described, such as NEG, BITCNT, RMBD, DECIMATE, EXPAND, AND, NAND, OR, NOR, and XOR.

[00278] The special instructions noted above can be limited to P unit 246. Thus, the operations outlined in FIGS. 34 and 35 can be used together. If the functional unit specified by the instruction is L2 unit 241, S2 unit 242, M2 unit 243, N2 unit 244 or C unit 245, then src1 field 1305 is interpreted as outlined with respect to FIG. 34. If the functional unit specified by the instruction is P unit 246, then src1 field 1305 is interpreted as outlined with respect to FIG. 35. Alternatively, the automatic saving of the stream valid register to a predetermined predicate register illustrated in FIG. 34 can be implemented in one example and not implemented in another example.

MATRIX MULTIPLICATION ACCELERATOR

[00279] FIG. 36 is a block diagram of a system 3600 that includes processing unit core 110, streaming engine 125, system memory 3630, and matrix multiplication accelerator (MMA) 3640. MMA 3640 is a tightly coupled matrix multiplication acceleration unit that is a third type of functional unit for processing unit core 110; the other two are the scalar data path 115 (FIG. 1) and the vector data path 116 (FIG. 1). MMA 3640 enables system 3600 to fulfill a large group of image and data processing application requirements. MMA 3640 supports the high computational performance requirements of matrix multiplications. With support from streaming engine 125, processing unit core 110, and L2 memory 130, MMA 3640 efficiently computes the large numbers of MACs (multiply-accumulates) required by various vision algorithms, dense linear algebra, FFT operations and high-level applications including convolutional neural networks (CNNs), structure from motion (SFM), Radar, etc., without increasing the memory bandwidth into the processing unit core 110.

[00280] Generally speaking, MMA 3640 supports matrix multiplication of two matrices. As shown in expression (3), where matrix A is an [n x m] matrix and matrix B is an [m x p] matrix, the matrix multiplication outputs matrix C, where each i,j entry of C is given by expression (3):

Cij = Ai1 x B1j + Ai2 x B2j + ... + Aim x Bmj (3)

that is, by multiplying the entries Aik (across row i of A) by the entries Bkj (down column j of B) and summing over k = 1, 2, ..., m.
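A functional model of expression (3) is the familiar triple loop below; it is a plain C reference for the arithmetic only, whereas MMA 3640 computes an entire row of C per execution cycle in hardware.

    /* Reference matrix multiply per expression (3):
       C[n x p] = A[n x m] x B[m x p]. */
    void matmul(int n, int m, int p,
                const float A[n][m], const float B[m][p], float C[n][p])
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < p; j++) {
                float acc = 0.0f;                /* accumulator for Cij */
                for (int k = 0; k < m; k++)
                    acc += A[i][k] * B[k][j];    /* one multiply-accumulate */
                C[i][j] = acc;
            }
    }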
[00281] FIG. 37 illustrates an example matrix A, matrix B and a resulting matrix C in more detail. Each result element is a summation of products of elements from a row of matrix A and a column of matrix B as defined by expression (3).

[00282] Referring back to FIG. 36, MMA 3640 includes an A[.] buffer 3641 to hold a matrix A, a B[.] buffer 3642 to hold a matrix B, and a C[.] buffer 3643 to collect the result elements for matrix C. MMA 3640 includes an array of individual multipliers and a set of accumulators, as indicated at 3644, to allow an entire row of the C matrix to be calculated in one execution cycle of MMA 3640. In an example, MMA 3640 is equipped to handle a 32 x 32 16-bit fixed/floating point matrix multiply and to produce a 32 x 32 16-bit product matrix in 32 cycles.

[00283] In an example, MMA 3640 is also equipped to multiply two 64 x 64 8-bit matrices by breaking each into four 32 x 32 sub-matrices, multiplying various combinations of the sub-matrices and then combining the results to produce a final 64 x 64 8-bit matrix result. Other examples may be implemented to support larger or smaller matrices having a larger or smaller precision than 8 or 16 bits.

[00284] In order for MMA 3640 to operate correctly on a matrix that is smaller than its native size, such as the 32 x 32 example mentioned above, all of the elements of each input matrix A and B should contain valid data. Unused elements should be set to zero, or some other agreed-upon null value. Streaming engine 125 includes support for nulling unused elements of an array during a stream access without performing additional memory accesses, as described in more detail hereinbelow. In an example, streaming engine 125 includes support for inserting zeros or a selected value, such as a max value or a min value, during a stream access without performing memory accesses to provide the selected values.

STREAMING ENGINE SUPPORT FOR MMA

[00285] Referring still to FIG. 36, streaming engine 125 interfaces with L2 memory 130 via unified memory controller (UMC) 3601. UMC 3601 coordinates access to L2 memory 130 by the various functional units in system 3600. Data routing unit 3602 interfaces with UMC 3601 to route data into the system memory 130. The outputs of stream head registers 2818/2828 (FIG. 28) and valid registers 2819/2829 (FIG. 28) are provided to streaming engine interface 3611 and thence to logical units in processing unit core 110 via respective buses 2840/2841. In this example, S unit 242 can be programmed to pass one stream from SE 125 to MMA A[.]
buffer 3641 and to pass the other stream from SE 125 to B[.] buffer 3642. The output of product matrix C[.] may be directed to L2 memory 130 via UMC 3601, or to a destination register in register file 233 via a destination multiplexer shown in C unit 245.

[00286] FIG. 38 is a more detailed block diagram of a portion of the streaming engine 125 of FIG. 28. Linear streams work for large classes of algorithms, but not all. For example, matrix multiplication presents a unique problem for the streaming engine in that each element in the matrix product contains the result of a vector dot product between a row from the first matrix and a column from the second matrix. Programs may store matrices in row-major or column-major order. Row-major order stores all the elements of a single row contiguously in memory. C and C++ programs often store arrays in row-major order. Column-major order stores all elements of a single column contiguously in memory. FORTRAN programs often store arrays in column-major order. Depending on the programming language, matrices may be stored in the same order as the default array order for the language.

[00287] As a result, only one of the two matrices in a matrix multiplication maps onto the streaming engine's 2-dimensional stream definition. This problem is not unique to the streaming engine. In fact, matrix multiplication's access pattern fits poorly with most general-purpose memory hierarchies. Some software libraries attack this problem by directly transposing one of the two matrices, so that both get accessed row-wise (or column-wise) during multiplication.

TRANSPOSE MODE

[00288] With the streaming engine, programs need not resort to that extreme. The streaming engine supports implicit matrix transposition with a notion of transposed streams. Transposed streams avoid the cost of explicitly transforming the data in memory. Instead of accessing data in strictly consecutive-element order, the streaming engine effectively interchanges the inner two loop dimensions in its traversal order, fetching elements along the second dimension into contiguous vector lanes.

[00289] Transpose mode interchanges the two innermost loop levels. Accordingly, in transpose mode, the two innermost loops ICNT0 and ICNT1 are interchanged. ICNT1 determines the number of rows in each column, where a column is defined by the GRANULE size. ICNT0 is the second dimension in a transpose stream and defines the horizontal width (which may or may not be a multiple of the GRANULE). In this example streaming engine, the maximum row height, ICNT1, must be at least 1 and less than or equal to 16. There are no restrictions on ICNT0 in transpose mode. However, if ICNT0 is not a multiple of the GRANULE size, the streaming engine will pad zeros in the missing elements of each GRANULE.
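One plausible model of this interchanged traversal is sketched below in C; the row_pitch argument (distance between consecutive rows in memory) and the emit callback are assumptions for exposition, not part of the described hardware.

    #include <stddef.h>
    #include <stdint.h>

    /* Transpose-mode traversal: walk ICNT1 rows down each GRANULE-wide
       column before stepping across the ICNT0-wide frame, so elements
       along the second dimension land in contiguous vector lanes. */
    void transpose_traverse(const uint8_t *base, unsigned granule,
                            unsigned icnt0_bytes, unsigned icnt1,
                            unsigned row_pitch,
                            void (*emit)(const uint8_t *col, unsigned len))
    {
        for (unsigned col = 0; col < icnt0_bytes; col += granule)   /* width  */
            for (unsigned row = 0; row < icnt1; row++)              /* height */
                emit(base + (size_t)row * row_pitch + col, granule);
    }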
[00290] Coarse rotator 2835 and data storage 2816 of stream 0 engine 2810 are illustrated in FIG. 38; however, coarse rotator 2836 and data storage 2826 of stream 1 engine 2820 are similar and operate in a similar manner. Transpose mode is performed by stream 0 engine 2810 (FIG. 28) using resources of coarse rotator 2835, data storage unit 2816, and butterfly network 2817. Transpose mode is performed by stream 1 engine 2820 (FIG. 28) using resources of coarse rotator 2836 (FIG. 28), data storage unit 2826 (FIG. 28), and butterfly network 2827 (FIG. 28).

[00291] In this example, data storage 2816 is organized as a register file 3810 with 32 slots of 64 bytes (512 bits). Other examples may provide a larger or a smaller amount of storage without changing the semantics of a stream. Data storage 2816 is organized as eight independent banks that are each eight bytes (64 bits) wide. Each bank includes two write ports and two read ports. In this example, each bank also includes two bits/line for parity protection.

[00292] In transpose mode, the SE organizes the internal storage into sector tiles, and the number of sector tiles depends on what the current vertical count (ICNT1) is set to. This allows the SE to fetch as many rows and columns as possible and to organize and rotate the data coming back from L2 into the sectors. This allows the SE to use both read and write ports per bank when reading and writing the data in transpose mode, so that the data can be rotated and ordered according to its sector.

[00293] In this example, coarse rotator 2835 includes a set of sixteen multiplexors, represented by multiplexors 3806, 3807, 3808. Each multiplexor, such as multiplexor 3806, has sixteen inputs that are each four bytes (32 bits) wide and are connected to receive all 512 bits provided by the L2 interface 2833 on bus 3802. A four-byte output of each multiplexor is coupled to provide data to one half of a respective bank of register file 3810. Each bank of register file 3810 is coupled to receive data from two multiplexors in parallel, such as 3806, 3807, such that data received from the L2 interface via bus 3802 may be manipulated in four-byte elements.

[00294] Reference queue 2815 receives storage allocation and tracking meta-data from storage and allocation logic 2814 (FIG. 28). As each 512-bit line of data is received from L2 via L2 interface 2833 (FIG. 28), control logic 3804 generates control signals to independently control
each of the sixteen multiplexors 3806, 3807, 3808 such that any four-byte data element from the received 512-bit line of data may be stored in either side of each of the eight banks in a selected slot of register file 3810 based on the meta-data provided by reference queue 2815. Coarse rotator 2835 allows each 512-bit line of data to be rotated, shifted, truncated, or duplicated by stream 0 engine 2810 as described in more detail above. Furthermore, matrix transposition may be performed by stream 0 engine 2810 using the coarse rotator 2835, as will be described in more detail below.

[00295] Alignment networks 3820 and 3821 are each similar to coarse rotator 2835. In this example, alignment network 3820 includes a set of sixteen multiplexors, represented by multiplexors 3822, 3823. Each multiplexor, such as multiplexor 3822, has thirty-two inputs that are each four bytes (32 bits) wide and are connected to receive all 512 bits provided by each of the two read ports of the register file 3810. A four-byte output of each multiplexor, such as multiplexor 3822, is coupled to provide data to a respective input of butterfly network 2817. In this manner, multiplexors 3822, 3823 can select sixteen four-byte data elements from register file 3810 to form a 64-byte line of data to provide to butterfly network 2817.

[00296] Similarly, in this example, alignment network 3821 includes a set of sixteen multiplexors, represented by multiplexors 3824, 3825. Each multiplexor, such as multiplexor 3824, has thirty-two inputs that are each four bytes (32 bits) wide and are connected to receive all 512 bits provided by each of the two read ports of the register file 3810. A four-byte output of each multiplexor, such as multiplexor 3824, is coupled to provide data to a respective input of butterfly network 2817. In this manner, multiplexors 3824, 3825 can select sixteen four-byte data elements from register file 3810 to form a 64-byte line of data to provide to butterfly network 2817.

[00297] Control logic 3814 generates control signals to independently control each of the sixteen multiplexors 3822, 3823 in alignment network 3820 and each of the sixteen multiplexors 3824, 3825 in alignment network 3821 such that any four-byte data element retrieved from register file 3810 may be aligned to any four-byte location within the two 64-byte output lines provided to butterfly network 2817, based on the meta-data provided by reference queue 2815.

[00298] Butterfly network 2817 is controlled by stream 0 engine 2810 to further format data retrieved from data storage 2816 before sending the formatted data to processing unit core 110, as described in more detail with regard to FIG. 20 and FIG. 28. Butterfly network 2817 includes
multiple ranks of cross-coupled multiplexor nodes to perform the data formatting.

[00299] In this example, control logic 3804 for coarse rotator 2835 and control logic 3814 for data storage 2816 are implemented as asynchronous Boolean logic that is capable of generating control signals for each of the multiplexors and register file 3810 in parallel based on the contents of the meta-data provided by reference queue 2815.

[00300] In transpose mode, SE0 2810 organizes the internal storage 2816 into sector tiles, and the number of sector tiles depends on what the current vertical count (ICNT1) is set to. This allows SE0 2810 to fetch as many rows and columns as possible and to organize and rotate the data coming back from L2 130 (FIG. 1) into the sectors of data storage 2816. In this example, the register file 3810 includes 32 rows x 64 bytes and is organized as eight independent 4-port banks. This allows SE0 2810 to use both read/write ports per bank when reading and writing the data in transpose mode, since the data is rotated and ordered according to its sector.

[00301] The coarse rotator is controlled by control logic 3804 based on meta-data that is queued up in reference queue 2815 (FIG. 28, FIG. 38). The meta-data is derived from the stream parameters for matrix 3700 by storage allocation tracking logic 2814 (FIG. 28).

INSERTING NULL VECTORS INTO A STREAM

[00302] Referring to FIG. 38, control logic 3814 can control each of the multiplexers of alignment networks 3820, 3821 to force a null value onto selected elements of a stream vector based on the meta-data provided by reference queue 2815. In this manner, an entire stream vector can be set to a null value without fetching any data from system memory. In an example, the null value is "0." In another example, a predefined value or pattern may be used to represent a null value.

[00303] In another example, control logic 3814 can control each of the multiplexers of alignment networks 3820, 3821 to force a selected value onto selected elements of a stream vector based on the meta-data provided by reference queue 2815. In this manner, an entire stream vector can be set to a selected value, such as a min value or a max value, without fetching any data from system memory.
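A minimal sketch of this lane forcing, assuming a 64-byte vector and a per-byte mask, is shown below; force_lanes is a hypothetical helper for exposition.

    #include <stdint.h>

    /* Overwrite the masked byte lanes of a stream vector with a fill
       value (0 for a null, or a selected min/max); no memory fetch
       backs the forced lanes. */
    void force_lanes(uint8_t vec[64], uint64_t force_mask, uint8_t fill)
    {
        for (unsigned i = 0; i < 64; i++)
            if (force_mask & (1ULL << i))
                vec[i] = fill;
    }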
[00304] In this example, control logic in the address generator 2811/2821 (see FIG. 28) performs count tracking for the six levels of nested iterations. As will be described in more detail hereinbelow, various conditions may be detected that are associated with one or more of the six levels of nested iterations and used to signal that a null value or a selected value is to be inserted into the vector stream without fetching data from memory for the null value or selected value vector.

[00305] Metadata produced by the address generator is pushed into the stream reference queue 2815/2825 (see FIG. 28). On the backend, when processing unit core 110 (FIG. 1) performs a read, this metadata is popped out and sent to control logic 3814 in alignment network 3820/3821 with the lanes to be nulled or set to a selected value without reading data from system memory. The metadata keeps track of decrement dimensions (DECDIM) and dimension widths (DECDIM WIDTH) and is fed to backend logic, as described hereinbelow in more detail.

DATA STRIP MINING SUPPORT

[00306] Complex signal processing applications such as various vision algorithms, dense linear algebra, FFT operations and high-level applications including convolutional neural networks (CNNs), structure from motion (SFM), and Radar may require numeric manipulation of a complex multidimensional data structure. Boundaries of the data structure need to be clean so that anomalous data is not included in computations done at the boundary. Generally, this requires a programmer to allocate memory space at the boundary of a data structure that can be preset to a known value, such as zero, or a maximum or minimum data value. Presetting the boundary data then takes additional processing cycles to access and set the boundary data values.

[00307] An example streaming engine 125 (FIG. 1) includes a feature that allows a programmer to specify a decrement dimension width (DECDIM WIDTH) parameter for one or more of the six nested loop dimensions supported by streaming engine 125 that is different from a dimension defined by a respective loop dimension parameter. In this manner, streaming engine 125 can be programmed to insert the appropriate known value into the data stream as the data stream is being fetched from system memory by the streaming engine. Furthermore, streaming engine 125 can be programmed to insert null or known-value stream vectors that correspond to a boundary area of the data structure without accessing the system memory for these boundary area stream vectors, thereby reducing processing cycles and time.

[00308] In an example, the DECDIM feature uses two sets of flags that include a first decrement flag (DECDIMx), where "x" is "1" or "2" in this example, a secondary decrement flag (DECDIMxSD), and a respective width parameter (DECDIMx WIDTH). This allows a programmer to define four mask values on selected dimensions of the nested loops in order to mask or zero out parts of the stream data. Table 30 defines an example set of DECDIM flags
that apply to an example of streaming engine 125. In this example, three bits are used to define the DECDIMx flag, while only two bits are used to define the DECDIMxSD flag. The DECDIMxSD flag is therefore limited to DIM1-DIM3 in this example. Various combinations of settings for DECDIM are supported, as described in more detail hereinbelow.

Table 30 - DECDIM flags to control data strip mining operation

[00309] Referring to FIG. 29, two DECDIMx WIDTH fields 2931, 2932 are defined within stream template register 2900. DECDIM1_WIDTH 2931 is used in conjunction with DECDIM1 flag 3019 (FIG. 30) and DECDIM1SD flag 3020 (FIG. 30) that are included in the flag field 2921 of stream template register 2900. DECDIM2_WIDTH 2932 is used in conjunction with DECDIM2 flag 3021 (FIG. 30) and DECDIM2SD flag 3022 (FIG. 30).

[00310] The DECDIMx flag field and DECDIMx WIDTH support a data strip mining feature that allows a programmer to define a "total actual width" size of an image, using the DECDIM WIDTH count to provide this maximum actual width. In this mode, the DECDIM WIDTH is decremented by the selected dimension (DIM1 - DIM5), as shown in Table 30, when the address generator 2811/2821 (FIG. 28) enters that loop dimension. For example, if DECDIM = 010b, then every time the loop enters dimension 2, the current value of DECDIM WIDTH is decremented by the DIM2 value. When the DECDIM WIDTH becomes
less than ICNT0 (the "Tile Width" in DECDIM mode), the SE 125 will pad the unused elements with zero data. This is referred to as "data strip mining."

[00311] The DECDIM WIDTH count value will only reload to the programmed value again when the selected DECDIM dimension loop count (ICNT) expires. In the above example, the width would reload when dimension 2 (i.e., ICNT2) expires and dimension 3 is entered. In other words, the width reloads upon entering any dimension higher than the selected DECDIM dimension. Thus, it is possible to program the width smaller than the loop iteration count for the selected dimension, which could cause the width count to underflow. This is described further hereinbelow.

[00312] As mentioned above, there is also a secondary DECDIMxSD flag. The secondary flag allows a "secondary decrement" count mask on top of its respective DECDIMx flag for the respective DECDIM WIDTH. In other words, the respective DECDIM WIDTH is decremented or reloaded using the settings for both of the respective DECDIMx and DECDIMxSD flags when the respective selected dimension is entered or ends.

[00313] In an example, the DECDIM selection can be set for any of DIM1 - DIM5 while operating in linear mode. However, in transpose mode, DIM1 selection is not supported and only DIM2 - DIM5 are supported. In this example, the selected DIMx value must be an unsigned value (i.e., DIMx bit 31 is zero). All other dimensions can be unsigned or signed.

[00314] For example, if DECDIM = 010b, then DIM2 must be an unsigned value. DIM1, DIM3, DIM4, and DIM5 can be unsigned or signed. In example hardware, the innermost loop count is scaled in terms of the total number of bytes. In other words, the innermost loop ICNT value is equal to (ICNT0 x ELEM BYTES), and in the case of DECDIM mode, the DECDIM WIDTH count is (DECDIM WIDTH x ELEM BYTES). During processing unit core 110 fetches, the innermost loop count expires when all elements have been consumed before accounting for any data formatting (element duplication, promotion, decimation). Thus, with data formatting enabled, multiple processing unit core 110 fetches could occur before the innermost loop count expires.

[00315] In an example, the value of DECDIM WIDTH count will be saturated at zero in the case where the loop iteration count parameters for the selected DECDIM (i.e., ICNT1 - ICNT5) are incorrectly programmed, causing DECDIM WIDTH count to underflow when decremented by the selected DIMx value. When saturation occurs, streaming engine 125 will hold
the DECDIM WIDTH count at zero and subsequent data phases to processing unit core 110 will also be zero.

[00316] The following tables and FIGS. 39-42 illustrate various examples of how this feature operates. FIG. 39 illustrates what a "normal" stream pattern would look like. FIGS. 40-42 illustrate several examples of what data strip mining patterns would look like. The DECDIM WIDTH value in the tables represents the count of the entire frame, which gets decremented by the selected DIMx value as each DECDIM dimension is entered. Note that the examples show data moving left to right, thus the LSByte is leftmost. Also, FIGS. 39-42 do not enable any data formatting. If data formatting is enabled, additional processing unit core 110 fetches would be required to consume each of the "Tile" widths, i.e., the innermost ICNT0.

[00317] SE 125 will not send any pTLB 2812/2822 (FIG. 28) or L2 130 (FIG. 1) fetches for regions which have been fully masked and zeroed out (DECDIM WIDTH saturated), and thus programmers need not allocate these regions in memory even though the stream addressing patterns cover these regions. SE 125 does this by keeping track of the overall DECDIM WIDTH mask count and applying the DECDIMx and DECDIMxSD settings, and when this count saturates to zero or ends on a 64-byte line boundary, all subsequent L2 fetches are blocked. By doing this, SE 125 suppresses any errors or faults associated with subsequent un-allocated regions, since no pTLB 2812/2822 or L2 130 fetches are sent.

[00318] Transpose data strip mining is similar to linear data strip mining except that the data moves in a transpose pattern. The DECDIM flags provide the dimension at which the DECDIM WIDTH count decrement occurs. This allows the data strip mask to apply to the remaining elements in the GRANULE. In DECDIM mode with transpose, the setting DECDIM = DIM1 is not supported, as mentioned in the previous tables. Note that for normal transpose patterns, when ICNT0 is not a multiple of the GRANULE, gappy data occurs and is filled with zeros. Similarly, gappy data occurs in DECDIM transpose mode when ICNT0 is not a multiple of the GRANULE, in addition to the data strip masking that is applied using DECDIM WIDTH.

[00319] FIGS. 39, 40, 41, 42, and 43 illustrate several example linear stream transfers that can be performed by example streaming engine 125. These examples are by no means an exhaustive list of the various configurations of stream transfers that can be performed by the six-dimensional nested loop control logic included within streaming engine 125.
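The width tracking described in the preceding paragraphs can be summarized by the following C sketch; the structure and helper names are illustrative, all counts are pre-scaled to bytes, and the secondary DECDIMxSD flag is omitted for brevity.

    #include <stdint.h>

    /* Working DECDIM WIDTH count for one strip-mined stream. */
    typedef struct {
        uint32_t width;   /* remaining width in bytes, saturates at 0 */
        uint32_t reload;  /* programmed DECDIM_WIDTH x ELEM_BYTES */
    } DecDim;

    /* Entering the selected DECDIM dimension decrements the width by
       the selected DIMx value, saturating at zero. */
    void decdim_enter(DecDim *d, uint32_t dimx_bytes)
    {
        d->width = d->width > dimx_bytes ? d->width - dimx_bytes : 0;
    }

    /* Entering any dimension above the selected one reloads the width. */
    void decdim_reload(DecDim *d)
    {
        d->width = d->reload;
    }

    /* Of an inner-loop tile of icnt0_bytes, how many bytes hold real
       data; the remainder is zero-padded, and a return of 0 means the
       fetch can be suppressed entirely (no pTLB or L2 access). */
    uint32_t decdim_valid_bytes(const DecDim *d, uint32_t icnt0_bytes)
    {
        return d->width < icnt0_bytes ? d->width : icnt0_bytes;
    }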
[00320] FIG. 39 illustrates an example of a normal mode stream transfer 3900 in which the DECDIM flags are set to "000b" as shown in Table 30. Table 31 lists the stream parameters that are placed in the stream template register 2900 (FIG. 29). In this example, VECLEN = 16 elements, and each element is one byte. Therefore, each stream vector includes 16 bytes of array element data. The inner loop is specified by ICNT0 = 56 bytes; therefore, streaming engine 125 will make fetches to system memory 130 (FIG. 1) until 56 bytes of array data have been retrieved. In this case, fetch 1 indicated at 3921 actually retrieves 64 bytes of data from system memory 130, but only 16 bytes of data beginning at address 0x0 are used to form the first stream vector. The rest of the vector provided to processing unit core 110 (FIG. 1) is zero filled by alignment network 3820/3821 (FIG. 38) as described hereinabove. Similarly, fetch 2 3922, fetch 3 3923 and fetch 4 3924 access system memory 130 to form three additional stream vectors to provide to processing unit core 110. However, ICNT0 expires at 56 elements, so only eight one-byte elements are included in the fourth stream vector. Streaming engine 125 masks the remaining bytes indicated at 3925 by zero filling them using the alignment network as described hereinabove.

[00321] After ICNT0 expires, the next loop level DIM1 is entered, as indicated at 3911. In this example, DIM1 specifies a distance of 128 bytes; therefore fetch 5 3926 begins at address 0x128 and the inner loop is repeated to produce four stream vectors loaded with 56 bytes of element data. This repeats seven times since ICNT1 = 7 in this example.

[00322] After ICNT1 expires, the next loop level DIM2 is entered, as indicated at 3912. In this case, DIM2 specifies a distance of 80 bytes; therefore fetch 29 3927 begins at address 0x080 and the inner loop is repeated to produce four stream vectors loaded with 56 bytes of element data. Similar to 3925, eight bytes of element data are masked as indicated at 3928. This pattern repeats seven times since ICNT1 = 7 in this example.

Table 31 - Linear stream example in normal mode

[00323] FIG. 40 illustrates an example of a linear stream data strip mining transfer 4000 in which the DECDIM flags are set to "010b" as shown in Table 30 to associate DECDIM WIDTH with DIM2. Table 32 lists the stream parameters that are placed in the stream template register 2900 (FIG. 29). In this example, VECLEN = 16 elements, and each element is one byte. Therefore, each stream vector includes 16 bytes of array element data. The inner loop is specified by ICNT0 = 16 bytes; therefore, streaming engine 125 will make a fetch to system memory 130 (FIG. 1) to fetch 16 bytes of array data in each iteration of the inner loop. In this case, fetch 1 indicated at 4021 actually retrieves 64 bytes of data from system memory 130, but only 16 bytes of data beginning at address 0x0 are used to form the first stream vector, in response to ICNT0 as indicated at 4010. The rest of the vector provided to processing unit core 110 (FIG. 1) is zero filled by alignment network 3820/3821 (FIG. 38) as described hereinabove.

[00324] Since ICNT0 expires on the first fetch to system memory, the next loop level DIM1 is entered, as indicated at 4011. In this example, DIM1 specifies a distance of 128 bytes; therefore fetch 2 4022 begins at address 0x128.
ICNT1 = 7, therefore this loop is repeated seven times, such that fetch 2 4022, up to fetch 7 4023 access system memory 130 to form six additional stream vectors to provide to processing unit core 110.[00325] After ICNT1 expires, the next loop level DIM2 is entered, as indicated at 4012. In this case, DIM2 specifies a distance of 80 bytes; therefore fetch 8 4024 begins at address 0x080 and the inner loop is repeated to produce a single stream vector loaded with 16 bytes of element
data. This loop repeats up to fetch 14 4025, when ICNT1 expires again and the next loop level DIM2 is entered again.
[00326] After ICNT1 expires again, the next loop level DIM2 is entered, as indicated at 4026. Again, DIM2 specifies a distance of 80 bytes; therefore fetch 15 4026 begins at address 0x160 and the inner loop is repeated to produce a single stream vector loaded with 16 bytes of element data. This loop repeats up to fetch 21 4027, when ICNT1 expires again and the next loop level DIM2 is entered again.
[00327] After ICNT1 expires again, the next loop level DIM2 is entered, as indicated at 4026. Again, DIM2 specifies a distance of 80 bytes; therefore fetch 22 4028 begins at address 0x240 and the inner loop is repeated to produce a single stream vector loaded with 16 bytes of element data. However, in this case, the DECDIM WIDTH parameter 4014 is set to be 248 elements, which is also 248 bytes in this example. Therefore, the array data elements in the border region indicated at 4029 are not to be used in the signal processing application that is consuming this stream. Metadata produced by address generator 2811/2821 (FIG. 28) and stored in reference queue 2815/2825 (FIG. 28) is used by control logic 3814 (FIG. 38) to control alignment network 2820/2821 (FIG. 38) to mask the data in border region 4029. In this example, the data in each stream vector that involves border region 4029 is set to a value of "0" in response to the DECDIM WIDTH parameter and DECDIM flags that were stored in the stream template register 2900 (FIG. 29).
[00328] This loop repeats up to fetch 28 4029, when ICNT1 expires again. In this example, ICNT2 also expires after these four repetitions and the stream is complete.
Table 32 - Linear stream example, data strip mining, DECDIM on DIM2
[00329] FIG. 41 illustrates an example of a linear stream data strip mining transfer 4100 in which the DECDIM flags are set to "010b" as shown in Table 30 to associate DECDIM WIDTH with DIM2. Table 33 lists the stream parameters that are placed in the stream template register 2900 (FIG. 29). This example is similar to the example of FIG. 40, except that ICNT2 is set to 5, as shown in Table 33. The DIM1 and DIM2 level loops are performed in a similar manner from fetch 1 4121 to fetch 28 4128. Data in border region 4029 is masked in the same manner as described in FIG. 40 in response to DECDIM WIDTH 4012.
[00330] However, in this example, ICNT2 = 5. Therefore, streaming engine 125 prepares to generate addresses for another repetition of the DIM2 loop. However, DECDIM WIDTH 4112 has now underflowed, indicating that the remaining stream vectors for border region 4130 are to be set to null vectors. The underflow condition is referred to as being "saturated." In this example, null vectors are set to "0". When the DECDIM WIDTH count is saturated, there is no need to access system memory for array element data, since the stream vectors are set to null values. Control logic within address generator 2811/2821 suppresses address generation in response to the DECDIM WIDTH being saturated so that no accesses are made to the pTLB 2812/2822 (FIG. 28) and no accesses are made to system memory 130 (FIG. 1) to produce the null vectors. However, metadata is formed by the control logic 3814 and stored in reference queue 2815/2825.
Control logic 3814 then uses the metadata to create the null vectors using alignment network 2820/2821.[00331] In this manner, a stream of array elements is provided to processing unit core 110 that is responsive to the stream parameters, but fetch cycles to system memory and the pTLB are avoided for the null vectors.
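The metadata handshake between the address generator and the backend, as described in the preceding paragraphs, can be sketched in C as follows. The record layout is a hypothetical simplification of the reference queue 2815/2825 contents, used only to illustrate how null vectors can be produced without any pTLB or L2 traffic.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative model of the per-vector metadata record: the address
 * generator notes whether a memory access occurred and how many lanes are
 * valid; the backend uses the record to drive the alignment network. */
typedef struct {
    uint8_t fetched;      /* 0: saturated, vector is formed as all-null    */
    uint8_t valid_bytes;  /* lanes to keep; remaining lanes are masked to 0 */
} ref_metadata;

/* Backend: build the stream vector the processing unit will read. When the
 * record says no fetch occurred, the vector is synthesized entirely from
 * the metadata, with no system memory access. */
static void backend_form_vector(const ref_metadata *m, const uint8_t *line,
                                uint8_t *vec, int veclen)
{
    if (!m->fetched) {                 /* no L2 or pTLB traffic occurred   */
        memset(vec, 0, veclen);
        return;
    }
    memcpy(vec, line, m->valid_bytes);
    memset(vec + m->valid_bytes, 0, veclen - m->valid_bytes);
}
```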
Table 33 - Linear stream example, data strip mining, DECDIM on DIM2 with DECDIM WIDTH saturation
[00332] FIG. 42 illustrates an example of a linear stream data strip mining transfer 4200 in which the DECDIM flags are set to "001b" as shown in Table 30 to associate DECDIM WIDTH with DIM1. Table 34 lists the stream parameters that are placed in the stream template register 2900 (FIG. 29). This example is similar to the example of FIG. 40, except that the DECDIM WIDTH count is linked to DIM1 instead of DIM2. The DIM1 level loop is performed in a similar manner from fetch 1 4221 to fetch 3 4222.
[00333] At fetch 4 4223, the DECDIM WIDTH count reaches zero and data in border region 4224 is masked using alignment network 3820/3821 as described in more detail hereinabove.
[00334] On the next iteration, the DECDIM WIDTH count is saturated and fetches to system memory and to the pTLB are suppressed as described hereinabove in more detail. In this manner, stream vectors 4225, 4226, and 4227 are formed by streaming engine 125 without
accessing system memory or the pTLB.
[00335] After ICNT1 expires, the next loop level DIM2 is entered, as indicated at 4212. In this case, DIM2 specifies a distance of 80 bytes; therefore fetch 8 4228 begins at address 0x080 and the inner loop is repeated to produce a single stream vector loaded with 16 bytes of element data. This loop repeats up to fetch 11 4229, where a portion of the stream vector is masked in response to DECDIM WIDTH count 4214 and the remaining three null stream vectors are formed by SE 125 without accessing system memory.
[00336] In this example, ICNT2 = 4, so this same loop of system access and formation of null vectors without system access is repeated two more times, as indicated at 4230.
Table 34 - Linear stream example, DECDIM on DIM1 with DECDIM WIDTH saturation
[00337] FIG. 43 illustrates an example of a linear stream data strip mining transfer 4300 in which the DECDIM1 flags are set to "001b" as shown in Table 30 to associate DECDIM1 WIDTH with DIM1 and the DECDIM2 flags are set to "010b" as shown in Table 30
to associate DECDIM2 WIDTH with DIM2. Table 35 lists the stream parameters that are placed in the stream template register 2900 (FIG. 29). This example is similar to the example stream 4000 of FIG. 40, except for the addition of a second set of DECDIMx flags and a second DECDIMx WIDTH parameter.
[00338] The DIM1 level loop is performed in a similar manner to stream 4000 until DECDIM1 WIDTH 4312 saturates. In this example, ICNT0 4310 = 16 and ICNT1 = 7. As described hereinabove in more detail, once the DECDIM1 WIDTH 4312 count saturates, no further system memory accesses are needed for that loop. In this case, null stream vectors 4320 and 4321 are formed by streaming engine 125 without accessing system memory 130.
[00339] After ICNT1 expires, the next loop level DIM2 is entered, as indicated at 4311. The DECDIM1 WIDTH counter is reloaded with the count value from the stream template register 2900. In this case, DIM2 specifies a distance of 80 bytes; therefore fetch 8 4322 begins at address 0x080 and the inner loop is repeated to produce a single stream vector loaded with 16 bytes of element data. This loop repeats until the DECDIM1 WIDTH 4312 count again saturates; then no further system memory accesses are needed for that loop. In this case, null stream vectors 4323 and 4324 are formed by streaming engine 125 without accessing system memory 130.
[00340] In this example, ICNT2 = 4, so this same loop of system access and formation of null vectors without system access is repeated two more times.
[00341] However, in this case, the DECDIM2 WIDTH parameter 4313 is set to be 248 elements, which is also 248 bytes in this example. Therefore, the array data elements in the border region indicated at 4325 are not to be used in the signal processing application that is consuming this stream. Metadata produced by address generator 2811/2821 (FIG. 28) and stored in reference queue 2815/2825 (FIG. 28) is used by control logic 3814 (FIG. 38) to control alignment network 2820/2821 (FIG. 38) to mask the data in border region 4325. In this example, the data in each stream vector that involves border region 4325 is set to a value of "0" in response to the DECDIM2 WIDTH parameter and DECDIM2 flags that were stored in the stream template register 2900 (FIG. 29).
[00342] This loop repeats until the DECDIM1 WIDTH 4312 count again saturates; then no further system memory accesses are needed for that loop. In this case, null stream vectors 4326 and 4327 are formed by streaming engine 125 without accessing system memory 130.
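Under the assumption that each width counter is decremented by its associated DIMx value on entry into that dimension, and that DECDIM1 WIDTH is reloaded each time the DIM2 loop is entered, the two-level strip mining of FIG. 43 can be modeled roughly as below. All parameter names and the emit callback are illustrative; the exact decrement ordering in the hardware may differ.

```c
/* Hedged model of two-level data strip mining (DECDIM1 on DIM1, DECDIM2 on
 * DIM2). The number of live lanes in a vector is limited by the tighter of
 * the two remaining width counts. */
static int live_lanes(int w1, int w2, int veclen)
{
    int lim = w1 < w2 ? w1 : w2;          /* tighter of the two widths    */
    if (lim <= 0) return 0;               /* saturated: null vector       */
    return lim < veclen ? lim : veclen;   /* lanes that carry real data   */
}

void strip_mine_2d(int icnt1, int icnt2, int dim1, int dim2,
                   int width1, int width2, int veclen,
                   void (*emit)(int live, int veclen))
{
    int w2 = width2;
    for (int j = 0; j < icnt2; j++) {     /* DIM2 loop                    */
        int w1 = width1;                  /* DECDIM1 WIDTH is reloaded    */
        for (int i = 0; i < icnt1; i++) { /* DIM1 loop                    */
            /* live == 0: fetch suppressed, null vector emitted;
             * 0 < live < veclen: fetch performed, border lanes masked. */
            emit(live_lanes(w1, w2, veclen), veclen);
            w1 -= dim1;                   /* decrement on DIM1 entry      */
        }
        w2 -= dim2;                       /* decrement on DIM2 entry      */
    }
}
```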
[00343] In this example, ICNT3 = 2 and DIM3 = 1500, so this entire sequence is repeated again beginning at address 0x1500 as indicated at 4341.
Table 35 - Linear stream example, DECDIM1 on DIM1 and DECDIM2 on DIM2
[00344] FIGS. 44A, 44B together illustrate how a partial matrix is augmented with null vectors by the streaming engine of FIG. 28 for matrix multiplication. Table 36 lists the stream parameters that are placed in the stream template register 2900 (FIG. 29). In this example, VECLEN = 16 elements, and each element is four bytes (32 bits). Therefore, each stream vector includes 64 bytes of array element data to match the width of MMA 3640 (FIG. 36). The inner loop is specified by ICNT0 = 16 elements (64 bytes); therefore, streaming engine 125 will make a fetch to system memory 130 (FIG. 1) to fetch 64 bytes of array data in each iteration of the inner loop.
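The sub-matrix handling developed in the following paragraphs amounts to tiling a matrix into MMA-sized blocks whose out-of-bounds border elements are supplied as zeros rather than fetched. A minimal host-side C model of that tiling, with illustrative names, is:

```c
/* Host-side view of the tiling in FIGS. 44A-44B: a rows x cols matrix of
 * 32-bit elements is consumed as 16 x 16 tiles; elements past the true
 * width or height are zero-filled instead of being read from memory. */
#define TILE 16

void tile_matrix(const float *m, int rows, int cols,
                 float tile[TILE][TILE], int tile_row, int tile_col)
{
    for (int r = 0; r < TILE; r++)
        for (int c = 0; c < TILE; c++) {
            int rr = tile_row * TILE + r;
            int cc = tile_col * TILE + c;
            /* border region: zero-fill, no memory access needed */
            tile[r][c] = (rr < rows && cc < cols) ? m[rr * cols + cc] : 0.0f;
        }
}
```

For the nineteen-column by eighteen-row matrix 4400 described next, `tile_matrix(m, 18, 19, tile, 0, 1)` would reproduce sub-array 4402 with its border region zero-filled.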
[00345] In this example, a matrix 4400 is located in L2 system memory 130 and has nineteen columns of 32-bit elements and eighteen rows of 32-bit elements, where each element is indicated by "d" in FIGS. 44A, 44B. Since MMA 3640 of this particular example can only handle a 16 x 16 array of 32-bit elements, matrix 4400 is subdivided into four sub-matrices 4401, 4402, 4403, 4404.
[00346] Using the stream parameters listed in Table 36, streaming engine 125 first fetches sub-array 4401 to provide to MMA 3640. Streaming engine 125 then fetches sub-array 4402. Sub-array 4402 includes a border region 4405 in which all the data elements provided to MMA 3640 need to be set to zero in order for a matrix multiplication operation performed by MMA 3640 to operate correctly. As described hereinabove in more detail, streaming engine 125 uses DECDIM1 WIDTH 4410 to define the extent of matrix 4400, which in this example is nineteen elements.
[00347] In this case, the DECDIM1 WIDTH parameter 4410 is set to be nineteen elements, which is 76 bytes in this example. Therefore, the array data elements in the border region indicated at 4405 are not to be used in the matrix multiplication application that is consuming this stream. Metadata produced by address generator 2811/2821 (FIG. 28) and stored in reference queue 2815/2825 (FIG. 28) is used by control logic 3814 (FIG. 38) to control alignment network 2820/2821 (FIG. 38) to mask the data in border region 4405. In this example, the data in each stream vector that involves border region 4405 is set to a value of "0" in response to the DECDIM1 WIDTH parameter and DECDIM1 flags that were stored in the stream template register 2900 (FIG. 29).
[00348] Streaming engine 125 then begins fetching sub-array 4403. At fetch 4412, the DECDIM2 WIDTH count reaches zero and is saturated, and fetches to system memory and to the pTLB are suppressed as described hereinabove in more detail. In this manner, stream vectors for border region 4406 are formed by streaming engine 125 without accessing system memory or the pTLB.
[00349] Streaming engine 125 then begins fetching sub-array 4404. Fetches to system memory are performed at 4413 and 4414 and the border region 4405 is again masked by alignment network 2820/2821 as described hereinabove in response to DECDIM1 WIDTH saturating. On the next cycle, the DECDIM2 WIDTH count reaches zero and is saturated, and fetches to system memory and to the pTLB are suppressed as described hereinabove in more
detail. In this manner, stream vectors for border region 4406 in sub-array 4404 are formed by streaming engine 125 without accessing system memory or the pTLB.
Table 36 - Linear stream example, DECDIM1 on DIM3 and DECDIM2 on DIM2
NULL STREAM VECTORS BY COUNT
[00350] In the examples described hereinabove, several ways of padding stream vectors with constant values and several ways of forming null stream vectors without accessing system memory were described that are based on a specified value for a DECDIMx WIDTH count. In another example, it may be useful to specify a number of null vectors to be inserted in a stream by the streaming engine.
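A rough C model of such count-based null-vector insertion, introduced as the LEZR feature below, might look as follows. The function and parameter names are hypothetical, and the data path that fills the vector from memory is elided.

```c
#include <stdint.h>
#include <string.h>

/* Sketch of loop-end null-vector insertion: after the selected loop level
 * expires, a programmed number of null vectors is emitted with no memory
 * fetches at all. */
void emit_stream_with_null_count(int n_data_vectors, int null_cnt, int veclen,
                                 void (*emit)(const uint8_t *vec, int len))
{
    uint8_t vec[64] = {0};
    for (int i = 0; i < n_data_vectors; i++) {
        /* ... fetch and format a normal stream vector into vec ... */
        emit(vec, veclen);
    }
    memset(vec, 0, sizeof vec);          /* null value; a predefined pad  */
    for (int i = 0; i < null_cnt; i++)   /* value could be used instead   */
        emit(vec, veclen);               /* no system memory access here  */
}
```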
[00351] Creation of a convolutional neural network (CNN) Toeplitz-style matrix on the fly requires zero or constant values to be fed into the matrix multiplication accelerator after the last feature map in the CNN layer. In linear algebra, a Toeplitz matrix, or diagonal-constant matrix, is a matrix in which each descending diagonal from left to right is constant. As described with regard to FIG. 36, MMA 3640 has a fixed number of rows which need to be filled before starting matrix multiplication. The rows remaining during creation of the Toeplitz matrix cannot have junk values; otherwise the results will be wrong.
[00352] Assuring that all unused data elements are set to zero or to a selected null value using software requires allocation of memory and execution of instructions to write out the null values.
[00353] In an example, a loop end zero (LEZR) feature is implemented in the streaming engine 125 (FIG. 28) hardware. When the streaming engine dimension for the last feature map expires, a number of null rows are provided by the streaming engine without reading data from memory. In this example, streaming engine 125 will send a full 64-byte stream vector on each fetch by the processing unit core 110; thus any data formatting, scaling, and replay during the normal dimensional loop is not adhered to. The data is then provided to the matrix multiplication accelerator 3640. The rows fed to the matrix multiplication accelerator after the last feature map of the CNN network expires make the matrix complete and ready for block matrix multiplication. No software overhead is needed, which improves performance and helps to reduce software code size and complexity.
[00354] In this example, an eight-bit LEZR CNT (LEZR count) field 2933 is provided in stream template 2900 (FIG. 29). This allows up to 255 null vectors to be inserted in a stream by streaming engine 125 without accessing system memory. A three-bit LEZR flag 3023 is included in the flag field 2921 of stream template 2900. An example definition of the LEZR flags is included in Table 37.
Table 37 - LEZR flags to control data strip mining operation
[00355] Referring to FIG. 38, control logic 3814 can control each of the multiplexers on alignment networks 3820, 3821 to force a null value onto selected elements of a stream vector based on the metadata provided by reference queue 2815. In this manner, an entire stream vector can be set to a null value without fetching any data from system memory. In an example, the null value is "0." In another example, a predefined value or pattern may be used to represent a null value.
[00356] In another example, control logic 3814 can control each of the multiplexers on alignment networks 3820, 3821 to force a selected value onto selected elements of a stream vector based on the metadata provided by reference queue 2815. In this manner, an entire stream vector can be set to a selected value, such as a min value or a max value, without fetching any data from system memory.
[00357] In this example, control logic in the address generator 2811/2821 (see FIG. 28) performs count tracking for the six levels of nested iterations. When the loop count for the dimension specified by LEZR expires, it signals that, for the number of stream vectors specified by the LEZR CNT field in stream template 2900, a null value or a selected value is to be inserted into the vector stream without fetching data from memory for the null value or selected value vectors.
[00358] Metadata produced by the address generator is pushed into the stream reference queue 2815/2825 (see FIG. 28).
On the backend, when processing unit core 110 (FIG. 1) performs a read, this metadata is popped out and sent to control logic 3814 in alignment network 2820/2821 with the lanes to be nulled or set to a selected value without reading data from system memory.[00359] FIG. 45 illustrates adding null vectors to a stream using the LEZR function. In this example, multiple feature maps indicated at 4501 are stored in system memory L2 130 (FIG. 1). A stream template is set up by a software application in which ICNT1 defines the number of filter rows (Fr) in a feature map, ICNT2 defines a number of filter columns (Fc) in a feature map, and ICNT3 defines a number (Ni) of input feature maps. ICNT0 defines the number of elements of the feature maps 4501 that are fetched by the streaming engine from a row. In this example,
for eight-bit elements ICNT0 will be set to 64, and for sixteen-bit elements it will be set to 32. Streaming engine 125 fetches the feature maps according to the multi-dimensional stream template as described in more detail above to form stream 4500 as indicated at 4502. Stream 4500 is provided to MMA 3640 as a B matrix as described in more detail hereinabove.
[00360] In this example, the MMA width can be selected to be 16, 32, or 64 elements, depending on the size of the elements, as described with respect to FIG. 36. Ni * Fr * Fc forms the rows of the B matrix. This value is not necessarily a multiple of the selected MMA width. Therefore, an additional number of null vectors are formed by specifying LEZR CNT in the stream template, where LEZR CNT is defined by expression (4).
pad_zero_rows = MMA_width - ( Ni * Fr * Fc % MMA_width )    (4)
PAD VALUE
[00361] In some applications, such as a pooling operation in a convolutional neural network for maximum (max), minimum (min), and average pooling, the end of a column of image pixels needs to be either a max or a min value. Similarly, analysis tools for various deep learning neural networks, such as TensorFlow, Caffe networks, etc., require arbitrary zero, max, or min padding for the pooling operations.
[00362] In this example, a three-bit pad value (PADVAL) flag 3023 is included in the flag field 2921 of stream template 2900. An example definition of the PADVAL flags is included in Table 38. Padding is performed on a full element, as defined by the ELTYPE flags 3001 (FIG. 30) in stream template 2900 (FIG. 29). In this example, the default value of zero is selected by setting the PADVAL flag 3023 to 000b, "unsigned min."
Table 38 - PADVAL flags to control data strip mining operation
[00363] In an example, a stream vector is formed using the specified pad value as the last element of a stream vector of the stream. In another example, another flag field (such as LEZR flags 3023, Table 37) may be used to specify a selected dimension on which to include a pad value.
[00364] Referring to FIG. 38, control logic 3814 can control each of the multiplexers on alignment networks 3820, 3821 to force a null value onto selected elements of a stream vector based on the metadata provided by reference queue 2815. In this manner, an entire stream vector can be set to a specified null value without fetching any data from system memory. In an example, the null value is an unsigned max value. In another example, other predefined values or patterns may be used to represent a null value.
[00365] In an example, control logic 3814 can control each of the multiplexers on alignment networks 3820, 3821 to force a selected value onto selected elements of a stream vector based on the metadata provided by reference queue 2815. In this manner, an entire stream vector can be set to a selected value, such as a min value or a max value, without fetching any data from system memory.
[00366] In this example, control logic in the address generator 2811/2821 (see FIG. 28) performs count tracking for the six levels of nested iterations. When the loop count for the end of a stream expires, it signals that a null value or a selected value is to be inserted into the vector stream without fetching data from memory for the null value or selected value vector, as defined by the PADVAL flags 3024 in stream template 2900.
[00367] Metadata produced by the address generator is pushed into the stream reference queue 2815/2825 (see FIG. 28). On the backend, when processing unit core 110 (FIG.
1) performs a read, this metadata is popped out and sent to control logic 3814 in alignment network 2820/2821 with the lanes to be nulled or set to a selected value without reading data from system memory.
[00368] Achieving this in software requires performing a permute operation. In an example simulation, performance of the hardware-based PADVAL solution was 4x better than the software solution for image widths > 64 and 2x better for image widths < 64; furthermore, the software solution required three different loops.
[00369] FIG. 46 illustrates formation of a stream by inserting null or predefined data vectors by a streaming engine, such as streaming engine 125 of FIG. 28. In this example, at 4600 a
stream is opened on a streaming engine by storing stream parameters in a stream template register (2900, FIG. 29) within the streaming engine. The stream parameters include an element size of the array, a number of elements to include in each vector of a stream, a number of vectors to include in the stream for each dimension of the array, and a width indicator for a selected dimension of the array, such as a DECDIM flag and a DECDIM COUNT; see Table 30.
[00370] At 4601, an address stream is generated according to the stream parameters stored in the stream template. Metadata is saved that indicates loop counts, end of loop, remaining width count, etc.
[00371] At 4602, a line of matrix data is fetched by the streaming engine from system memory using the sequence of addresses generated by the address generator.
[00372] At 4603, a check is made to see if a width count parameter is associated with the current dimension of the multi-dimensional stream. If not, a normal stream vector is formed at 4605.
[00373] If a width count parameter is associated with the current dimension, then at 4604, a check is made to determine if the width count is depleted. If not, a normal stream vector is formed at 4605.
[00374] If the width count is depleted, then at 4606 a check is made to determine whether the width count is saturated. If not, then a stream vector is formed at 4607 by masking a portion of the vector that exceeds the width count. In some examples, the mask may insert a value of zero. In another example, a predefined value may be inserted by the mask, such as a min value, a max value, etc.
[00375] If the width count is saturated, then at 4608 a null vector is formed without accessing system memory. In an example, the null vector may be set to all zeros. In another example, the null vector may be set to a predetermined pad value, such as a min value, a max value, etc. A predetermined pad value may be designated by a flag field, such as a PADVAL flag 3023 (FIG. 30) in the stream template.
[00376] At 4609, if the current dimension is not complete, the loop repeats at 4602. If the current dimension is complete, then the width count is decremented at 4610 if it is associated with the current dimension. Otherwise, the width count is not decremented.
[00377] At 4611, a check is made to determine if the stream is complete. If not, the process repeats to access more matrix data from the system and to form stream vectors.
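The per-vector decision sequence of steps 4603 through 4608 can be condensed into a small C classifier, sketched below with illustrative names; the actual control logic operates on hardware counters rather than function arguments.

```c
/* Condensed rendering of the decision flow of FIG. 46 (steps 4603-4608).
 * 'width' is the remaining width count for the current dimension, in
 * elements, and 'width_applies' reflects whether a width count parameter
 * is associated with that dimension. */
enum vec_kind { NORMAL_VEC, PARTIAL_MASKED_VEC, NULL_VEC };

enum vec_kind classify_vector(int width_applies, int width, int veclen)
{
    if (!width_applies)  return NORMAL_VEC;          /* 4603 -> 4605 */
    if (width >= veclen) return NORMAL_VEC;          /* 4604 -> 4605 */
    if (width > 0)       return PARTIAL_MASKED_VEC;  /* 4606 -> 4607 */
    return NULL_VEC;  /* saturated: 4608, formed without memory access */
}
```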
[00378] At 4612, once the entire matrix has been accessed from memory, the data stream is closed.
[00379] FIG. 47 illustrates formation of a stream by inserting null or predefined data vectors by a streaming engine, such as streaming engine 125 of FIG. 28. In this example, at 4700 a stream is opened on a streaming engine by storing stream parameters in a stream template register (2900, FIG. 29) within the streaming engine. The stream parameters include an element size of the array, a number of elements to include in each vector of a stream, a number of vectors to include in the stream for each dimension of the array, a loop end count, such as loop end zero count 2933 (FIG. 29), and a flag to associate the loop end zero count with a selected dimension of the multidimensional loop, as described in Table 37.
[00380] At 4701, an address stream is generated according to the stream parameters stored in the stream template. Metadata is saved that indicates loop counts, end of loop, remaining width count, etc.
[00381] At 4702, a line of matrix data is fetched by the streaming engine from system memory using the sequence of addresses generated by the address generator.
[00382] At 4703, a normal stream vector is formed using the data elements fetched from system memory.
[00383] At 4704, if the current dimension is not complete, the loop repeats at 4702. If the current dimension is complete, the method continues at 4705.
[00384] At 4705, a check is made to determine if the null count applies to the current dimension. If so, a number n of null vectors equal to the null count value in the stream template are formed by the streaming engine without accessing null data from system memory. In this example, the data in the null vectors is set to zero. In another example, a preselected null value may be used to form the null vectors, such as a min value, a max value, etc.
[00385] At 4707, a check is made to determine if the stream is complete. If not, the process repeats with the next loop at 4708 to access more matrix data from the system and to form stream vectors.
[00386] At 4709, once the entire matrix has been accessed from memory and the specified null vectors have been formed without accessing data from system memory, the data stream is closed.
[00387] FIG. 48 illustrates an example multiprocessor system. In this example, SoC 4800 includes processor 100 (FIG. 1) (referred to as "processor A") and it is combined with a second
processor 4811 (referred to as "processor B"). Each processor is coupled to a block of shared level three (L3) memory 4850 via bus 4851. Processor B includes a block of unshared level two memory 4812. A direct memory access (DMA) engine 4860 may be programmed to transfer blocks of data/instructions from L3 memory to L2 memory 130 or L2 memory 4812 using known or later developed DMA techniques. Various types of peripherals 4862 are also coupled to memory bus 4851, such as wireless and/or wired communication controllers, etc.
[00388] In this example, processor A, processor B, and L3 memory 4850 are all included in a SoC 4800 that may be encapsulated to form a package that may be mounted on a substrate such as a printed circuit board (PCB) using known or later developed packaging techniques. For example, SoC 4800 may be encapsulated in a ball grid array (BGA) package. In this example, external memory interface (EMI) 4852 allows additional external bulk memory 4854 to be accessed by processor A and/or processor B.
[00389] In this example, processor B is an ARM® processor that may be used for scalar processing and control functions. In other examples, various types of known or later developed processors may be combined with DSP 100. While two processors are illustrated in this example, in another example, multiple copies of DSP 100 and/or multiple copies of processor B may be included within an SoC and make use of the techniques for forming masked and null vectors without accessing system memory provided by streaming engine 125 that are described herein in more detail.
OTHER EXAMPLES
[00390] In a described example, a streaming engine includes two closely coupled streaming engines that can manage two data streams simultaneously. In another example, the streaming engine may be capable of managing only a single stream, while in other examples the streaming engine is capable of handling more than two streams. In each case, for each stream, the streaming engine includes an address generation stage, a data formatting stage, and some storage for formatted data waiting for consumption by the processor.
[00391] In a described example, addresses are derived from algorithms that can involve multi-dimensional loops, each dimension maintaining an iteration count. In one example, the streaming engine supports six levels of nested iteration. In other examples, more or fewer levels of iteration are supported.
[00392] In a described example, one-dimensional zero padding of stream vectors is provided.
In another example, two-dimensional zero padding of stream vectors is provided. In yet another example, more than two dimensions of zero padding may be provided.
[00393] In described examples, a complex DSP processor with multiple function units and dual data paths is described. In another example, a simpler DSP that is coupled to a stream processor may be used. In another example, other types of known or later developed processors may be coupled to a stream processor, such as a reduced instruction set computer (RISC), a microprocessor, etc.
[00394] In a described example, the MMA supports 32 x 32 16-bit matrix multiplication and 64 x 64 8-bit matrix multiplication, and the streaming engine is configured to provide 64-byte stream vectors. In another example, the MMA may be configured to support larger or smaller matrix sizes. An associated streaming engine may be configured to provide stream vectors that have a size that is larger or smaller than 64 bytes.
[00395] In described examples, a processor that consumes a stream of data and a streaming engine that retrieves the stream of data from system memory are all included within a single integrated circuit (IC) as a system on a chip. In another example, the processor that consumes the stream of data may be packaged in a first IC and the streaming engine may be packaged in a second separate IC that is coupled to the first IC by a known or later developed communication channel or bus.
[00396] In this description, the term "couple" and derivatives thereof mean an indirect, direct, optical, and/or wireless electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, and/or through a wireless electrical connection.
[00397] Modifications are possible in the described examples, and other examples are possible, within the scope of the claims. |
A system, method, and computer program product are provided for debugging graphics programs via a system with a single graphics processing unit. The method includes the steps of storing an initial state of an application programming interface context in a memory, intercepting a stream of API commands associated with the frame, transmitting the stream of API commands to a software layer that implements the API to render the frame, and, in response to a breakpoint, storing a graphics processing unit context in the memory. The initial state of the API context corresponds to the start of a frame, and the stream of API commands is generated by a graphics application. |
1. A method comprising: storing an initial state of an application programming interface (API) context in a memory, wherein the initial state of the API context corresponds to a beginning of a frame; intercepting an API command stream associated with the frame, wherein the API command stream is generated by a graphics application; transmitting the API command stream to a software layer implementing the API to render the frame; and, in response to a breakpoint, storing a graphics processing unit (GPU) context in the memory.
2. The method of claim 1, further comprising performing a replay loop comprising: generating one or more API commands to restore the API context to the initial state; retransmitting the API command stream to the software layer to re-render the frame; and, in response to another breakpoint, storing another GPU context in the memory.
3. The method of claim 2, wherein said replay loop is repeated in response to commands received from an integrated development environment (IDE).
4. The method of claim 1, wherein said software layer is a driver.
5. The method of claim 4, wherein said driver implements an OpenGL API.
6. The method of claim 1, wherein said software layer is a runtime library.
7. The method of claim 6, wherein said runtime library implements a Direct3D API.
8. The method of claim 1, further comprising tracking the status of the API context.
9. The method of claim 8, wherein tracking the status of the API context comprises: instantiating a state model associated with the graphics application; and updating the state model based on the API command for each API command generated by the graphics application.
10. The method of claim 1, wherein said API command stream is intercepted by an application shim.
11. The method of claim 1, further comprising: in response to the breakpoint, determining whether to continue execution based on the breakpoint; if execution should continue, transmitting the API commands to the software layer to resume execution; or, if execution should not continue, storing the GPU context in the memory.
12. The method of claim 11, wherein determining whether to continue execution based on the breakpoint comprises comparing a counter associated with the breakpoint to a threshold associated with an iteration number of the replay loop.
13. The method of claim 1, wherein said initial state of said API context represents a state of a graphics processing unit associated with said API context at the beginning of said frame.
14. A non-transitory computer readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of: storing an initial state of an application programming interface (API) context in a memory, wherein the initial state of the API context corresponds to a beginning of a frame; intercepting an API command stream associated with the frame, wherein the API command stream is generated by a graphics application; transmitting the API command stream to a software layer implementing the API to render the frame; and, in response to a breakpoint, storing a graphics processing unit (GPU) context in the memory.
15. The non-transitory computer readable storage medium of claim 14, wherein said software layer is a driver.
16. The non-transitory computer readable storage medium of claim 14, wherein said steps further comprise tracking a status of said API context by: instantiating a state model associated with the graphics application; and updating the state model based on the API command for each API command generated by the graphics application.
17. A
system comprising: a graphics processing unit (GPU); and a memory configured to store an application shim, the application shim configured to: store an initial state of an application programming interface (API) context in the memory, wherein the initial state of the API context corresponds to a beginning of a frame; intercept an API command stream associated with the frame, wherein the API command stream is generated by a graphics application; transmit the API command stream to a software layer implementing the API to render the frame; and, in response to a breakpoint, store a GPU context in the memory.
18. The system of claim 17, wherein said graphics application is associated with one or more shader programs configured to be executed by said GPU.
19. The system of claim 17, wherein said software layer comprises a driver that implements the OpenGL API.
20. The system of claim 17, wherein said application shim is further configured to track the status of said API context by: instantiating a state model associated with the graphics application; and updating the state model based on the API command for each API command generated by the graphics application. |
System, method and computer program product for locally debugging graphics programs
Cross-reference to related applications
This application claims the benefit of U.S. Provisional Application Serial No. 61/730,025, filed on Nov. 26, 2012, which is incorporated herein by reference.
Technical field
This application relates to software design and, more particularly, to debugging of graphics programs.
Background
Programmers are now accustomed to being able to create and debug programs via numerous tools implemented in today's integrated development environments (IDEs) such as Microsoft Visual Studio. The programmer can create source code for the program executed by the target processor, compile the source code to generate the executable file, and run the executable file on the target processor. The IDE can include tools that allow the programmer to execute programs using breakpoints, step through the program one instruction at a time, run the program from breakpoint to breakpoint, and view the contents of memory or registers at different points during execution of the program.
Typically, the target processor may be a central processing unit (CPU), such as a processor in the Intel x86 family or the ARM Cortex family of processors that include a RISC (reduced instruction set computer)-based CPU core. Such a processor is implemented to have the ability to interrupt or preempt the execution of code executed by the processor. This capability enables programmers to debug programs via a single processor that is also used to execute an operating system (OS), IDE, or other software substantially simultaneously. However, today's conventional graphics processing units (GPUs) may not be able to operate in this manner. For example, a conventional GPU might not enable preemption for specific processes executing on the GPU. That is, the programmer cannot stop execution of the program on the GPU while allowing other operations, such as generating graphical information for display on the attached monitor, to continue. Without such capabilities, test platforms for GPUs are typically limited to remote systems with GPUs connected to the client system via a network, or local systems with multiple GPUs, one of which is dedicated to display operations and another of which is dedicated to test operations. Such system setup and operation are more complex and require additional hardware and special configurations. Developing code on a single-GPU system is useful to programmers, and single-GPU systems are available on a large number of desktops and laptops. Therefore, there is a need to address this and/or other issues related to the prior art.
Summary of the invention
Systems, methods, and computer program products are provided for debugging graphics programs via a system employing a single graphics processing unit. The method includes the steps of storing an initial state of an application programming interface context in memory, intercepting an API command stream associated with the frame, transmitting the API command stream to a software layer implementing the API to render the frame, and, in response to a breakpoint, storing the graphics processing unit context in memory.
The initial state of the API context corresponds to the beginning of the frame, and the API command stream is generated by the graphics application.
DRAWINGS
Figure 1 shows a flow diagram of a method for debugging a graphics program using a system having a single graphics processing unit, in accordance with one embodiment;
Figure 2 illustrates a system configured to debug a graphics program in accordance with the prior art;
Figure 3 illustrates a system configured to debug a graphics program, in accordance with one embodiment;
Figure 4 illustrates a parallel processing unit, in accordance with one embodiment;
Figure 5 illustrates the streaming multiprocessor of Figure 4, in accordance with one embodiment;
Figure 6 is a schematic illustration of a graphics processing pipeline implemented by the parallel processing unit of Figure 4, in accordance with one embodiment;
Figure 7A illustrates a portion of code for a shader program, in accordance with one embodiment;
Figure 7B illustrates a system for debugging a shader program using a single graphics processing unit, in accordance with another embodiment;
Figures 8A, 8B and 8C illustrate a flow chart of a method for debugging a graphics program with a single graphics processing unit, in accordance with another embodiment;
Figure 9 illustrates an exemplary system in which the various architectures and/or functionalities of the various embodiments above may be implemented.
Detailed description
The present disclosure describes a mechanism for debugging a graphics program on a system with a single GPU. The application shim is configured to intercept API commands generated by graphics applications executed by the host processor. The graphics application, when compiled, is configured to generate API commands that are passed to the software layer that implements the API, such as a driver or runtime library. Instructions configured to generate API commands for the software layer can be replaced by instructions configured to generate API commands for the application shim. Instructions can be replaced automatically by a software tool operating on the binary code or manually by linking the application shim to the source code for the graphics application. When executing a graphics application, API commands are routed to the application shim instead of the software layer.
The application shim is configured to track the API context associated with the graphics application. The API context can be tracked by creating and modifying a state model that represents the current API context. The application shim is configured to update the state model based on API commands received from the graphics application. After the state model is updated, the application shim can forward API commands to the software layer, as the graphics application originally intended.
The application shim can be configured to implement a replay mechanism that allows the debug tool to implement various debug techniques that are typically only associated with conventional CPUs. The replay mechanism includes the steps of storing an initial state of the API context at the start of a frame of image data to be rendered for display, storing the API command stream generated by the graphics application for the frame of image data in a replay buffer, and initiating a replay loop that repetitively renders the frame of image data several times.
Each pass of the replay loop includes restoring the API context to match the initial state of the API context and streaming the API commands in the replay buffer to the software layer. When a breakpoint is encountered during a replay loop, the current state of the GPU context can be captured during rendering of the frame. Using the replay mechanism described above, the debugging tool can allow the programmer to stop at a breakpoint in the program, step through the program one instruction at a time, step from breakpoint to breakpoint, and so on, on a system with a single GPU, without freezing the display.
FIG. 1 shows a flow diagram of a method 100 for debugging a graphics program using a system with a single GPU, in accordance with one embodiment. At step 102, the initial state of the API context is stored in memory. The initial state of the API context may correspond to information included in the state model representing the API context at the beginning of the frame. The initial state of the API context is copied to a separate data structure in memory so that the state of the API context can be reset at a later point in time. At step 104, the API command stream generated by the graphics application is intercepted. The API command stream can be stored in the replay buffer. There may be multiple API contexts at a given point in time, and each API context may be associated with one or more API command streams generated by the graphics application. In one embodiment, two or more API command streams associated with an API context are stored in a replay buffer. In the context of this description, a replay buffer is any data structure allocated in memory and configured to store an ordered list of API commands. In one embodiment, the replay buffer is a linked list. In another embodiment, the replay buffer is a FIFO.
At step 106, the API command stream is passed to the software layer. In one embodiment, the software layer may be a driver that implements an API, such as a driver that implements the OpenGL API. In another embodiment, the software layer can be a runtime library that implements an API, such as a runtime library that implements the Direct3D API. In such an embodiment, the software layer can be linked to another driver or other intermediate layer. At step 108, the application shim determines if a breakpoint has been reached. In the context of this disclosure, a breakpoint is a special instruction executed by a processor that causes execution to halt and potentially causes an error handler or other routine to be executed. A breakpoint can be a special instruction or can be associated with another instruction (e.g., an instruction prefix) that indicates that the instruction is associated with a breakpoint in the program. In one embodiment, the breakpoint may (directly or indirectly through an error handler) cause the GPU to pass a message to the driver indicating that the breakpoint has been reached and the GPU has stopped execution of further instructions. If the breakpoint is not reached, then method 100 returns to step 106, where additional API commands are passed to the software layer. Returning to step 108, in response to reaching the breakpoint, method 100 proceeds to step 110, where the current state of the GPU context is stored in memory.
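A hedged C sketch of this method, including the replay loop initiated at step 112 described next, follows. The helper functions are hypothetical stand-ins for application-shim internals, not a published API.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for application-shim internals (not a real API). */
typedef struct { int opcode; } api_command;
static void save_initial_api_state(void)            { /* snapshot state model */ }
static void restore_api_context(void)               { /* emit restore commands */ }
static void send_api_command(const api_command *c)  { (void)c; /* to driver */ }
static bool breakpoint_hit(void)                    { return false; }
static void save_gpu_context_to_shadow_memory(void) { /* copy registers etc. */ }

/* Replay loop corresponding to steps 102-112 of FIG. 1: restore the initial
 * API context, retransmit the recorded API command stream, and capture the
 * GPU context into shadow memory whenever a breakpoint is reported. */
void replay_frame(const api_command *buf, int n_cmds, int iterations)
{
    save_initial_api_state();                          /* step 102 */
    for (int pass = 0; pass < iterations; pass++) {
        restore_api_context();                         /* step 112 */
        for (int i = 0; i < n_cmds; i++) {
            send_api_command(&buf[i]);                 /* step 106 */
            if (breakpoint_hit())                      /* step 108 */
                save_gpu_context_to_shadow_memory();   /* step 110 */
        }
    }
}
```

Because the GPU context is copied into shadow memory rather than held live on the GPU, the GPU can keep servicing other contexts (for example, the operating system's display context) between passes.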
At step 112, a replay loop is initiated that causes the initial state of the API context to be restored, the API command stream to be retransmitted to the software layer, and another state of the GPU context to be stored in memory.
More illustrative information will now be set forth regarding various optional architectures and features with which the foregoing mechanism may or may not be implemented, per the desires of the user. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any way. Any of the following features may optionally be incorporated with or without the exclusion of other features described.
Figure 2 illustrates a system configured to debug a graphics program in accordance with the prior art. As shown in FIG. 2, the system includes a client platform 200 coupled to a target platform 250 via a network 230. The client platform 200 can be, for example, a desktop computer, a laptop, a tablet, or any other system configured to run an IDE or other debugging software. The client platform 200 includes a central processing unit (CPU) 201, a memory 204, a GPU 206, a display device 208, and a network interface controller (NIC) 203. The CPU 201 may be an x86-type processor, a RISC processor, a PowerPC processor, or the like. Memory 204 can be a volatile memory such as a dynamic random access memory (DRAM). Although not explicitly shown, the client platform 200 can include a non-volatile storage device such as a hard disk drive (HDD) or other type of magnetic or optical storage system. The NIC 203 can implement the TCP/IP protocol for connecting to one or more other devices over the network 230, which can be, for example, a local area network (LAN), a wide area network (WAN), the Internet, and the like. Display device 208 can be a liquid crystal display (LCD) device, an organic light emitting diode (OLED) display device, a cathode ray tube (CRT) display device, or the like.
GPU 206 is a processor that implements a programmable, parallel processing architecture that includes at least a portion of an image processing pipeline. The image processing pipeline is configured to generate image data for display on display device 208, which may be connected to GPU 206 via any type of communication link, such as a video graphics array (VGA) connection, a high definition multimedia interface (HDMI) connection, a display port (DP) connection, etc. Although not explicitly shown, GPU 206 can be coupled to a local graphics memory, such as synchronous dynamic random access memory (SDRAM). The GPU 206 can be configured to generate frames of image data based on commands transmitted from the driver executed by the CPU 201 to the GPU 206. The frames of image data may be stored in a frame buffer in the local graphics memory and converted to video signals that are transmitted to display device 208 via the communication link.
The target platform 250 can be, for example, another desktop computer connected to the network 230 and including a second CPU 251, a second GPU 256, a second memory 254, and a second NIC 253. The CPU 251, the memory 254, and the NIC 253 are similar to the above-described CPU 201, memory 204, and NIC 203. The second GPU 256 can be referred to as a target device. Development applications such as IDEs or other types of debugging tools can be executed by the client platform 200.
The development application can cause the graphics program (i.e., the shader) to be executed by the GPU 256 and implement debugging mechanisms, such as breakpoints, instruction stepping, etc., through the development application executing on the client platform 200. It will be appreciated that the target platform 250 may not include a display device attached to the GPU 256 because the GPU 256 may be stopped during debugging, thereby preventing the GPU 256 from generating display signals for the display device. However, the programmer can view the state of GPU 256 on display device 208 of client platform 200 because GPU 206 does not stop during debugging and is therefore capable of generating display signals for display device 208.
This type of remote system for debugging graphics programs may be sufficient to satisfy certain situations in which the programmer utilizes a client platform 200 that is located in a central office and connected to one or more target platforms 250 via a local area network. However, this system requires additional hardware that adds unnecessary costs and is complicated to set up, requiring the programmer or network administrator to configure various IP addresses of the target platform 250 and configure the development application accordingly. Many times programmers will only be able to access a regular system (such as a desktop or laptop) that includes only a single GPU, and the programmer may not be able to access a target platform connected to the network.
It will be appreciated that alternative systems merging multiple GPUs into a single platform can be constructed, in which at least one GPU can be dedicated to generating image data for display and at least one other GPU is dedicated to debugging operations. This type of system has been used to debug general-purpose computing on graphics processing units (GPGPU) applications. However, such a system requires a separate driver for the GPU dedicated to debugging operations, which prevents the GPU dedicated to debugging operations from processing the graphics program. That is, conventional operating systems such as Microsoft Windows are configured to assign image operations to any available GPU configured to process graphics programs. Therefore, the development application cannot stop the operation of the graphics program for debugging purposes without potentially preventing the operating system from displaying image data generated by the operating system or other applications. This problem can be solved by using the system described below.
FIG. 3 illustrates a system configured to debug a graphics program, in accordance with one embodiment. As shown in FIG. 3, the client platform 300 is similar to the client platform 200 in that the client platform 300 includes a CPU 301, a memory 304, a GPU 306, a display device 308, and a NIC 303. In one embodiment, these components are similar to the components of the client platform 200. The memory 304 can include an operating system (OS) 331, a driver 332, an IDE 333, a graphics application 334, one or more shader programs 335, and a shadow state memory 336. The OS 331 can be Microsoft Windows, Linux, Mac OS X, and the like. The IDE 333 can be Microsoft Visual Studio, NVIDIA Nsight (an extension to Visual Studio), the open source Eclipse Platform, or any other type of development environment or debugging software that can debug graphics programs.
Driver 332 is configured to transmit instructions to GPU 306 to perform tasks on GPU 306.
In one embodiment, driver 332 implements an API defined by the OpenGL specification. The API enables the graphics application 334 to generate hardware-independent API commands that are sent to the driver 332, which in turn causes the GPU 306 to implement the operations specified by the API commands. In another embodiment, driver 332 implements an API associated with a runtime library that implements the Direct3D API. The API enables the graphics application 334 to generate hardware-independent API commands that are sent to the runtime library, which in turn transmits additional API commands to the driver 332, which causes the GPU 306 to perform the operations specified by the API commands. It will be appreciated that API commands generated by graphics application 334 may pass through one or more intermediate software layers, such as application shims, libraries, etc., before being transmitted directly or indirectly to driver 332.
Graphics application 334 may be a software application configured to be executed by CPU 301 to generate API commands that are passed to the software layer. Graphics application 334 can be associated with one or more shader programs 335, such as vertex shaders, geometry shaders, or fragment shaders (i.e., pixel shaders) configured to be executed by programmable processing units of GPU 306. A shader is a generic term for a set of instructions configured to be executed by a GPU to transform geometric primitives or shade pixels (i.e., generate color component values for one or more pixels). Each shader can be configured to receive one or more input buffers (e.g., vertex buffers, etc.) and generate one or more output buffers (e.g., triangle patches, color vectors, etc.).
As noted above, conventional GPUs do not have the ability to stop during execution while continuing to produce image data for display on a display device. To solve this problem, the application shim is configured to replay the operations specified by the graphics application, such that IDE 333 will display the state of GPU 306 as if GPU 306 were stopped during debugging while the GPU is actually allowed to continue execution, allowing the GPU context to be switched from the API context associated with the graphics application to, for example, an API context associated with the operating system 331 that generates image data for display on the display device 308.
In one embodiment, the application shim is configured to track the API context associated with the graphics application 334. The application shim stores the initial state of the API context in memory 304 at the beginning of a particular frame or frames. The programmer can use the debug tool to indicate which frame or frames are of interest. The application shim stores the API command stream for one or more frames generated by graphics application 334 in memory 304. The application shim can then initiate the replay loop to repeatedly execute the API command stream to render one or more frames of image data several times. That is, a single iteration of the replay loop executes the API command stream to render one or more frames of image data. At the end of the API command stream, the initial state of the API context can be restored, and the API command stream can be replayed to re-render one or more frames of image data in substantially the same order.
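As one concrete illustration of the interception just described, the following C sketch wraps the standard OpenGL glDrawArrays entry point. The replay buffer layout, the shim_ naming, and the way the real entry point is resolved are assumptions for illustration only, not a description of any particular shim implementation.

```c
#include <stddef.h>

/* Function-pointer type matching glDrawArrays(GLenum, GLint, GLsizei). */
typedef void (*draw_arrays_fn)(unsigned mode, int first, int count);

static void noop_draw(unsigned mode, int first, int count)
{ (void)mode; (void)first; (void)count; }

/* In a real shim this pointer would be resolved from the driver library. */
static draw_arrays_fn real_glDrawArrays = noop_draw;

typedef struct { unsigned mode; int first, count; } recorded_call;
static recorded_call replay_buf[4096];   /* replay buffer for one frame */
static size_t replay_len;

/* The graphics application is linked (or patched) to call this wrapper
 * instead of the driver's glDrawArrays. */
void shim_glDrawArrays(unsigned mode, int first, int count)
{
    if (replay_len < 4096)               /* record the call for replay   */
        replay_buf[replay_len++] = (recorded_call){ mode, first, count };
    /* ... update the state model for the current API context here ...   */
    real_glDrawArrays(mode, first, count);  /* forward, as the application
                                               originally intended       */
}
```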
API commands can include calls to load a particular shader program, calls specifying a push buffer that includes multiple geometric primitives (such as triangles), draw calls, and so on. The API command stream can be saved in the replay buffer of memory 304 and then replayed as many times as desired by the user to perform the debugging operation.
This replay functionality can be utilized by IDE 333 or other debugging tools to implement various debugging techniques. For example, graphics application 334 and/or shader program 335 can be debugged by executing graphics application 334 and initiating the replay loop during a particular frame. A breakpoint (i.e., a special instruction) can be included at a line in shader program 335 that causes GPU 306 to stop execution of any further instructions associated with graphics application 334. In one embodiment, the breakpoint instruction causes the GPU 306 to execute an error handler. The error handler can cause a message (e.g., via driver 332) to be passed to the application shim. In one embodiment, the application shim is configured to copy the current state of the GPU context into the shadow state memory 336. Once the current state of the GPU context has been stored in the shadow state memory 336, the GPU 306 can be allowed to continue execution of the shader program 335 and any other instructions specified by the API command stream. In general, allowing GPU 306 to continue execution will prevent the programmer from checking the state of the GPU context (i.e., registers, associated memory constructs, etc.) because the GPU context will be updated as additional instructions are executed. However, in this case, the state of the GPU context is stored in the shadow state memory 336 and is not affected by allowing the program to continue or allowing different contexts to be loaded on the GPU 306. Thus, although GPU 306 has continued with other tasks, the programmer can check the information stored in shadow state memory 336.
In one embodiment, GPU 306 includes parallel processing unit 400 as described below in connection with Figures 4 and 5. It will be appreciated that other embodiments may include GPUs employing different architectures, and the architectures shown below are for exemplary purposes.
FIG. 4 illustrates a parallel processing unit (PPU) 400, in accordance with one embodiment. Although a parallel processor is provided herein as an example of a PPU 400, it should be strongly noted that such a processor is for exemplary purposes only, and any processor may be employed to supplement and/or replace it for the same purpose. In one embodiment, PPU 400 is configured to execute multiple threads concurrently in two or more streaming multiprocessors (SMs) 450. A thread (i.e., an execution thread) is an instantiation of a set of instructions that is executed within a particular SM 450. Each SM 450, described in detail below in connection with FIG. 5, may include, but is not limited to, one or more processing cores, one or more load/store units (LSUs), a level one (L1) cache, a shared memory, and the like.
In one embodiment, PPU 400 includes an input/output (I/O) unit 405 that is configured to transmit and receive communications (i.e., commands, data, etc.) to and from a central processing unit (CPU) via system bus 402. I/O unit 405 can implement a Peripheral Component Interconnect Express (PCIe) interface for communication over a PCIe bus.
In alternative embodiments, I/O unit 405 can implement other types of well-known bus interfaces. The PPU 400 also includes a host interface unit 410 that decodes commands and transmits the commands to the task management unit 415 of the PPU 400 or to other units (e.g., memory interface 480) as the commands may specify. Host interface unit 410 is configured to route communications between the various logical units of PPU 400.

In one embodiment, a program encoded as a command stream is written to a buffer by the CPU. A buffer is a region in memory, such as memory 404 or system memory, that is accessible (i.e., readable/writable) by both the CPU and PPU 400. The CPU writes the command stream to the buffer and then passes a pointer to the start of the command stream to the PPU 400. Host interface unit 410 provides pointers to one or more streams to the task management unit (TMU) 415. The TMU 415 selects one or more streams and is configured to organize the selected streams as a pool of pending grids. The pool of pending grids can include new grids that have not yet been selected for execution and grids that have been partially executed and suspended.

A work distribution unit 420 coupled between the TMU 415 and the SMs 450 manages a pool of active grids, selecting and dispatching active grids for execution by the SMs 450. A pending grid is transferred from the TMU 415 to the active grid pool when the pending grid is eligible to execute, i.e., has no unresolved data dependencies. When execution of an active grid is blocked by a dependency, the active grid is moved back to the pending pool. When execution of a grid is completed, the grid is removed from the active grid pool by the work distribution unit 420 (a rough sketch of this pool management appears at the end of this passage). In addition to receiving grids from host interface unit 410 and work distribution unit 420, the TMU 415 also receives grids that are dynamically generated by the SMs 450 during execution of a grid. These dynamically generated grids join the other pending grids in the pending grid pool.

In one embodiment, the CPU executes a driver kernel that implements an application programming interface (API) enabling one or more applications executing on the CPU to schedule operations for execution on the PPU 400. An application can include instructions (i.e., API commands) that cause the driver kernel to generate one or more grids for execution. In one embodiment, PPU 400 implements a SIMD (Single Instruction, Multiple Data) architecture in which each thread block (i.e., warp) in a grid is concurrently executed on a different data set by different threads in the thread block. The driver kernel defines thread blocks comprising k related threads, such that threads in the same thread block can exchange data through shared memory. In one embodiment, a thread block comprises 32 related threads, and a grid is an array of one or more thread blocks that execute the same stream, where different thread blocks can exchange data through global memory.

In one embodiment, PPU 400 includes X SMs 450(X). For example, the PPU 400 can include 15 distinct SMs 450. Each SM 450 is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular thread block concurrently. Each SM 450 is connected to a level-two (L2) cache 465 via a crossbar 460 (or other type of interconnect network). The L2 cache 465 is coupled to one or more memory interfaces 480. The memory interfaces 480 implement 16, 32, 64, or 128-bit data buses, or the like, for high-speed data transfer.
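The following C fragment is the loose sketch of the pending/active grid pool bookkeeping promised above. The structures, fields, and pool capacity are hypothetical simplifications of what dedicated hardware such as the TMU 415 and work distribution unit 420 would actually implement.

    #include <stdbool.h>

    enum { MAX_GRIDS = 64 };                       /* illustrative capacity */
    typedef struct Grid { int id; bool deps_resolved; } Grid;

    static Grid *pending[MAX_GRIDS]; static int n_pending;
    static Grid *active[MAX_GRIDS];  static int n_active;

    /* Promote pending grids whose data dependencies are resolved into the
     * active pool, as the work distribution unit does for eligible grids. */
    void promote_ready_grids(void)
    {
        for (int i = 0; i < n_pending; ) {
            if (pending[i]->deps_resolved && n_active < MAX_GRIDS) {
                active[n_active++] = pending[i];
                pending[i] = pending[--n_pending];  /* swap-remove */
            } else {
                i++;
            }
        }
    }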
In one embodiment, PPU 400 includes U memory interfaces 480(U), with each memory interface 480(U) connected to a corresponding memory device 404(U). For example, PPU 400 can be connected to up to six memory devices 404, such as graphics double-data-rate, version 5, synchronous dynamic random access memory (GDDR5 SDRAM).

In one embodiment, PPU 400 implements a multi-level memory hierarchy. Memory 404 comprises SDRAM located off-chip and coupled to PPU 400. Data from memory 404 can be fetched and stored in the L2 cache 465, which is located on-chip and shared between the SMs 450. In one embodiment, each SM 450 also implements an L1 cache. The L1 cache is private memory dedicated to a particular SM 450. Each L1 cache is coupled to the shared L2 cache 465. Data from the L2 cache 465 can be fetched and stored in each of the L1 caches for processing in the functional units of the SMs 450.

In one embodiment, the PPU 400 comprises a graphics processing unit (GPU). The PPU 400 is configured to receive commands that specify shader programs for processing graphics data. Graphics data can be defined as a set of primitives such as points, lines, triangles, quads, triangle strips, and the like. Typically, a primitive includes data that specifies a number of vertices for the primitive (e.g., in a model-space coordinate system) as well as attributes associated with each vertex of the primitive. The attributes may include one or more of position, color, surface normal vector, texture coordinates, and the like. The PPU 400 can be configured to process the graphics primitives to generate a frame buffer (i.e., pixel data for each of the pixels of the display). The driver kernel implements a graphics processing pipeline, such as the graphics processing pipeline defined by the OpenGL API.

An application writes model data for a scene (i.e., a collection of vertices and attributes) to memory. The model data defines each of the objects that may be visible on the display. The application then makes an API call to the driver kernel that requests the model data to be rendered and displayed. The driver kernel reads the model data and writes commands to the buffer to perform one or more operations to process the model data. The commands can encode different shader programs, including one or more of a vertex shader, hull shader, geometry shader, pixel shader, and the like. For example, TMU 415 can configure one or more SMs 450 to execute a vertex shader program that processes a number of vertices defined by the model data. In one embodiment, the TMU 415 can configure different SMs 450 to execute different shader programs concurrently. For example, a first subset of SMs 450 can be configured to execute a vertex shader program while a second subset of SMs 450 can be configured to execute a pixel shader program. The first subset of SMs 450 processes vertex data to produce processed vertex data and writes the processed vertex data to the L2 cache 465 and/or the memory 404. After the processed vertex data is rasterized (i.e., transformed from three-dimensional data into two-dimensional data in screen space) to produce fragment data, the second subset of SMs 450 executes a pixel shader to produce processed fragment data, which is then blended with other processed fragment data and written to the frame buffer in memory 404. The vertex shader program and pixel shader program can execute concurrently, processing different data from the same scene in a pipelined fashion until all of the model data for the scene has been rendered to the frame buffer.
The contents of the frame buffer are then transmitted to a display controller for display on a display device.

The PPU 400 can be included in desktop computers, laptop computers, tablet computers, smart phones (e.g., wireless, hand-held devices), personal digital assistants (PDAs), digital cameras, hand-held electronic devices, and the like. In one embodiment, the PPU 400 is embodied on a single semiconductor substrate. In another embodiment, the PPU 400 is included in a system-on-a-chip (SoC) along with one or more other logic units such as a reduced instruction set computer (RISC) CPU, a memory management unit (MMU), a digital-to-analog converter (DAC), and the like.

In one embodiment, the PPU 400 can be included on a graphics card that includes one or more memory devices 404 such as GDDR5 SDRAM. The graphics card can be configured to interface with a PCIe slot on a motherboard of a desktop computer that includes, e.g., a Northbridge chipset and a Southbridge chipset. In yet another embodiment, the PPU 400 may be an integrated graphics processing unit (iGPU) included in the chipset (i.e., Northbridge) of the motherboard.

FIG. 5 illustrates the streaming multiprocessor 450 of FIG. 4, in accordance with one embodiment. As shown in FIG. 5, the SM 450 includes an instruction cache 505, one or more scheduler units 510, a register file 520, one or more processing cores 550, one or more double-precision units (DPUs) 551, one or more special function units (SFUs) 552, one or more load/store units (LSUs) 553, an interconnect network 580, a shared memory/L1 cache 570, and one or more texture units 590.

As described above, the work distribution unit 420 dispatches active grids for execution on one or more SMs 450 of the PPU 400. The scheduler unit 510 receives the grids from the work distribution unit 420 and manages instruction scheduling for one or more thread blocks of each active grid. The scheduler unit 510 schedules threads for execution in groups of parallel threads, where each group is called a warp. In one embodiment, each warp includes 32 threads. The scheduler unit 510 can manage a plurality of different thread blocks, allocating the thread blocks to warps for execution and then scheduling instructions from the plurality of different warps on the various functional units (i.e., cores 550, DPUs 551, SFUs 552, and LSUs 553) during each clock cycle.

In one embodiment, each scheduler unit 510 includes one or more instruction dispatch units 515. Each dispatch unit 515 is configured to transmit instructions to one or more of the functional units. In the embodiment illustrated in FIG. 5, the scheduler unit 510 includes two dispatch units 515 that enable two different instructions from the same warp to be dispatched during each clock cycle. In alternative embodiments, each scheduler unit 510 can include a single dispatch unit 515 or additional dispatch units 515.

Each SM 450 includes a register file 520 that provides a set of registers for the functional units of the SM 450. In one embodiment, the register file 520 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 520. In another embodiment, the register file 520 is divided between the different warps being executed by the SM 450. The register file 520 provides temporary storage for operands connected to the data paths of the functional units. Each SM 450 comprises L processing cores 550. In one embodiment, the SM 450 includes a large number (e.g., 192, etc.) of distinct processing cores 550.
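The per-cycle warp selection and dual-dispatch behavior of the scheduler unit 510 described above can be pictured with the following hypothetical C sketch; the round-robin policy, structure fields, and constants are illustrative assumptions, not the actual hardware scheduling algorithm.

    #include <stdbool.h>

    enum { NUM_WARPS = 64, ISSUE_WIDTH = 2 };    /* illustrative values */
    typedef struct { bool ready; int pc; } Warp;

    /* Pick the next ready warp after `last` and issue up to ISSUE_WIDTH
     * instructions from it, mimicking two dispatch units per scheduler. */
    int schedule_cycle(Warp warps[NUM_WARPS], int last)
    {
        for (int i = 1; i <= NUM_WARPS; i++) {
            int w = (last + i) % NUM_WARPS;
            if (warps[w].ready) {
                for (int k = 0; k < ISSUE_WIDTH; k++)
                    warps[w].pc++;  /* stand-in for dispatching one instruction */
                return w;
            }
        }
        return last;                /* no warp ready this cycle */
    }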
Each core 550 is a fully pipelined, single-precision processing unit that includes a floating-point arithmetic logic unit and an integer arithmetic logic unit. In one embodiment, the floating-point arithmetic logic units implement the IEEE 754-2008 standard for floating-point arithmetic. Each SM 450 also comprises M DPUs 551 that implement double-precision floating-point arithmetic, N SFUs 552 that perform special functions (e.g., pixel blending operations, etc.), and P LSUs 553 that implement load and store operations between the shared memory/L1 cache 570 and the register file 520. In one embodiment, the SM 450 includes 64 DPUs 551, 32 SFUs 552, and 32 LSUs 553.

Each SM 450 includes an interconnect network 580 that connects each of the functional units to the register file 520 and the shared memory/L1 cache 570. In one embodiment, the interconnect network 580 is a crossbar that can be configured to connect any of the functional units to any of the registers in the register file 520 or to any memory location in the shared memory/L1 cache 570.

In one embodiment, the SM 450 is implemented within a GPU. In such an embodiment, the SM 450 comprises J texture units 590. The texture units 590 are configured to load texture maps (e.g., 2D arrays of texels) from memory 404 and to sample the texture maps to produce sampled texture values for use in shader programs. The texture units 590 implement texture operations such as anti-aliasing operations using mip-maps (i.e., texture maps of varying levels of detail). In one embodiment, the SM 450 includes 16 texture units 590.

The PPU 400 described above can be configured to perform highly parallel computations much faster than conventional CPUs. Parallel computing has advantages in graphics processing, data compression, metrology, stream processing algorithms, and the like.

FIG. 6 is a schematic diagram of a graphics processing pipeline 600 implemented by the PPU 400 of FIG. 4, in accordance with one embodiment. The graphics processing pipeline 600 is an abstract flow diagram of the processing steps implemented to generate 2D computer-generated images from 3D geometry data. As is well known, pipeline architectures can perform long-latency operations more efficiently by splitting the operation into a plurality of stages, where the output of each stage is coupled to the input of the next successive stage. Thus, the graphics processing pipeline 600 receives input data 601 that is passed from one stage of the graphics processing pipeline 600 to the next to generate output data 602. In one embodiment, the graphics processing pipeline 600 may represent the graphics processing pipeline defined by the OpenGL API.

As shown in FIG. 6, the graphics processing pipeline 600 comprises a pipeline architecture that includes a number of stages. The stages include, but are not limited to, a data assembly stage 610, a vertex shading stage 620, a primitive assembly stage 630, a geometry shading stage 640, a viewport scale, cull, and clip (VSCC) stage 650, a rasterization stage 660, a fragment shading stage 670, and a raster operations stage 680. In one embodiment, the input data 601 comprises commands that configure the processing units to implement the stages of the graphics processing pipeline 600 and geometric primitives (e.g., points, lines, triangles, quads, triangle strips or fans, etc.) to be processed by the stages.
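As a minimal illustration of the stage-to-stage coupling just described, the following C sketch chains an ordered array of stage functions so that each stage's output buffer becomes the next stage's input. The buffer type and the loop that drives the stages are hypothetical abstractions of what the hardware and driver actually do.

    /* Hypothetical buffer type flowing between pipeline stages. */
    typedef struct Buf Buf;

    /* Each stage consumes an input buffer and produces an output buffer. */
    typedef Buf *(*Stage)(Buf *in);

    /* Run input data through n stages in order, coupling the output of
     * each stage to the input of the next successive stage. */
    Buf *run_pipeline(Stage stages[], int n, Buf *input)
    {
        for (int i = 0; i < n; i++)
            input = stages[i](input);
        return input;   /* the final output data */
    }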
Output data 602 may comprise pixel data (i.e., color data) that is copied into a frame buffer or other type of surface data structure in memory.

The data assembly stage 610 receives the input data 601, which specifies vertex data for high-order surfaces, primitives, and the like. The data assembly stage 610 collects the vertex data into a temporary store or queue, such as by receiving a command from the host processor that includes a pointer to a buffer in memory and reading the vertex data from the buffer. The vertex data is then transmitted to the vertex shading stage 620 for processing.

The vertex shading stage 620 processes vertex data by performing a set of operations (i.e., a vertex shader or program) once for each of the vertices. Vertices can be specified, for example, as a 4-coordinate vector associated with one or more vertex attributes. The vertex shading stage 620 can manipulate properties such as position, color, texture coordinates, and the like. In other words, the vertex shading stage 620 performs operations on the vertex coordinates or other vertex attributes associated with a vertex. Such operations commonly include lighting operations (i.e., modifying the color attributes of a vertex) and transformation operations (i.e., modifying the coordinate space of a vertex). For example, vertices may be specified using coordinates in an object coordinate space, which are transformed by multiplying the coordinates by a matrix that translates the coordinates from the object coordinate space into world space or normalized device coordinate (NDC) space. The vertex shading stage 620 generates transformed vertex data that is transmitted to the primitive assembly stage 630.

The primitive assembly stage 630 collects vertices output by the vertex shading stage 620 and groups the vertices into geometric primitives for processing by the geometry shading stage 640. For example, the primitive assembly stage 630 can be configured to group every three consecutive vertices as a geometric primitive (i.e., a triangle) for transmission to the geometry shading stage 640. In some embodiments, specific vertices may be reused for consecutive geometric primitives (e.g., two consecutive triangles in a triangle strip may share two vertices). The primitive assembly stage 630 transmits geometric primitives (i.e., collections of associated vertices) to the geometry shading stage 640.

The geometry shading stage 640 processes geometric primitives by performing a set of operations (i.e., a geometry shader or program) on the geometric primitives. Tessellation operations can generate one or more geometric primitives from each geometric primitive. In other words, the geometry shading stage 640 can subdivide each geometric primitive into a finer mesh of two or more geometric primitives for processing by the rest of the graphics processing pipeline 600. The geometry shading stage 640 transmits geometric primitives to the viewport SCC stage 650.

The viewport SCC stage 650 performs viewport scaling, culling, and clipping of the geometric primitives. Each surface being rendered to is associated with an abstract camera position. The camera position represents the location of a viewer looking at the scene and defines a viewing frustum that encloses the objects of the scene. The viewing frustum can include a viewing plane, a rear plane, and four clipping planes. Any geometric primitive entirely outside of the viewing frustum can be culled (i.e., discarded) because the geometric primitive will not contribute to the final rendered scene.
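Returning to the transformation operation described above for the vertex shading stage 620, the following C sketch shows the core arithmetic: a 4-component position multiplied by a 4x4 matrix (e.g., an object-space-to-clip-space matrix). The vector type and the row-major matrix layout are illustrative choices, not requirements of any particular API.

    /* A 4-coordinate vertex position (x, y, z, w). */
    typedef struct { float x, y, z, w; } Vec4;

    /* Multiply a position by a row-major 4x4 transformation matrix, as a
     * vertex shader does when moving vertices between coordinate spaces. */
    Vec4 transform(const float m[4][4], Vec4 v)
    {
        Vec4 r;
        r.x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
        r.y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
        r.z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
        r.w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
        return r;
    }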
Any geometric primitive that is partially inside the viewing frustum and partially outside can be clipped (i.e., transformed into a new geometric primitive that is enclosed within the viewing frustum). Furthermore, each geometric primitive can be scaled based on a depth of the viewing frustum. All potentially visible geometric primitives are then transmitted to the rasterization stage 660.

The rasterization stage 660 converts the 3D geometric primitives into 2D fragments. The rasterization stage 660 can be configured to utilize the vertices of the geometric primitives to set up a set of plane equations from which various attributes can be interpolated. The rasterization stage 660 also computes a coverage mask for a plurality of pixels that indicates whether one or more sample locations for a pixel intercept the geometric primitive. In one embodiment, a z-test can also be performed to determine whether the geometric primitive is occluded by other geometric primitives that have already been rasterized. The rasterization stage 660 generates fragment data (i.e., interpolated vertex attributes associated with a particular sample location for each covered pixel) that is transmitted to the fragment shading stage 670.

The fragment shading stage 670 processes fragment data by performing a set of operations (i.e., a fragment shader or program) on each of the fragments. The fragment shading stage 670 can generate pixel data (i.e., color values) for the fragments, such as by performing lighting operations or sampling texture maps using interpolated texture coordinates for the fragments. The fragment shading stage 670 generates pixel data that is transmitted to the raster operations stage 680.

The raster operations stage 680 can perform various operations on the pixel data, such as performing alpha tests, stencil tests, and blending the pixel data with other pixel data corresponding to other fragments associated with the pixel. When the raster operations stage 680 has finished processing the pixel data (i.e., the output data 602), the pixel data can be written to a render target such as a frame buffer, a color buffer, or the like.

It will be appreciated that one or more additional stages can be included in the graphics processing pipeline 600 in addition to or in lieu of one or more of the stages described above. Various implementations of the abstract graphics processing pipeline can implement different stages. Furthermore, one or more of the stages described above (such as the geometry shading stage 640) can be excluded from the graphics processing pipeline in some embodiments. Other types of graphics processing pipelines are contemplated as being within the scope of the present disclosure. Moreover, any of the stages of the graphics processing pipeline 600 can be implemented by one or more dedicated hardware units within a graphics processor such as PPU 400. Other stages of the graphics processing pipeline 600 can be implemented by programmable hardware units such as the SM 450 of the PPU 400.

Unlike a GPGPU program that launches a single kernel on GPU 306 and spawns hundreds or thousands of threads, a graphics program is implemented by launching an initial kernel on GPU 306 that in turn launches one or more subsequent kernels without intervention by CPU 301. For example, the graphics program can launch a kernel on GPU 306 to implement the vertex shading stage 620 on one SM 450 (or multiple SMs 450). That kernel then launches a separate kernel to implement the geometry shading stage 640, which in turn launches another kernel to implement the fragment shading stage 670, and so forth.
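As an illustration of the blending performed in the raster operations stage 680 described above, the following C sketch implements conventional source-over alpha blending of a fragment color onto a destination pixel; the color representation is an illustrative assumption.

    /* RGBA color with components in [0, 1]. */
    typedef struct { float r, g, b, a; } Color;

    /* Source-over blend: composite a fragment (src) over the pixel already
     * in the render target (dst), weighting by the fragment's alpha. */
    Color blend_over(Color src, Color dst)
    {
        float ia = 1.0f - src.a;
        Color out;
        out.r = src.r * src.a + dst.r * ia;
        out.g = src.g * src.a + dst.g * ia;
        out.b = src.b * src.a + dst.b * ia;
        out.a = src.a + dst.a * ia;
        return out;
    }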
Additionally, some other stages of the graphics processing pipeline 600 may be implemented on fixed-function hardware units such as a rasterizer or a data assembler. It will be appreciated that results from one kernel may be processed by one or more intervening fixed-function hardware units before being processed by a subsequent kernel on an SM 450. As a consequence, a GPGPU program can potentially be resumed by loading saved state onto one of the SMs 450 and restarting the stopped kernel. In contrast to GPGPU programs, resuming a graphics program is much more difficult because the state of the many fixed-function units cannot easily be captured and reloaded. The replay mechanism described above alleviates this problem.

FIG. 7A shows a portion of code for a shader program 700, in accordance with one embodiment. As shown in FIG. 7A, shader program 700 is a vertex shader programmed in a high-level shader language such as NVIDIA Cg (C for Graphics). Shader program 700 is a portion of code developed to render tessellated geometry. It will be appreciated that the code of shader program 700 is for exemplary purposes only, and any shader program code can be debugged using the techniques described herein.

As described above, the data assembly stage 610 receives a list of geometric primitives from the graphics application 334. The API context is configured to set the state of the PPU 400 to implement at least a portion of the graphics processing pipeline 600. Once the API context is set up, another API command can launch a task on the PPU 400. The task may include a plurality of threads configured to implement, e.g., a shader program. Each of the plurality of threads represents a single instance of the shader program 700. As vertices are passed from the data assembly stage 610 to the vertex shading stage 620, the scheduler unit 510 assigns the vertices to available threads.

FIG. 7B illustrates a system 750 for debugging a shader program 335 using a single GPU 306, in accordance with another embodiment. As shown in FIG. 7B, system 750 includes graphics application 334, API intercept module 755, driver 332, GPU 306, and display device 308. The graphics application 334, the API intercept module 755, and the driver 332 are executed by the CPU 301. To debug the source code of the shader program 335, the source code can be compiled by a compiler and stored in a memory accessible by the GPU 306. The graphics application 334 generates an API command stream that includes at least one command to load the shader program 335 onto an SM 450 of the PPU 400 and launch a number of threads on the SM 450, each thread comprising an instance of the shader program 335. The API intercept module 755 is configured to intercept the API command stream generated by the graphics application 334. The API intercept module 755 tracks the state of the API context by managing a state model that is configured to simulate the changing state of the API context in response to the API command stream. The API intercept module 755 forwards the API command stream to the driver 332. Additionally, in some embodiments, the API command stream can be forwarded to one or more intermediate software layers, such as a runtime library that implements the Direct3D API.
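A condensed C sketch of the interception pattern just described follows: the intercept layer updates a state model that mirrors the API context and then forwards the command onward unchanged. The type and function names are hypothetical stand-ins for whatever API and driver entry points are actually in use.

    /* Hypothetical recorded API command. */
    typedef struct ApiCommand { int opcode; /* operands elided */ } ApiCommand;

    /* Assumed hooks: one updates the intercept module's state model, one is
     * the real dispatch path into the driver (or a runtime library). */
    extern void update_state_model(const ApiCommand *cmd);
    extern void driver_dispatch(const ApiCommand *cmd);

    /* Every intercepted command first updates the simulated API context,
     * then passes through, so the application behaves normally. */
    void intercepted_dispatch(const ApiCommand *cmd)
    {
        update_state_model(cmd);
        driver_dispatch(cmd);
    }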
Driver 332 then translates the API commands into commands or instructions that are transmitted to GPU 306 to perform various operations for generating image data for display on display device 308.

To begin debugging the source code of shader program 335, the IDE 333 can transmit a command to the API intercept module 755. At the start of the next frame, the API intercept module 755 can store the initial state of the API context. The API intercept module 755 then stores the API command stream generated by the graphics application 334 for one or more frames in a replay buffer. Once the API command stream has been stored in the replay buffer, the API intercept module 755 enters the replay loop until GPU 306 encounters a breakpoint. Again, a breakpoint is a special instruction inserted into the compiled shader program 335 by the IDE 333. When the breakpoint is executed by GPU 306, an error handler is executed that causes the API intercept module 755 to store the current state of the GPU context to the shadow state memory 336. It will be appreciated that storing the current state of the GPU context includes copying information related to the state model, including parameters, register values, and shared memory, to the shadow state memory 336. In one embodiment, the GPU context includes information related to the currently active threads, register values, local memory values, buffers stored in global memory, and the like. Once the current state of the GPU context is saved to the shadow state memory 336, the threads on GPU 306 can be allowed to complete execution. GPU 306 is then free to process different contexts. As a result, display device 308 continues to receive new image data for display, so that display device 308 is neither frozen on a previous frame nor turned off for lack of a video signal.

Returning to FIG. 7B, the API intercept module 755 enables various debugging functionality to be implemented on a system employing a single GPU 306. For example, a programmer can use IDE 333 to insert a breakpoint into the source code of shader program 335. The programmer can select a particular line of the source code and use a command implemented by the IDE 333 to insert the breakpoint. Once the programmer has specified the breakpoint, the programmer can select a command in IDE 333 to compile and execute the current version of the source code of shader program 335. In one embodiment, the breakpoint can be inserted into a compiled version of shader program 335 using binary patching techniques (i.e., replacing an instruction in the shader program with a jump to a set of instructions that includes the breakpoint). A graphics application 334 can be executed that generates an API command stream that is passed to a software layer such as driver 332. The API commands create an API context to render frames of video data. The API command stream causes the modified shader program 335 to be executed by GPU 306.

Once the first breakpoint is reached, the error handler is executed in GPU 306, and the instructions of the error handler cause the API intercept module 755 to save the state of the GPU context to the shadow state memory 336. Subsequently, GPU 306 is allowed to continue executing the threads until the frame has been rendered. The programmer can then use a graphical user interface (GUI) implemented by the IDE 333 to inspect the state of the GPU context stored in the shadow state memory 336.
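The two mechanisms described above (binary-patched breakpoints and an error handler that snapshots the GPU context before letting execution continue) can be sketched hypothetically in C as follows. The instruction encoding, types, and helper functions are all illustrative assumptions rather than a real GPU instruction set or driver interface.

    #include <stddef.h>
    #include <stdint.h>

    /* --- Binary patching: swap one instruction word for a jump. --- */
    typedef struct { uint32_t saved; uint32_t *site; } Patch;

    Patch insert_breakpoint(uint32_t *code, size_t index, uint32_t jump_insn)
    {
        Patch p = { code[index], &code[index] };
        code[index] = jump_insn;   /* jump to a block containing the breakpoint */
        return p;
    }

    void remove_breakpoint(Patch p)
    {
        *p.site = p.saved;         /* restore the original instruction */
    }

    /* --- Error handler: snapshot the GPU context, then resume. --- */
    typedef struct GpuContext GpuContext;            /* registers, memory, etc. */
    extern void read_gpu_context(GpuContext *out);   /* assumed driver hook */
    extern void resume_gpu(void);                    /* assumed driver hook */
    extern GpuContext *shadow_state_memory;          /* stand-in for 336 */

    void on_breakpoint(void)
    {
        read_gpu_context(shadow_state_memory);  /* preserve for inspection */
        resume_gpu();                           /* GPU continues; display stays live */
    }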
Once the programmer has inspected the state of GPU 306 at the first breakpoint, the programmer can repeat the process to set a different breakpoint in the source code of shader program 335.

Another debugging mechanism that can be implemented using the replay mechanism is instruction stepping. It will be appreciated that stepping to the next instruction or the next breakpoint is not as simple as merely executing the next pending instruction for a thread or thread group, because the GPU cannot simply stop and wait while the programmer decides whether the program should continue, without freezing the image displayed on display device 308. Instead, the API command stream can be repeatedly executed using the replay functionality enabled by the API intercept module 755, stopping execution at a different point in the program during each iteration of the replay loop, storing the current state of the GPU context at that point, and displaying the stored state of the GPU context, all while allowing GPU 306 to continue execution.

In one embodiment, the API intercept module 755 is configured to restore the initial state of the API context during each pass of the replay loop. Once the initial state of the API context has been restored, the API intercept module 755 can transmit the API command stream stored in the replay buffer to the software layer, which causes GPU 306 to render the one or more frames. For instruction stepping between breakpoints, the API intercept module 755 can keep track of a list of breakpoints that have already been encountered. Each breakpoint can be associated with a particular line number in shader program 335 and a particular API command (e.g., a third draw call). Breakpoints can also be associated with other state, such as particular primitives (e.g., vertices, triangles, fragments, etc.), particular frames, and the like. When GPU 306 executes a breakpoint, the error handler causes the API intercept module 755 to evaluate the particular breakpoint that triggered the error and to determine whether the program should be allowed to continue or whether the current breakpoint is the next sequential breakpoint whose state should be displayed to the programmer. The next breakpoint can represent a single step from the previous instruction.

In this manner, execution is stepped instruction by instruction or breakpoint by breakpoint. However, the entire frame (or a number of frames) is actually re-rendered during each pass of the replay loop, and the API intercept module 755 simply attempts to capture the state of the GPU context at different points during the rendering. In fact, the actual order of execution performed by GPU 306 during each pass of the replay loop can be different. That is, although the API command stream has a constant order, the architecture of GPU 306 can schedule the execution of particular threads in a different order based on various scheduling algorithms; thus, in contrast to a previous iteration of the replay loop, the order of thread execution during a different iteration of the replay loop may not be exactly the same, but the state for the particular thread of interest may be accurately reproduced.
However, by continuing to step through the instructions and monitoring the state of the GPU context during each step, the program appears to be executed sequentially, and the programmer can identify potential bugs in the source code, similar to conventional debugging tools.

It will be appreciated that in some architectures, a particular shader program 335 can be executed in parallel across hundreds or thousands of threads. Thus, a single breakpoint in shader program 335 can be reached by a particular thread group (i.e., warp) during one clock cycle, while the same breakpoint can be reached during one or more additional clock cycles by different thread groups executing instances of the same shader program 335. For example, a fragment shader can be executed one or more times for each pixel in an image, where a 1080p HD image has more than 2 million pixels and GPU 306 can only process a portion of those threads during a given clock cycle. Thus, a single breakpoint in shader program 335 can be hit a number of times by related thread groups. In one embodiment, the API intercept module 755 tracks the number of times a particular breakpoint has been encountered during the current pass of the replay loop. That is, the API intercept module 755 can maintain a counter that indicates how many times a breakpoint has been hit during a particular debugging session. The API intercept module 755 then tracks how many times a particular breakpoint has triggered the error handler during a particular pass of the replay loop. If the breakpoint has not been encountered a threshold number of times during the particular pass of the replay loop, then the API intercept module 755 allows GPU 306 to continue execution. However, if the breakpoint has been encountered the threshold number of times during the particular pass of the replay loop, then the API intercept module 755 causes the current state of the GPU context to be stored to the shadow state memory 336. This type of operation provides the illusion of progress in the execution of shader program 335, even when there is only a single breakpoint included in shader program 335.

That is, stopping at the first hit of a breakpoint in a shader program that is executed in parallel by hundreds or thousands of threads would always stop rendering at a particular point near the beginning of the frame. The illusion of progress is provided by automatically skipping a number of breakpoint hits during each iteration of the replay loop in order to advance to different points in the rendering of the frame.
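The following C sketch captures this "illusion of progress" policy: during each replay pass, breakpoint hits are skipped until a per-pass counter reaches a threshold, and the threshold is raised between passes to advance the stopping point. The names and the simple increment policy are illustrative assumptions.

    #include <stdbool.h>

    static unsigned hits_this_pass;  /* breakpoint hits seen in the current pass */
    static unsigned threshold = 1;   /* which hit to stop on during this pass */

    /* Called at the start of each pass of the replay loop. */
    void begin_replay_pass(void) { hits_this_pass = 0; }

    /* Called between passes to advance to a later point in the frame. */
    void advance_progress(void) { threshold++; }

    /* Called from the error handler: stop (and store shadow state) only
     * when the breakpoint has been hit the threshold number of times. */
    bool should_stop_at_breakpoint(void)
    {
        return ++hits_this_pass == threshold;
    }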
In another embodiment, the API intercept module 755 can be configured to automatically capture frames when a first breakpoint is set in the shader program and to stop automatically capturing frames when all breakpoints are removed from the shader program 335. If the API intercept module 755 has not received an instruction to capture the next frame, then the API intercept module 755 continues to monitor the API command stream generated by the graphics application 334. However, if the API intercept module 755 has received an instruction to capture the next frame, then at step 806 the API intercept module 755 captures the initial state of the API context. In one embodiment, the API intercept module 755 creates a copy of the state model in memory 304 at the start of the next frame.

At step 808, the API intercept module 755 stores the API command stream for the current frame in a replay buffer. The replay buffer is a data structure in memory 304 that maintains an ordered list of API commands for at least one frame. At step 810, the API intercept module 755 can suspend execution of the graphics application 334. It will be appreciated that processes can be suspended in modern operating systems. At step 812, the API intercept module 755 enters the replay loop. Each pass of the replay loop resets the API context to the initial state of the API context captured at step 806 and retransmits the API command stream stored in the replay buffer to the software layer in order to re-render the frame one or more times.

At step 814, the API intercept module 755 resets the state of the API context to the initial state of the API context. In one embodiment, the API intercept module 755 can be configured to generate a new API context that reflects the initial state of the API context stored in memory 304. The new API context can be generated by issuing new API commands that include parameters related to the initial state of the API context. In another embodiment, the API intercept module 755 is configured to generate API commands that modify the current API context to match the initial state of the API context. It will be appreciated that the API intercept module 755 can also be configured to reset the state of objects (e.g., buffers, textures, etc.) in memory 304 based on the initial state of the API context.

At step 816, the API intercept module 755 transmits an API command from the API command stream stored in the replay buffer to the software layer. Again, the software layer can be a driver or a runtime library that implements the API. At step 818, the API intercept module 755 determines whether a breakpoint has been reached. In one embodiment, when a breakpoint is reached, GPU 306 executes an error handler that causes a message to be passed to driver 332. The driver 332 can notify the API intercept module 755 that a breakpoint has caused GPU 306 to stop execution of the graphics program, and the API intercept module 755 can perform various operations related to the breakpoint. If a breakpoint has not been reached, then method 800 returns to step 816, where the next API command in the replay buffer is transmitted to the software layer. However, if a breakpoint has been reached, then at step 820 the API intercept module 755 determines whether execution should continue.
In one embodiment, the API intercept module 755 determines whether the particular breakpoint that triggered the error handler should cause the API intercept module 755 to store the current state of the GPU context to the shadow state memory 336. For example, if the breakpoint has been reached before, and the API intercept module 755 is configured to provide the illusion of progress by waiting until a later thread triggers the breakpoint, then the API intercept module 755 allows the GPU to continue execution and the method returns to step 818 to wait for the next breakpoint. However, if execution should be stopped, then at step 822 the GPU context is stored to the shadow state memory 336. Once the GPU context has been stored to the shadow state memory 336, then at step 824 the GPU can be resumed and allowed to continue execution.

At step 826, the API intercept module 755 transmits the next API command in the replay buffer to the software layer. At step 828, the API intercept module 755 determines whether the end of the frame has been reached. A special API command in the API command stream can indicate that the end of the frame has been reached. If the end of the frame has not been reached, then method 800 returns to step 826 and another API command is transmitted to the software layer. However, if the end of the frame has been reached, then at step 830 the API intercept module 755 determines whether to continue with another pass of the replay loop. The API intercept module 755 can wait for a command from the debugging tool indicating that the programmer wants to perform another pass of the replay loop in order to inspect the GPU context at a different point in the program. If the API intercept module 755 determines that another pass of the replay loop should be performed, then method 800 returns to step 814, where the initial state of the API context is restored. However, if the API intercept module 755 determines that the replay loop should be terminated, then at step 832 the API intercept module 755 cleans up the replay loop. In one embodiment, the API intercept module 755 can free the memory allocated for the shadow state memory 336, the stored initial state of the API context, and the like. After step 832, method 800 terminates.

FIG. 9 illustrates an exemplary system 900 in which the various architecture and/or functionality of the various embodiments above may be implemented. As shown, a system 900 is provided including at least one central processor 901 coupled to a communication bus 902. The communication bus 902 can be implemented using any suitable protocol, such as PCI (Peripheral Component Interconnect), PCI-Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol(s). The system 900 also includes a main memory 904. Control logic (software) and data are stored in the main memory 904, which may take the form of random access memory (RAM).

The system 900 also includes input devices 912, a graphics processor 906, and a display 908, e.g., a conventional CRT (cathode ray tube), LCD (liquid crystal display), LED (light emitting diode) display, plasma display, or the like. User input may be received from the input devices 912, e.g., a keyboard, mouse, touchpad, microphone, and the like. In one embodiment, the graphics processor 906 may include a plurality of shader modules, rasterization modules, and the like. Each of the foregoing modules may even be situated on a single semiconductor platform to form a graphics processing unit (GPU). In the present description, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit or chip.
It should be noted that the term single semiconductor platform may also refer to multi-chip modules with increased connectivity which simulate on-chip operation and make substantial improvements over utilizing a conventional central processing unit (CPU) and bus implementation. The various modules may also be situated separately or in various combinations of semiconductor platforms as desired by the user.

The system 900 may also include a secondary storage 910. The secondary storage 910 includes, for example, a hard disk drive and/or a removable storage drive representing a floppy disk drive, a magnetic tape drive, an optical disk drive, a digital versatile disk (DVD) drive, a recording device, a universal serial bus (USB) flash memory, or the like. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner. Computer programs, or computer control logic algorithms, may be stored in the main memory 904 and/or the secondary storage 910. Such computer programs, when executed, enable the system 900 to perform the respective functions. The main memory 904, the storage 910, and/or any other storage are possible examples of computer-readable media.

In one embodiment, the architecture and/or functionality of the various previous figures may be implemented in the context of the central processor 901, the graphics processor 906, an integrated circuit (not shown) that is capable of at least a portion of the capabilities of both the central processor 901 and the graphics processor 906, a chipset (i.e., a group of integrated circuits designed to work and sold as a unit for performing related functions, etc.), and/or any other integrated circuit for that matter.

In addition, the architecture and/or functionality of the various previous figures may be implemented in the context of a general purpose computer system, a circuit board system, a game console system dedicated to entertainment purposes, an application-specific system, and/or any other desired system. For example, the system 900 can take the form of a desktop computer, laptop, server, workstation, game console, embedded system, and/or any other type of logic. Additionally, the system 900 can take the form of various other devices including, but not limited to, personal digital assistant (PDA) devices, mobile telephone devices, televisions, and the like. Moreover, although not shown, the system 900 can be coupled to a network (e.g., a telecommunications network, a local area network (LAN), a wireless network, a wide area network (WAN) such as the Internet, a peer-to-peer network, a wired network, etc.) for communication purposes.

While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Therefore, the breadth and scope of the preferred embodiments should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. |
Systems, methods, and computer programs are disclosed for selectively compressing/decompressing flash storage data. An embodiment of a system comprises a compression/decompression component, a flash memory device, a flash controller in communication with the flash memory device, and a storage driver in communication with the compression/decompression component and the flash controller. The storage driver is configured to selectively control compression and decompression of data stored in the flash memory device, via the compression/decompression component, according to a storage usage collar comprising an upper usage threshold and a lower usage threshold. |
CLAIMS

What is claimed is:

1. A method for selectively compressing/decompressing flash storage data, the method comprising: defining a storage usage collar associated with a flash memory device, the storage usage collar comprising an upper usage threshold and a lower usage threshold; if the storage usage exceeds the upper usage threshold of the storage usage collar, increasing an amount of free space on the flash memory device by: reading a first portion of uncompressed data from the flash memory device, compressing the first portion of uncompressed data to generate a first portion of compressed data, and rewriting the first portion of compressed data to the flash memory device; and if the storage usage falls below the lower usage threshold of the storage usage collar, decreasing the amount of free space on the flash memory device by: reading a second portion of compressed data from the flash memory device, uncompressing the second portion of compressed data to generate a second portion of uncompressed data, and rewriting the second portion of uncompressed data to the flash memory device.

2. The method of claim 1, wherein the flash memory device comprises NAND flash.

3. The method of claim 1, wherein the compressing and the decompressing are implemented by a processor device in communication with a flash controller.

4. The method of claim 1, wherein the upper usage threshold and the lower usage threshold are adjusted.

5. The method of claim 1, wherein the storage usage is periodically compared to the storage usage collar.

6. The method of claim 1, wherein the first portion of uncompressed data and the second portion of compressed data read from the flash memory device are selected by inspecting a file system directory associated with the flash memory device.

7. The method of claim 6, wherein the inspecting the file system directory involves a background scrubbing process.

8. A system for selectively compressing/decompressing flash storage data, the system comprising: means for defining a storage usage collar associated with a flash memory device, the storage usage collar comprising an upper usage threshold and a lower usage threshold; means for increasing an amount of free space on the flash memory device if the storage usage exceeds the upper usage threshold of the storage usage collar by: reading a first portion of uncompressed data from the flash memory device, compressing the first portion of uncompressed data to generate a first portion of compressed data, and rewriting the first portion of compressed data to the flash memory device; and means for decreasing the amount of free space on the flash memory device if the storage usage falls below the lower usage threshold of the storage usage collar by: reading a second portion of compressed data from the flash memory device, uncompressing the second portion of compressed data to generate a second portion of uncompressed data, and rewriting the second portion of uncompressed data to the flash memory device.

9. The system of claim 8, wherein the flash memory device comprises NAND flash.

10. The system of claim 8, wherein the compressing and the decompressing are implemented by a processor device in communication with a flash controller.

11. The system of claim 8, wherein the upper usage threshold and the lower usage threshold are adjusted.

12. The system of claim 8, further comprising: means for periodically comparing the storage usage to the storage usage collar.

13.
The system of claim 8, further comprising: means for inspecting a file system directory associated with the flash memory device to select the first portion of uncompressed data and the second portion of compressed data to be read from the flash memory device.

14. The system of claim 13, wherein the means for inspecting the file system directory comprises a background scrubbing process.

15. A computer program embodied in a memory and executable by a processor for selectively compressing/decompressing flash storage data, the computer program comprising logic configured to: define a storage usage collar associated with a flash memory device, the storage usage collar comprising an upper usage threshold and a lower usage threshold; if the storage usage exceeds the upper usage threshold of the storage usage collar, increase an amount of free space on the flash memory device by: reading a first portion of uncompressed data from the flash memory device, compressing the first portion of uncompressed data to generate a first portion of compressed data, and rewriting the first portion of compressed data to the flash memory device; and if the storage usage falls below the lower usage threshold of the storage usage collar, decrease the amount of free space on the flash memory device by: reading a second portion of compressed data from the flash memory device, uncompressing the second portion of compressed data to generate a second portion of uncompressed data, and rewriting the second portion of uncompressed data to the flash memory device.

16. The computer program of claim 15, wherein the flash memory device comprises NAND flash.

17. The computer program of claim 15, wherein a processor device in communication with a storage driver executes the compressing and the decompressing.

18. The computer program of claim 15, wherein the upper usage threshold and the lower usage threshold are adjusted.

19. The computer program of claim 15, further comprising logic configured to: periodically compare the storage usage to the storage usage collar.

20. The computer program of claim 15, further comprising logic configured to: inspect a file system directory associated with the flash memory device to determine the first portion of uncompressed data and the second portion of compressed data to be read from the flash memory device.

21. The computer program of claim 20, wherein the logic configured to inspect the file system directory comprises a background scrubbing process.

22. A system for selectively compressing/decompressing flash storage data, the system comprising: a compression/decompression component; a flash memory device; a flash controller in communication with the flash memory device; and a storage driver in communication with the flash controller, the storage driver configured to selectively control compression and decompression of data stored in the flash memory device, via the compression/decompression component, according to a storage usage collar comprising an upper usage threshold and a lower usage threshold.

23. The system of claim 22, wherein the storage driver is further configured to: increase an amount of free space on the flash memory device if the storage usage exceeds the upper usage threshold of the storage usage collar; and decrease the amount of free space on the flash memory device if the storage usage falls below the lower usage threshold of the storage usage collar.

24.
The system of claim 23, wherein the storage driver increases the amount of free space when the storage usage exceeds the upper usage threshold by: reading a first portion of uncompressed data from the flash memory device; compressing the first portion of uncompressed data to generate a first portion of compressed data; and rewriting the first portion of compressed data to the flash memory device.

25. The system of claim 24, wherein the storage driver decreases the amount of free space on the flash memory device if the storage usage falls below the lower usage threshold of the storage usage collar by: reading a second portion of compressed data from the flash memory device; uncompressing the second portion of compressed data to generate a second portion of uncompressed data; and rewriting the second portion of uncompressed data to the flash memory device.

26. The system of claim 22, wherein the flash memory device comprises NAND flash.

27. The system of claim 22, wherein the upper usage threshold and the lower usage threshold are adjusted.

28. The system of claim 22, wherein the storage driver is further configured to periodically compare the storage usage to the storage usage collar.

29. The system of claim 22, wherein the compression/decompression component and the flash controller reside on a system on chip (SoC) electrically coupled to the flash memory device.

30. The system of claim 29, incorporated in a portable computing device comprising one of a smart phone, a tablet computer, and a wearable device. |
SELECTIVE FLASH MEMORY COMPRESSION/DECOMPRESSION USING A STORAGE USAGE COLLAR

DESCRIPTION OF THE RELATED ART

[0001] Non-volatile storage, such as flash storage, is incorporated in various types of computing devices, including portable computing devices (e.g., cellular telephones, smart phones, tablet computers, portable digital assistants (PDAs), portable game consoles, wearable devices, and other battery-powered devices). To address user demands, the capacity of NAND flash storage in portable computing devices continues to increase. However, larger NAND flash storage significantly increases the cost of portable computing devices. A common solution to cost pressure is to implement filesystem compression, which keeps user data as compact as possible. While compression solutions can temporarily extend the limited capacity of NAND flash storage, the process of compressing/decompressing the data negatively impacts performance of the portable computing device and increases power consumption, which undesirably reduces battery life.

[0002] Accordingly, there is a need for improved systems and methods for selectively enabling compression/decompression of flash storage data to increase storage capacity without negatively impacting device performance and user experience.

SUMMARY OF THE DISCLOSURE

[0003] Systems, methods, and computer programs are disclosed for selectively compressing/decompressing flash storage data. An embodiment of a system comprises a compression/decompression component, a flash memory device, a flash controller in communication with the flash memory device, and a storage driver in communication with the compression/decompression component and the flash controller. The storage driver is configured to selectively control compression and decompression of data stored in the flash memory device, via the compression/decompression component, according to a storage usage collar comprising an upper usage threshold and a lower usage threshold.

[0004] Another embodiment is a method for selectively compressing/decompressing flash storage data. The method comprises defining a storage usage collar associated with a flash memory device. The storage usage collar comprises an upper usage threshold and a lower usage threshold. If the storage usage exceeds the upper usage threshold of the storage usage collar, an amount of free space on the flash memory device is increased by: reading a first portion of uncompressed data from the flash memory device, compressing the first portion of uncompressed data to generate a first portion of compressed data, and rewriting the first portion of compressed data to the flash memory device. If the storage usage falls below the lower usage threshold of the storage usage collar, the amount of free space on the flash memory device is decreased by: reading a second portion of compressed data from the flash memory device, uncompressing the second portion of compressed data to generate a second portion of uncompressed data, and rewriting the second portion of uncompressed data to the flash memory device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] In the Figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same Figure.
Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all Figures.

[0006] FIG. 1 is a block diagram of an embodiment of a system for providing selective flash memory compression/decompression using a storage usage collar.

[0007] FIG. 2 is a block diagram illustrating an exemplary embodiment of a storage usage collar for controlling compression/decompression of data in the flash memory device.

[0008] FIG. 3a illustrates an initial control mode of the system in FIG. 1 in which data is written to the flash memory device without compression when current storage usage is below the storage usage collar.

[0009] FIG. 3b illustrates a second control mode of the system in FIG. 1 in which a background scrubbing process is initiated when current storage usage exceeds the lower threshold of the storage usage collar.

[0010] FIG. 3c illustrates a third control mode of the system in FIG. 1 in which data is written to the flash memory device with compression when the current storage usage exceeds the upper threshold of the storage usage collar.

[0011] FIG. 3d illustrates a fourth control mode of the system in FIG. 1 in which previously compressed data is rewritten to the flash memory device as uncompressed data when the current storage usage falls below the lower threshold of the storage usage collar.

[0012] FIG. 4 is a flowchart illustrating an embodiment of a method for providing selective flash memory compression/decompression using the storage usage collar.

[0013] FIG. 5 is a data diagram illustrating exemplary blocks of compressed and uncompressed data in the flash memory device.

[0014] FIG. 6 is a block diagram of an embodiment of a portable computing device for incorporating the system of FIG. 1.

DETAILED DESCRIPTION

[0015] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0016] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

[0017] The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

[0018] As used in this description, the terms "component," "database," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).

[0019] In this description, the terms "communication device," "wireless device," "wireless telephone," "wireless communication device," and "wireless handset" are used interchangeably. With the advent of third generation ("3G") and fourth generation ("4G") wireless technology, greater bandwidth availability has enabled more portable computing devices with a greater variety of wireless capabilities. Therefore, a portable computing device may include a cellular telephone, a pager, a PDA, a smartphone, a navigation device, or a hand-held computer with a wireless connection or link.

[0020] FIG. 1 illustrates a system 100 for selectively compressing/decompressing flash storage data using a storage usage collar. The system 100 comprises a system on chip (SoC) 102 electrically coupled to a flash memory device (e.g., NAND flash 104) and a volatile random access memory (VRAM), such as a dynamic random access memory (DRAM) 106. The SoC 102 may be electrically coupled to the NAND flash 104 via a control bus 126 and a data bus 128. The SoC 102 may be electrically coupled to the DRAM 106 via a bus 130. The system 100 may be implemented in any computing device, including a personal computer, a workstation, a server, a portable computing device (PCD), such as a cellular telephone, a smartphone, a portable digital assistant (PDA), a portable game console, a navigation device, a tablet computer, a wearable device, such as a sports watch, a fitness tracking device, etc., or other battery-powered, web-enabled devices.

[0021] The SoC 102 comprises various on-chip components, including a central processing unit (CPU) 110 that executes an operating system (O/S) 122, a DRAM controller 112, static random access memory (SRAM) 116, read only memory (ROM) 114, a data compression component 118, and a flash controller 108 interconnected via a SoC bus 120. The SoC 102 may include one or more memory clients that request memory resources from the DRAM 106 and the NAND flash 104. The memory clients may comprise one or more processing units (e.g., central processing unit (CPU) 110, a graphics processing unit (GPU), a digital signal processor (DSP), etc.), a video encoder, or other clients requesting read/write access to the NAND flash 104 and the DRAM 106.

[0022] In the embodiment illustrated in FIG. 1, the NAND flash 104 is separate from the SoC 102, although in other embodiments the NAND flash 104 may be physically attached or stacked onto the SoC die and reside in the same physical package as the SoC die. As known in the art, the NAND flash 104 may include a controller and a main array for storing physical pages. The CPU 110 residing on the SoC 102 reads and/or writes data in units of logical pages to the NAND flash 104 via the flash controller 108. The data is stored and retrieved from the physical pages of the main array, along with error correction bit(s) generated/checked by an error correcting code (ECC) module located either within the flash device 104 or in the SoC 102.
[0023] As further illustrated in FIG. 1, software running on CPU 110 comprises various components for selectively enabling compression/decompression of the data stored in the NAND flash 104. It should be appreciated that selective flash memory compression/decompression provides the ability to increase the storage capacity of the NAND flash 104 without negatively impacting device performance and user experience. The CPU 110 is operatively coupled to the data compression component 118. In this manner, the software on CPU 110 controls whether data written to the NAND flash is to be compressed by the data compression component 118 or remain uncompressed. In an embodiment, the data compression component 118 comprises a separate hardware unit for executing the data compression and decompression. In another embodiment, the CPU 110 may execute the data compression and decompression.

[0024] In the embodiment of FIG. 1, the CPU 110 provides selective flash memory compression/decompression via a storage usage monitor 132, a selective compression/decompression component 134, a storage usage collar 136, and a file system/storage driver 124. The storage usage collar 136 defines a lower usage threshold and an upper usage threshold associated with the capacity of the NAND flash 104. FIG. 2 illustrates an exemplary embodiment of the storage usage collar 136. The storage usage of the NAND flash 104 may be represented as a percentage along the y-axis from zero to full capacity (100%). A lower usage threshold 206 and an upper usage threshold 204 define a collar 202. The percentage values for the lower and upper usage thresholds 206 and 204 may be predefined, calculated, or programmed. It should be appreciated that the threshold values 206 and 204 are determined to provide an optimal window for extending the storage capacity of the NAND flash 104 (by writing compressed data) while minimizing power consumption and latency due to overuse of data compression. As described below in more detail, the file system/storage driver 124 running on CPU 110 selectively controls compression and decompression in such a way as to generally maintain the storage usage within the collar 202.

[0025] The usage monitor 132 comprises the logic for monitoring the storage usage of the NAND flash 104 during operation of the system 100. The usage monitor 132 may be either a low priority task running on the O/S 122 or a hardware block with the monitoring functionality. The usage monitor 132 compares a current storage usage percentage against the lower usage threshold 206 and the upper usage threshold 204. Based on the periodic comparison, the usage monitor 132 keeps track of when the current storage usage of the NAND flash 104 is within range 208 (i.e., below the lower usage threshold 206), within range 210 (i.e., the collar 202 between the lower usage threshold 206 and the upper usage threshold 204), or within range 212 (i.e., above the upper usage threshold 204). In this regard, the selective compression/decompression component 134 may select various control modes depending on the current storage usage determined by the usage monitor 132.

[0026] FIGS. 3a - 3d illustrate four exemplary control modes. FIG. 3a illustrates an initial control mode of the system 100 that is active when the current storage usage is initially below the lower usage threshold 206.
In the initial control mode, the file system/storage driver 124 may initially write data to the NAND flash 104 without compression to avoid the latency and power consumption associated with performing the compression algorithms. As uncompressed data is written to the NAND flash 104, the storage usage may exceed the lower usage threshold 206 (FIG. 3b). When the lower usage threshold 206 is initially exceeded, the file system/storage driver 124 may initiate a background (i.e., low priority) scrubbing process to be carried out by the selective compression/decompression component 134. In one embodiment, the scrubbing process piecewise traverses the nodes in the NAND file system directory to determine files that are candidates for compression. A flag is introduced to the file system mechanism to indicate whether a file has already been stored in a compressed format. The scrubbing process may determine candidate files for compression based on, for example, a file type, a file "modified date/time", a file size, etc. For example, certain file types may be specified as "compressible" while other file types may be specified as "non-compressible". The "modified date/time" may indicate files that are not in use or have not been recently accessed, which may be candidates for compression. Furthermore, files with a large size may benefit more from compression. In another embodiment, the scrubbing process piecewise traverses the blocks in the storage. A header may be used for the data that is written to a block in a compressed format. An exemplary header format (FIG. 5) is described below in more detail.

[0027] As illustrated in FIG. 3c, as files continue to be written to NAND flash 104 without compression, the current storage usage may exceed the upper usage threshold 204. When the current storage usage exceeds the upper usage threshold 204, the file system/flash driver 124 may determine that the amount of free space on the NAND flash 104 should be increased, so that the storage usage is maintained in the collar 202. To increase the amount of free space, the file system/flash driver 124 may invoke the selective compression/decompression component 134 to select one or more files identified as compression candidates by the background scrubbing process. The uncompressed data in the candidate file(s) is read from the NAND flash 104, compressed by the data compression component 118, and rewritten to the NAND flash 104 to generate free space.

[0028] As illustrated in FIG. 3d, if the current storage usage falls below the lower usage threshold 206 (e.g., as a result of files being deleted), the file system/flash driver 124 may determine that the amount of free space on the NAND flash 104 may be decreased. To decrease the amount of free space, the file system/flash driver 124 may invoke the selective compression/decompression component 134 to select one or more compressed files to be uncompressed. The compressed data is read from the NAND flash 104, uncompressed by the data compression component 118, and rewritten to the NAND flash 104. It should be appreciated that the selection of file(s) for decompression may take into consideration a "modified date/time" and a file size so as to favor the more frequently used files for decompression.
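The collar-driven behavior of FIGS. 3a-3d can be summarized as a small mode-selection routine. The following C sketch is purely illustrative and not part of the disclosure; the type names, the percentage representation, and the use of the previous mode to distinguish the initial mode (FIG. 3a) from the decompress-to-fill mode (FIG. 3d) are assumptions made for clarity.

```c
/* Hypothetical sketch of the storage-usage-collar control logic;
 * threshold values and all names are illustrative, not from the source. */
typedef struct {
    unsigned lower_pct;  /* lower usage threshold 206, e.g. 70 */
    unsigned upper_pct;  /* upper usage threshold 204, e.g. 85 */
} storage_collar_t;

typedef enum {
    MODE_WRITE_UNCOMPRESSED,  /* FIG. 3a: usage initially below the collar */
    MODE_BACKGROUND_SCRUB,    /* FIG. 3b: usage entered the collar         */
    MODE_COMPRESS_TO_FREE,    /* FIG. 3c: usage above the upper threshold  */
    MODE_DECOMPRESS_TO_FILL   /* FIG. 3d: usage fell back below the collar */
} control_mode_t;

/* Periodic check performed by the usage monitor (132). */
control_mode_t select_control_mode(const storage_collar_t *c,
                                   unsigned usage_pct,
                                   control_mode_t prev)
{
    if (usage_pct > c->upper_pct)
        return MODE_COMPRESS_TO_FREE;           /* range 212 */
    if (usage_pct < c->lower_pct)
        return (prev == MODE_WRITE_UNCOMPRESSED)
               ? MODE_WRITE_UNCOMPRESSED        /* still in the initial mode, range 208 */
               : MODE_DECOMPRESS_TO_FILL;       /* dropped back below the collar */
    return MODE_BACKGROUND_SCRUB;               /* inside collar 202, range 210 */
}
```

In such a sketch, the driver would call select_control_mode() from the periodic monitoring task and act on the returned mode, for example launching the background scrubber the first time the collar is entered.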
[0029] FIG. 4 is a flowchart illustrating an embodiment of a method 400 for providing selective flash memory compression/decompression using the storage usage collar 136. At block 402, a storage usage collar 136 associated with the file system/storage driver is determined. The values for the lower usage threshold 206 and the upper usage threshold 204 may be predetermined and stored in memory, either in the flash controller 108 or otherwise. It should be appreciated that these values may also be calculated during operation of the system 100 based on varying conditions, use cases, etc. The lower usage threshold 206 and/or the upper usage threshold 204 may be individually or collectively adjusted to manage the inherent tradeoffs between available storage capacity, compression and decompression latency, and user experience.

[0030] The usage monitor 132 periodically checks the storage usage in the NAND flash 104 and compares it against the lower usage threshold 206 and the upper usage threshold 204. If the current storage usage exceeds the upper usage threshold 204 (decision block 404), the flash controller 108 increases an amount of free space on the NAND flash 104 (block 406). The file system/storage driver 124 may control the flash controller 108 to read a first portion of uncompressed data stored in the NAND flash 104. The first portion of uncompressed data may be compressed by the data compression component 118 to generate a first portion of compressed data. The first portion of compressed data is rewritten to NAND flash 104. A timer (block 408) may be used to periodically check the storage usage and return flow to decision block 404.

[0031] Referring to decision block 404, if the current storage usage does not exceed the upper usage threshold 204, the file system/storage driver 124 may determine (decision block 410) whether the current storage usage has fallen below the lower usage threshold 206. If the current storage usage is below the lower usage threshold 206, the flash controller 108 may decrease the amount of free space on the NAND flash 104 (block 412). The flash controller 108 may read a second portion of compressed data from the NAND flash 104. The second portion of compressed data may be uncompressed to generate a second portion of uncompressed data. The second portion of uncompressed data may be rewritten to the NAND flash 104. The timer (block 408) may be used to periodically check the storage usage and return flow to decision block 404.

[0032] FIG. 5 is a data diagram 500 illustrating exemplary blocks of compressed and uncompressed data in the NAND flash 104. Block 504 comprises uncompressed data 506. Block 502 shows an exemplary implementation for compressing data. After compression, the block 502 comprises compressed data 508, leaving free space 512. The compressed data 508 may include compression metadata (e.g., a compression flag and checksum 510). When selecting blocks of data for compression or decompression, the selective compression/decompression component 134 may check for a predetermined compression flag in the header. If the flag is not present in the header position (block 506), the data is not stored in a compressed format, and this block may be indicated as a potential target for compression. If the flag is present in the header position, then the selective compression/decompression component 134 may further compute a checksum of the data in the block. If the computed checksum matches the checksum in the header, then the selective compression/decompression component 134 may determine that the data is stored in a compressed format, and that this block is a potential target for decompression.
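The flag-and-checksum classification of FIG. 5 can be sketched in a few lines of C. This fragment is hypothetical: the patent does not define a concrete header layout, so the field names, magic value, and checksum function below are invented for illustration.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative header layout for a compressed block (FIG. 5); the field
 * names and the magic value are assumptions, not defined by the source. */
#define COMPRESSION_FLAG 0xC0DEC0DEu

typedef struct {
    uint32_t flag;      /* compression flag            */
    uint32_t checksum;  /* checksum of compressed data */
} block_header_t;

static uint32_t checksum32(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    while (n--)
        sum = (sum << 1 | sum >> 31) ^ *p++;  /* simple rotating XOR */
    return sum;
}

/* Returns 1 if the block looks compressed (decompression candidate),
 * 0 if it looks uncompressed (compression candidate). */
int block_is_compressed(const uint8_t *block, size_t len)
{
    const block_header_t *h = (const block_header_t *)block;
    if (len < sizeof *h || h->flag != COMPRESSION_FLAG)
        return 0;  /* no flag in the header position: stored uncompressed */
    /* Flag present: confirm by recomputing the checksum over the payload. */
    return checksum32(block + sizeof *h, len - sizeof *h) == h->checksum;
}
```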
[0033] As mentioned above, the system 100 may be incorporated into any desirable computing system. FIG. 6 illustrates the system 100 incorporated in an exemplary portable computing device (PCD) 600. It will be readily appreciated that certain components of the system 100 may be included on the SoC 322 (e.g., data compression component 118 and flash controller 108) while other components (e.g., the DRAM 106, the NAND flash 104) may be external components coupled to the SoC 322. The SoC 322 may include a multicore CPU 602. The multicore CPU 602 may include a zeroth core 610, a first core 612, and an Nth core 614. One of the cores may comprise, for example, a graphics processing unit (GPU) with one or more of the others comprising the CPU.

[0034] A display controller 328 and a touch screen controller 330 may be coupled to the CPU 602. In turn, the touch screen display 606 external to the on-chip system 322 may be coupled to the display controller 328 and the touch screen controller 330.

[0035] FIG. 6 further shows that a video encoder 334, e.g., a phase alternating line (PAL) encoder, a sequential color a memoire (SECAM) encoder, or a national television system(s) committee (NTSC) encoder, is coupled to the multicore CPU 602. Further, a video amplifier 336 is coupled to the video encoder 334 and the touch screen display 606. Also, a video port 338 is coupled to the video amplifier 336. As shown in FIG. 6, a universal serial bus (USB) controller 340 is coupled to the multicore CPU 602. Also, a USB port 342 is coupled to the USB controller 340.

[0036] Further, as shown in FIG. 6, a digital camera 348 may be coupled to the multicore CPU 602. In an exemplary aspect, the digital camera 348 is a charge-coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera.

[0037] As further illustrated in FIG. 6, a stereo audio coder-decoder (CODEC) 350 may be coupled to the multicore CPU 602. Moreover, an audio amplifier 352 may be coupled to the stereo audio CODEC 350. In an exemplary aspect, a first stereo speaker 354 and a second stereo speaker 356 are coupled to the audio amplifier 352. FIG. 6 shows that a microphone amplifier 358 may also be coupled to the stereo audio CODEC 350. Additionally, a microphone 360 may be coupled to the microphone amplifier 358. In a particular aspect, a frequency modulation (FM) radio tuner 362 may be coupled to the stereo audio CODEC 350. Also, an FM antenna 364 is coupled to the FM radio tuner 362. Further, stereo headphones 366 may be coupled to the stereo audio CODEC 350.

[0038] FIG. 6 further illustrates that a radio frequency (RF) transceiver 368 may be coupled to the multicore CPU 602. An RF switch 370 may be coupled to the RF transceiver 368 and an RF antenna 372. A keypad 374 may be coupled to the multicore CPU 602. Also, a mono headset with a microphone 376 may be coupled to the multicore CPU 602. Further, a vibrator device 378 may be coupled to the multicore CPU 602.

[0039] FIG. 6 also shows that a power supply 380 may be coupled to the on-chip system 322. In a particular aspect, the power supply 380 is a direct current (DC) power supply that provides power to the various components of the PCD 600 that require power. Further, in a particular aspect, the power supply is a rechargeable DC battery or a DC power supply that is derived from an alternating current (AC) to DC transformer that is connected to an AC power source.

[0040] FIG. 6 further indicates that the PCD 600 may also include a network card 388 that may be used to access a data network, e.g., a local area network, a personal area network, or any other network.
The network card 388 may be a Bluetooth network card, a WiFi network card, a personal area network (PAN) card, a personal area network ultra-low-power technology (PeANUT) network card, a television/cable/satellite tuner, or any other network card well known in the art. Further, the network card 388 may be incorporated into a chip, i.e., the network card 388 may be a full solution in a chip and may not be a separate network card 388.

[0041] As depicted in FIG. 6, the touch screen display 606, the video port 338, the USB port 342, the camera 348, the first stereo speaker 354, the second stereo speaker 356, the microphone 360, the FM antenna 364, the stereo headphones 366, the RF switch 370, the RF antenna 372, the keypad 374, the mono headset 376, the vibrator 378, and the power supply 380 may be external to the on-chip system 322.

[0042] It should be appreciated that one or more of the method steps described herein may be stored in the memory as computer program instructions, such as the modules described above. These instructions may be executed by any suitable processor in combination or in concert with the corresponding module to perform the methods described herein.

[0043] Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel with (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "then", "next", etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method.

[0044] Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example.

[0045] Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the Figures, which may illustrate various process flows.

[0046] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer.
By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, NAND flash, NOR flash, M-RAM, P-RAM, R-RAM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.

[0047] Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.

[0048] Disk and disc, as used herein, includes compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0049] Alternative embodiments will become apparent to one of ordinary skill in the art to which the invention pertains without departing from its spirit and scope. Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims. |
A method of an aspect is performed by a processor. The method includes receiving a partial width load instruction. The partial width load instruction indicates a memory location of a memory as a source operand and indicates a register as a destination operand. The method includes loading data from the indicated memory location to the processor in response to the partial width load instruction. The method includes writing at least a portion of the loaded data to a partial width of the register in response to the partial width load instruction. The method includes finishing writing the register with a set of bits stored in a remaining width of the register that have bit values that depend on a partial width load mode of the processor. The partial width load instruction does not indicate the partial width load mode. Other methods, processors, and systems are also disclosed. |
1. A processor, comprising:
a register having a width;
a decoding unit to receive a partial width load instruction that indicates a memory location of a memory as a source operand and indicates the register as a destination operand;
a memory subsystem coupled to the decoding unit to load data from the indicated memory location to the processor in response to the partial width load instruction; and
a register write unit coupled to the memory subsystem and the register, the register write unit, in response to the partial width load instruction, to write at least a portion of the loaded data to a partial width of the register, and to complete the write to the register using a bit group, stored in the remaining width of the register, with bit values that depend on a partial width load mode of the processor, wherein the partial width load instruction does not indicate the partial width load mode.

2. The processor of claim 1, wherein the register write unit is configured to: in a first partial width load mode, write all zeros as the bit group to the remaining width of the register; and in a second partial width load mode, not write all zeros as the bit group to the remaining width of the register.

3. The processor of claim 2, wherein the register write unit is configured to: in the second partial width load mode, write sign extension bits as the bit group to the remaining width of the register.

4. The processor of claim 2, wherein the register write unit is configured to: in the second partial width load mode, complete the write to the register using, as the bit group, the bits that were originally stored in the remaining width of the register before the partial width load instruction was received by the decoding unit.

5. The processor of claim 1, further comprising at least one bit to indicate the partial width load mode, wherein the processor is to access the at least one bit to determine the partial width load mode and thereby select a corresponding method of determining the bit values of the bit group.

6. The processor of claim 5, wherein the at least one bit is in a register of the processor and is visible to an application.

7. The processor of claim 1, wherein the width of the register is at least as wide as a memory address used by the processor to access memory, and wherein the partial width of the register is only a portion of the width of the register.

8. The processor of claim 7, wherein one of the following applies:
the processor is a 64-bit architecture processor using 64-bit memory addresses, and the partial width of the register is 32 bits wide; or
the processor is a 128-bit architecture processor using 128-bit memory addresses, and the partial width of the register is 64 bits wide.

9. The processor of any one of claims 1-8, wherein the processor allows user-level applications to change the partial width load mode.

10. The processor of any one of claims 1-8, wherein the processor allows at least one of an operating system and a virtual machine monitor to change the partial width load mode, but does not allow user-level applications to change the partial width load mode.

11. The processor of any one of claims 1-8, having an instruction set including an instruction for changing the partial width load mode.
12. The processor of claim 1, wherein the register write unit is configured to, in the partial width load mode, write all zeros as the bit group to the remaining width of the register, and the processor further includes:
means of the processor for changing the partial width load mode to a second partial width load mode upon one of an interrupt and a transition from an application to an operating system.

13. A method executed by a processor, comprising:
receiving a partial width load instruction indicating a memory location of memory as a source operand and a register as a destination operand;
loading data from the indicated memory location to the processor in response to the partial width load instruction;
writing at least a portion of the loaded data to a partial width of the register in response to the partial width load instruction; and
completing the write to the register using a bit group stored in the remaining width of the register with bit values that depend on a partial width load mode of the processor, the partial width load instruction not indicating the partial width load mode.

14. The method of claim 13, wherein completing the write to the register includes, in the partial width load mode, writing zeros as the bit group to the remaining width of the register, and wherein, in a second, different partial width load mode, sign extension bits are written to the remaining width of the register instead of the zeros.

15. The method of claim 13, wherein completing the write to the register includes, in the partial width load mode, writing zeros as the bit group to the remaining width of the register, and wherein, in a second, different partial width load mode, a group of bits originally located in the remaining width is kept stored in the remaining width of the register.

16. The method of claim 13, further comprising:
accessing at least one bit to determine the partial width load mode; and
selecting a method, corresponding to the partial width load mode, of determining the bit values of the bit group.

17. The method of claim 16, wherein accessing the at least one bit includes accessing at least one bit in an application-visible register of the processor.

18. The method of claim 13, wherein writing to the partial width of the register includes writing to only a subset of the width of the register, and wherein the register is at least as wide as the memory address used by the processor to access the memory.

19. The method of claim 18, wherein writing to the partial width of the register includes one of:
writing to 32 bits of the register, with zeros in the remainder of the register, wherein the memory address used by the processor to access the memory is 64 bits; and
writing to 64 bits of the register, with zeros in the remainder of the register, wherein the memory address used by the processor to access the memory is 128 bits.

20. The method of claim 13, further comprising:
receiving a control signal from a user-level application to change the partial width load mode; and
changing the partial width load mode after receiving the control signal.

21. The method of claim 13, further comprising:
receiving a control signal from one of an operating system and a virtual machine monitor to change the partial width load mode;
changing the partial width load mode after receiving the control signal from the one of the operating system and the virtual machine monitor; and
preventing user-level applications from changing the partial width load mode.
22. A processing module, comprising:
means for checking metadata of a software module, including checking an indication of a partial width load mode of a processor that is to be used by the processor to execute a partial width load instruction indicating a memory location as a source operand and a register as a destination operand; and
means for changing the partial width load mode of the processor to the indicated partial width load mode, wherein the partial width load mode is changed to control the bit values to be stored by the processor in the portion of the indicated register that is not used to store the partial width data loaded from memory.

23. The processing module of claim 22, further comprising:
means for maintaining metadata indicating which different software modules will use which different partial width load modes, including which software modules will use the partial width load mode; and
means for changing the partial width load mode of the processor to the indicated partial width load mode in conjunction with transitioning back to execution of code from the software module after handling an interrupt.

24. A processor comprising means for performing the method of any one of claims 13-21.

25. A machine-readable storage medium storing instructions that, if executed by a machine, cause the machine to perform the method of any one of claims 13-21. |
Processors, methods, and systems for mode-dependent partial-width loading into wider registers

This application is a divisional application of the invention patent application entitled "Mode-Dependent Partial Width Loading to Wider Registers: Processors, Methods, and Systems," which has an international filing date of June 19, 2014, international application number PCT/US2014/043159, and Chinese national phase application number 201480030091.1.

BACKGROUND

Technical Field

Various embodiments described herein generally relate to processors. In particular, embodiments described herein generally relate to loading data from memory in a processor.

Background Art

A processor's instruction set typically includes various different types of instructions that the processor can decode and execute. For example, these instruction sets may include various arithmetic instructions, various logic instructions, various load instructions for loading data from memory into the processor, and so on.

One challenge is that the number of instructions that can be included in an instruction set is generally limited. Each instruction in the instruction set can include an operation code, or opcode. An opcode represents the portion of an instruction that specifies the particular instruction and/or operation to be performed. For example, a given instruction that loads data from memory may have a given unique opcode to distinguish the instruction from other types of instructions and allow the processor to recognize the instruction. An opcode may occupy a given number of bits in one or more fields or locations of the instruction format. Generally, it is desirable to keep the number of opcode bits relatively small while providing the desired number of instructions/operations. Long opcodes tend to increase the size and/or complexity of the decoder. Additionally, long opcodes tend to increase the total instruction length, which causes the instructions to use more program space and occupy more space in the cache. The number of different instructions that can be uniquely identified with a given opcode length and/or instruction length is often more limited than desired. It is generally not possible to continually add instructions to the instruction set without eventually exhausting the available opcodes or, in a variable instruction length architecture, increasing the length of the instructions.

In some cases, different instructions and/or operations may have the same opcode (or the same portion of an opcode) but include one or more additional bits used to distinguish between the different versions of the instruction and/or operation. A potential disadvantage of this approach is that it may tend to increase the instruction length, or in some cases there may not be space available within the instruction length to accommodate the additional bits used to differentiate between the different versions of the instructions/operations.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention may best be understood by referring to the following description and the accompanying drawings that are used to illustrate embodiments. In the drawings:
FIG. 1 is a block diagram of an embodiment of a computer system having a processor for executing partial-width load instructions.

FIG. 2 is a block diagram of a first example embodiment of a partial-width load operation that may be performed according to a sign-extended partial-width load mode or a zero-extended partial-width load mode.

FIG. 3 is a block diagram of a second example embodiment of a partial-width load operation that may be performed according to a merged partial-width load mode or a zero-extended partial-width load mode.

FIG. 4 is a block flow diagram of an embodiment of a method that may be performed by a processor when processing an embodiment of a partial-width load instruction.

FIG. 5 is a block diagram of an embodiment of a computer system including a 64-bit architecture processor for performing zero-extended 32-bit memory addressing.

FIG. 6 is a block diagram of an embodiment of interaction between user-level software modules and operating system modules to change partial-width load modes.

FIG. 7 is a block diagram of an embodiment of a method that may be performed by an operating system module, a privileged module, or other system-level module.

FIG. 8A is a block diagram illustrating an exemplary in-order pipeline and an exemplary register renaming out-of-order issue/execution pipeline in accordance with embodiments of the invention.

FIG. 8B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming out-of-order issue/execution architecture core to be included in a processor in accordance with embodiments of the invention.

FIG. 9A is a block diagram of a single processor core, along with its connection to the on-die interconnect network and a local subset of its level 2 (L2) cache, in accordance with various embodiments of the present invention.

FIG. 9B is an expanded view of a portion of the processor core in FIG. 9A, in accordance with various embodiments of the present invention.

FIG. 10 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics, in accordance with various embodiments of the invention.

FIG. 11 shows a block diagram of a system according to one embodiment of the invention.

FIG. 12 shows a block diagram of a first more specific exemplary system according to an embodiment of the present invention.

FIG. 13 is a block diagram of a second more specific exemplary system according to an embodiment of the present invention.

FIG. 14 shows a block diagram of an SoC according to an embodiment of the invention.

FIG. 15 is a block diagram illustrating the use of a software instruction converter to convert binary instructions in a source instruction set into binary instructions in a target instruction set, in accordance with various embodiments of the present invention.

DETAILED DESCRIPTION

In some embodiments, a processor may have two or more modes for processing a given instruction and/or a given opcode in two or more different corresponding ways, e.g., two or more modes for handling, in two or more different corresponding ways, instructions that perform partial-width loads from memory. In the following description, numerous specific details are set forth (e.g., specific instruction operations, processor configurations, microarchitectural details, sequences of operations, etc.). However, embodiments may be practiced without these specific details.
In other instances, well-known circuits, structures, and techniques have not been shown in detail in order to avoid obscuring the understanding of this description.

FIG. 1 is a block diagram of an embodiment of a computer system 100. In various embodiments, the computer system may represent a desktop computer, a laptop computer, a notebook computer, a tablet computer, a netbook, a smartphone, a personal digital assistant, a cellular phone, a server, a network device (e.g., a router or switch), a mobile Internet device (MID), a media player, a smart TV, a set-top box, a video game controller, or another type of electronic device.

The computer system includes an embodiment of processor 101. In some embodiments, the processor may be a general-purpose processor. For example, the processor may be a general-purpose processor of the type commonly used as a central processing unit (CPU). In other embodiments, the processor may be a special-purpose processor. Examples of suitable special-purpose processors include, but are not limited to, coprocessors, graphics processors, communications processors, network processors, cryptographic processors, embedded processors, and digital signal processors (DSPs), to name just a few. The processor may be any of various complex instruction set computing (CISC) processors, various reduced instruction set computing (RISC) processors, various very long instruction word (VLIW) processors, a hybrid thereof, or another type of processor entirely.

The computer system also includes an embodiment of memory 110. The memory is coupled to the processor 101 through a coupling or interconnection mechanism 109. Examples of suitable coupling or interconnection mechanisms include, but are not limited to, one or more interconnects, buses, hubs, memory controllers, chipsets, chipset components, etc., and various combinations of the foregoing. The memory may include one or more memory devices of the same or different types. One common type of memory suitable for various embodiments is dynamic random access memory (DRAM), but other types of memory (e.g., flash memory) may alternatively be used. The memory may have software stored therein, such as one or more operating systems (OS) and one or more applications (not shown). During operation, instructions of the software may be provided to the processor and executed on the processor.

As shown, a partial width load instruction 102 may be provided to the processor 101. For example, a fetch unit (not shown) may fetch the partial width load instruction. The partial width load instruction may represent a machine code instruction, an assembly language instruction, a macro instruction, or a control signal of the device's ISA. The partial width load instruction may explicitly specify (e.g., via one or more fields or groups of bits) memory location 111 in memory 110 as a source operand, or may otherwise indicate it (e.g., implicitly) as the source operand, and may specify or otherwise indicate destination register 115 as the destination operand that is to store the partial width data 112 loaded from memory. The term "partial width data" is used herein to refer to data that fills, or has, only a partial width of the destination register 115 in which it is to be stored.

The processor includes a decoding unit 103. The decoding unit may also be called a decoder or decode logic. The decoding unit can receive the partial width load instruction.
The decoding unit is operable to decode relatively higher-level instructions (e.g., machine code instructions, assembly language instructions, macro instructions, etc.) and output one or more corresponding relatively lower-level instructions or control signals (e.g., one or more microinstructions, micro-ops, microcode entry points, etc.). The lower-level instructions or control signals may reflect, represent, and/or be derived from the relatively higher-level instructions, and may implement the higher-level instructions through lower-level (e.g., circuit-level or hardware-level) operations. The decoding unit may be implemented using a variety of different mechanisms including, but not limited to, microcode read-only memories (ROMs), lookup tables, hardware implementations, programmable logic arrays (PLAs), other mechanisms known in the art for implementing decoding units, and combinations of the above.

In other embodiments, an instruction emulator, translator, morpher, interpreter, or converter may be used instead of or in addition to the decoding unit 103. An instruction converter may emulate, translate, morph, interpret, or otherwise convert an instruction into one or more corresponding derived instructions or control signals. Various types of such instruction converters are known in the art and may be implemented in hardware, firmware, software, or a combination thereof. In some embodiments, the instruction converter may convert a received higher-level instruction into one or more mid-level instructions, and the decoding unit may decode the one or more mid-level instructions into one or more lower-level instructions or control signals executable by the native logic of the processor.

As shown, the decoding unit is part of a mode-dependent partial width loading system 113 of the processor. In some embodiments, two or more modes may be used to process partial width load instructions and/or their opcodes in two or more different corresponding ways. Advantageously, this may help to allow different operations to be performed without the need for additional opcodes, which, as discussed in the Background section, could otherwise be challenging in some cases. The decoding unit may be coupled to, or otherwise in communication with, the partial width load mode 105. In some embodiments, the partial width load mode may include one or more bits to indicate one of a number of different partial width load modes. In one aspect, a single bit may be used to indicate either of two different partial width load modes. In another aspect, two or more bits may be used to indicate any of two, at least three, at least four, or optionally more than four different partial width load modes. In some embodiments, this one or more bits may be stored in a register (e.g., a control register, status register, or configuration register) 106 or another on-die storage location. Alternatively, the partial width load mode may optionally be included in one or more separate or isolated bits that are not necessarily in a control register, status register, or configuration register. As will be explained further below, software modules (e.g., user-level application modules, operating system modules, virtual machine monitor modules, privileged software modules, etc.)
may change the partial width load mode 105 (e.g., by changing the one or more bits) to a partial width load mode that is appropriate, expected, or intended for the partial width load instruction 102 (e.g., for the software utilizing the partial width load instruction).

The decoding unit and/or the processor may access the partial width load mode to determine the current partial width load mode to use, and the decoding unit may decode the partial width load instruction based on the determined partial width load mode. In some embodiments, the partial width load mode may be used to provide different meanings, interpretations, and/or operations for the partial width load instruction and/or its opcode. In some embodiments, the decoding unit may include partial width load mode-dependent decode logic 104 to decode the partial width load instruction in a manner that depends on, is based on, and/or is consistent with the partial width load mode currently associated with the instruction (e.g., the mode at the time the instruction is decoded). For example, a first set of one or more instructions or control signals may be decoded from the partial width load instruction in a first partial width load mode, and a different second set of one or more instructions or control signals may be decoded from the same partial width load instruction (e.g., from the same opcode) in a different second partial width load mode. Mode-dependent instructions or control signals 107 may be output from the decoding unit consistent with the current mode 105.

In some embodiments, the partial width load instruction may not specify or otherwise indicate the partial width load mode 105 (e.g., there may be no instruction bits to select or distinguish between the multiple different versions of the instruction/operation or to otherwise indicate the mode). This generally helps avoid increasing the instruction length, can potentially allow the technique to be used where the instruction length does not permit such additional bits, can potentially reduce or limit the number of bits to be decoded, or can provide other potential advantages. In some embodiments, it may not be possible to discern which partial width load mode to use by examining any or all bits of the partial width load instruction.

Referring again to FIG. 1, memory subsystem 108 is coupled to the decoding unit 103 and to the memory 110. A variety of conventional memory subsystems known in the art are suitable. In response to and/or due to the partial width load instruction 102 (e.g., in response to one or more decoded instructions or control signals corresponding to the current partial width load mode 105), the memory subsystem is operable to load data from the indicated memory location 111 into the processor. As previously described, the partial width load instruction may specify or otherwise indicate the memory location 111 having the partial width data 112. The partial width load instruction may also specify or otherwise indicate the destination register 115 in which the partial width data is to be stored.

An embodiment of a register write unit 114 that depends on the partial width load mode is coupled to the decoding unit 103, to the memory subsystem 108, and to the destination register 115. For simplicity, a register write unit that is dependent on the partial width load mode may also be referred to simply as a mode-dependent register write unit or a register write unit.
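Before turning to the write unit's behavior, it may help to picture how the partial width load mode 105 could be encoded. The following C sketch is an assumption made purely for illustration; the disclosure does not specify bit positions, field widths, or mode encodings for register 106.

```c
#include <stdint.h>

/* Hypothetical encoding of the partial-width load mode (105) as a two-bit
 * field in a control/configuration register (106); the bit position and
 * the mode values are illustrative assumptions only. */
enum pw_load_mode {
    PW_ZERO_EXTEND = 0,
    PW_SIGN_EXTEND = 1,
    PW_MERGE       = 2
};

#define PW_MODE_SHIFT 3
#define PW_MODE_MASK  0x3u

/* The decoder and write unit consult the current mode; the partial-width
 * load instruction itself carries no mode bits. */
static inline enum pw_load_mode current_pw_mode(uint64_t control_reg)
{
    return (enum pw_load_mode)((control_reg >> PW_MODE_SHIFT) & PW_MODE_MASK);
}
```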
In response to and/or due to the partial width load instruction 102 (e.g., in response to one or more decoded instructions or control signals 107 corresponding to the current partial width load mode 105), the mode-dependent register write unit is operable to write to or otherwise access register 115 such that a result that depends on and/or is consistent with the partial width load mode is stored therein. In certain embodiments, the mode-dependent register write unit is operable to write at least a portion of the loaded partial width data 112 to a partial width of the register, and is operable to finish or complete the write to the register with a group of bits 117, 118, stored in the remaining width of the register, having bit values that depend on and/or are consistent with the corresponding partial width load mode. The mode-dependent register write unit and/or the processor may include specific or particular logic (e.g., circuitry or other hardware, potentially combined with one or more of firmware and/or software) that is responsive to the partial width load instruction.

Various combinations of different types of partial width load modes are suitable for different embodiments. Examples include, but are not limited to, a sign-extended partial width load mode, a zero-extended partial width load mode, a merged partial width load mode, and other partial width load modes known in the art. In the sign-extended partial width load mode, the sign bit of the partial width data (e.g., the most significant bit, having a value of binary zero or binary one) may be copied, extended, or otherwise stored or written into the remaining width of the register that is not occupied by the partial width data. In the zero-extended partial width load mode, binary zeros may be inserted, extended, or otherwise stored or written into the remaining width of the register that is not occupied by the partial width data. In the merged partial width load mode, the partial width data is merged, when the register is written, with the original or pre-existing bits or data already stored in the register. This original or pre-existing data need not be modified or overwritten. For example, a merged partial width load may store the partial width data in the lowest-order portion of the register, overwriting the original or pre-existing data in that lowest-order portion, while retaining the original or pre-existing data in the remaining highest-order portion of the register. In various embodiments, a combination of two or more of these or other types of partial width load modes may be used. Different corresponding values 116 may be written to the destination register in the different modes. In the example shown, in the zero-extended partial width load mode, the partial width data 112 may be included in a portion of the destination register (e.g., the lowest-order half or other portion), and all zeros 117 may be stored in another portion of the destination register (e.g., the highest-order portion). In the sign-extended partial width load mode, the partial width data 112 may be included in a portion of the destination register (e.g., the lowest-order half or other portion), and sign bits 118 (e.g., based on the most significant bit of the partial width data 112) may be stored in another portion of the destination register (e.g., the highest-order half or some other portion). In other embodiments, other types of partial width loads, or other combinations of different partial width loads, may be used. For example, in another embodiment, a merged partial width load mode (see, e.g., FIG. 3) may be used instead of one of the modes shown, or as an additional third type of partial width load mode (or another type of partial width load mode).
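The three behaviors just described can be modeled in software as a single mode-dependent write function. The sketch below uses a 16-bit register and an 8-bit partial width for concreteness and reuses the hypothetical mode names from the earlier sketch; it is a model of the described behavior, not the disclosed hardware implementation.

```c
#include <stdint.h>

/* Software model of a mode-dependent register write for an 8-bit partial
 * load into a 16-bit register; a sketch under assumed mode encodings. */
enum pw_load_mode { PW_ZERO_EXTEND, PW_SIGN_EXTEND, PW_MERGE };

static uint16_t pw_write(enum pw_load_mode mode, uint16_t dest_old,
                         uint8_t partial)
{
    switch (mode) {
    case PW_ZERO_EXTEND:                        /* zeros (117) fill bits [15:8]      */
        return (uint16_t)partial;
    case PW_SIGN_EXTEND:                        /* copies of bit 7 (118) fill [15:8] */
        return (uint16_t)(int16_t)(int8_t)partial;
    case PW_MERGE:                              /* pre-existing bits [15:8] kept     */
    default:
        return (uint16_t)((dest_old & 0xFF00u) | partial);
    }
}
```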
In order to avoid obscuring the present description and concepts, a simplified processor 101 has been shown and described. The processor may include various other well-known components typically found in processors. Examples of such components include, but are not limited to, branch prediction units, instruction fetch units, instruction and data caches, second level or higher caches, instruction and data translation lookaside buffers, prefetch buffers, microinstruction queues, microinstruction sequencers, register renaming units, instruction dispatch units, bus interface units, retirement units, other components included in processors, and various combinations of the foregoing. Indeed, there are numerous different combinations and configurations of components within a processor, and embodiments are not limited to any particular combination or configuration.

FIG. 2 is a block diagram of a first example embodiment of a partial width load operation 219 that may be performed according to a sign-extended partial width load mode 220 or a zero-extended partial width load mode 221. The partial width load operation may be performed in response to the partial width load instruction 202 and an associated partial width load mode 205 (e.g., the partial width load mode at the time the partial width load instruction is decoded).

The partial width load instruction may specify or otherwise indicate memory location 211 as the source operand and may specify or otherwise indicate destination register 215 as the destination operand. The memory location stores the partial width data to be loaded into the destination register in response to the partial width load instruction. To simplify illustration, 8-bit wide partial width data 212 is shown. In the example shown, the 8-bit partial width data has the value "11110000" from the most significant bit position on the left to the least significant bit position on the right. However, it should be understood that these values are only examples and that any other possible bit values may alternatively be used. Additionally, in other embodiments, the partial width data may have other widths (e.g., 16 bits, 32 bits, 64 bits, etc.) instead of being 8 bits wide. Similarly, to simplify illustration, the destination register is shown as having a width of 16 bits. In other embodiments, the destination register may have other widths (e.g., 32 bits, 64 bits, 128 bits, etc.). In some embodiments, as in this example, the width of the partial width data may be half the width of the destination register, but this is not required. In other embodiments, for example, the partial width data may be 16 bits and the destination register may be 64 bits, or the partial width data may be 8 bits and the destination register may be 32 bits, etc. As another example, the partial width data may be 64 bits and the destination register may be 128 bits.

The mode-dependent register write unit 214 is coupled to the destination register 215, to the memory location 211, and to the partial width load instruction 202 (e.g., via the decode unit). In response to the partial width load instruction, the register write unit is operable to perform a write or store to the destination register.
The manner in which the write or store is performed may depend on or be based on the partial width load mode associated with the partial width load instruction. In this first example embodiment, the partial width load mode may indicate either a sign-extended partial width load mode 220 (on the left in the figure) or a zero-extended partial width load mode 221 (on the right in the figure). In both of these different types of partial width loads, the partial width data 212 (e.g., an 8-bit partial width in this example) may be written or stored in the destination register. In the embodiment shown, the loaded partial width data is written to the lowest-order portion of the destination register, but this is not required. As shown, in the sign-extended partial width load mode 220 or the zero-extended partial width load mode 221, the 8-bit partial width data value "11110000" may be stored in bits [7:0] of destination registers 215-1, 215-2. In other embodiments, other sizes of partial width data may be used and/or the partial width data may be written to other portions/locations of the destination register (e.g., the most significant portion, etc.).

In the embodiment shown, the highest-order portion of the destination register (e.g., the 8-bit highest-order portion in this example) is handled differently for these two different types of partial width load operations/modes. For the sign-extended partial width load mode 220, the mode-dependent register write unit is controlled to write or store sign bits 218 (e.g., copies of the most significant bit of the partial width data) to all remaining higher-order bits of destination register 215-1 that are not used to store the partial width data. In the example shown, bit 7 is the sign bit or most significant bit of the partial width data. In this example, the sign bit is a binary one, and accordingly a binary one 218 is stored in each of bits [15:8] of destination register 215-1. For the zero-extended partial width load mode 221, the mode-dependent register write unit is controlled to write or store all zeros 217 to the remaining higher-order bits of destination register 215-2 that are not used to store the partial width data. As shown, a binary zero 217 may be stored in each of bits [15:8] of destination register 215-2. Accordingly, the mode-dependent register write unit and/or processor may complete the write to destination register 215 with bits (e.g., 217 or 218) that depend on the partial width load mode and that are stored in the portion of destination register 215 not used to store the partial width data (e.g., all higher-order bits except the lowest-order portion used to store the partial width data).
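Using the example value "11110000" (0xF0), the two FIG. 2 results can be verified directly with C's own conversion rules. This standalone snippet is merely a sanity check of the described behavior, not the disclosed implementation.

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint8_t partial = 0xF0;  /* "11110000"; bit 7 (the sign bit) is 1 */

    /* Zero-extended mode 221: binary zeros 217 fill bits [15:8]. */
    assert((uint16_t)partial == 0x00F0);

    /* Sign-extended mode 220: copies of bit 7 (218) fill bits [15:8]. */
    assert((uint16_t)(int16_t)(int8_t)partial == 0xFFF0);
    return 0;
}
```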
However, it should be understood that the features and characteristics described above for the first example embodiment of Figure 2 may optionally also apply to this second example embodiment.

The partial-width load operation 319 may be performed in response to a partial-width load instruction 302 and an associated partial-width load mode 305. The partial-width load instruction may specify or otherwise indicate a memory location 311 as a source operand and may specify or otherwise indicate a destination register 315 as a destination operand. The memory location stores the partial-width data 312 to be loaded into the destination register in response to the partial-width load instruction. To simplify the explanation, 8-bit partial-width data and a 16-bit destination register are shown in this example, although the scope of the invention is not so limited.

In response to the partial-width load instruction, the mode-dependent register write unit 314 is operable to perform a write or store to the destination register. The manner in which the write or store is performed may depend on, or be based on, the partial-width load mode associated with the partial-width load instruction. In this second example embodiment, the partial-width load mode may indicate either a merged partial-width load mode 322 (on the left in the diagram) or a zero-extended partial-width load mode 321 (on the right in the diagram). In both of these different types of partial-width loads, the partial-width data 312 (e.g., 8 bits wide in this example) may be written or stored in the destination register. In the embodiment shown, the loaded partial-width data is written to the lowest-order portion of the destination register, although this is not required. As shown, in either the merged partial-width load mode 322 or the zero-extended partial-width load mode 321, the 8-bit partial-width data value "11110000" may be stored in bits [7:0] of destination registers 315-1, 315-2. In other embodiments, other sizes of the partial-width data may be used and/or the partial-width data may be written to other portions/locations of the destination register (e.g., a most significant portion, etc.).

In the illustrated embodiment, the highest-order portion of destination register 315 (e.g., the 8-bit highest-order portion in this example) is handled differently for the two different types of partial-width load operations/modes. For the merged partial-width load mode 322, the mode-dependent register write unit is controlled to complete the write to destination register 315-1 by storing, in the portion of destination register 315-1 that is not used to store the partial-width data (e.g., all higher-order bits except the lowest-order 8 bits that store the 8-bit partial-width data), bits 323-1 that depend on the partial-width load mode. For example, as shown, the register write unit may complete the write to destination register 315-1 by retaining the initial set of bits in the most significant remaining 8-bit portion of destination register 315-1 (i.e., the bits that were in destination register 315-1 before the partial-width load instruction). For reference, a pre-existing bit value 323-2 that existed prior to execution of the partial register write instruction is shown. Bits [15:8] of the pre-existing bit value 323-2 have the value "01010101".
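This merge behavior can be modeled with a short, hedged C sketch; the names are illustrative, and the starting register value 0x55AA is chosen arbitrarily so that bits [15:8] hold the example pattern "01010101".

    #include <stdint.h>
    #include <stdio.h>

    /* Model of the merged partial-width load mode of Figure 3: the loaded
     * 8-bit value replaces bits [7:0] of the 16-bit destination register,
     * while bits [15:8] keep their pre-existing contents. */
    static uint16_t merged_partial_load(uint16_t old_reg, uint8_t loaded)
    {
        return (uint16_t)((old_reg & 0xFF00u) | loaded); /* keep the high byte */
    }

    int main(void)
    {
        uint16_t old_reg = 0x55AA; /* bits [15:8] = "01010101", as in the example */
        printf("merged: 0x%04X\n", merged_partial_load(old_reg, 0xF0)); /* 0x55F0 */
        return 0;
    }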
Note that after the partial register write instruction is executed, the same bit values "01010101" that existed in bits [15:8] of the pre-existing bit value 323-2 before the instruction was executed appear in bits [15:8] of destination register 315-1. The partial-width data can thus be merged or inserted into the destination register, replacing some bits while leaving other bits unchanged. For the zero-extended partial-width load mode 321, the mode-dependent register write unit is controlled to write or store all zeros 317 to all remaining higher-order bits of destination register 315-2 that are not used to store the partial-width data. As shown, binary zeros 317 may be stored in each of bits [15:8] of destination register 315-2.

It should be understood that these are only a few illustrative example embodiments of suitable types of partial-width load modes. Another embodiment is contemplated in which the first mode is a sign-extended partial-width load mode and the second mode is a merged partial-width load mode. Yet another embodiment is contemplated in which the embodiment of Figure 2 adds a further third mode that is a merged partial-width load mode. A further embodiment is contemplated in which the embodiment of Figure 3 adds a further third mode that is a sign-extended partial-width load mode. Other embodiments may be based on other types of partial-width load modes (e.g., ones-extension, etc.). Furthermore, as previously mentioned, in other embodiments the bit widths of the partial-width data and/or the destination register may each be wider or narrower, and the width of the partial-width data need not be half the width of the destination register.

In certain embodiments, the operations of Figure 2 and/or Figure 3 may be performed by and/or within the processor of Figure 1. The details and optional details described above for the processor of Figure 1 may also optionally apply to the operations of Figures 2 and/or 3, which in various embodiments may be performed by and/or within such a processor. Alternatively, the operations of Figures 2 and/or 3 may be performed by and/or within a similar or different processor. Additionally, the processor of Figure 1 may perform operations the same as, similar to, or different from those of Figures 2 and/or 3.

Figure 4 is a flow block diagram of an embodiment of a method 430 that may be performed by a processor when processing an embodiment of a partial-width load instruction. In certain embodiments, the operations and/or method of Figure 4 may be performed by and/or within the processor of Figure 1. The components, features, and specific optional details described for the processor of Figure 1 may also optionally apply to the operations and/or method of Figure 4, which in various embodiments may be performed by and/or within such a processor. Alternatively, the operations and/or method of Figure 4 may be performed by and/or within a similar or different processor. Additionally, the processor of Figure 1 may perform operations and/or methods the same as, similar to, or different from those of Figure 4.

The method includes, at block 431, receiving a partial-width load instruction.
In various aspects, the instruction may be received at a processor, an instruction processing apparatus, or a portion thereof (e.g., an instruction fetch unit, a decode unit, etc.). In various aspects, the instruction may be received from an off-die source (e.g., from main memory, an interconnect, etc.) or from an on-die source (e.g., from a fetch unit, an instruction cache, etc.). In some embodiments, the partial-width load instruction may specify or otherwise indicate a memory location as a source operand and may specify or otherwise indicate a register as a destination operand. In some embodiments, the partial-width load instruction may not specify or otherwise indicate the partial-width load mode (e.g., may not have a bit or bits to specify a mode or select among predetermined modes).

The method may optionally include, at block 432, checking or otherwise determining the partial-width load mode. Alternatively, the partial-width load mode may be imposed automatically without checking or determining it. The method also includes, at block 433, loading data from the indicated memory location to the processor in response to the partial-width load instruction.

The method includes, at block 434, writing at least a portion of the loaded data to a partial width of the destination register in response to the partial-width load instruction. Depending on the implementation of the instruction, all or only part of the loaded data may be written. In one example embodiment, 32 bits of the loaded data may be written to the lowest-order 32-bit half of a 64-bit destination register. In other embodiments, other sizes of partial-width data and/or destination registers may optionally be used.

The method also includes, at block 435, completing the write to the destination register by storing, in the remainder of the width of the destination register, a group of bits whose values depend on the processor's partial-width load mode. Those bits may be different (i.e., have different bit values) from one mode to another. In some embodiments, in a first mode 436, this may optionally include, at block 437, copying, writing, or otherwise storing sign bits (e.g., copies of the sign bit of the partial-width data) into the remaining width of the destination register. In some embodiments, in a second mode 438, this may optionally include, at block 439, writing all zeros to all of the remaining width of the destination register. In other embodiments, either of the first and second modes may be replaced with a merged partial-width load mode or some other type of partial-width load mode.

The method has been described in a relatively basic form, but operations may optionally be added to and/or removed from the method. For example, as noted above, determining the partial-width load mode is optional (e.g., the partial-width load mode may instead be imposed). As another example, additional operations may optionally be added, for example, to decode the instruction, fetch the instruction, transmit bits on a bus, receive bits from a bus, etc. As another example, operations associated with using the result in the destination register may optionally be added to the method. For example, a result placed in the destination register by a zero-extended partial-width load may be used by a 64-bit processor operating in a 64-bit processing mode so that memory is accessed with a zero-extended 32-bit memory address. As another example, a result placed in the destination register by a zero-extended partial-width load may be used by a 128-bit (or other relatively wide) processor operating in a 128-bit (or other relatively wide) processing mode so that memory is accessed with a zero-extended 64-bit (or other relatively narrow) memory address.
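Returning briefly to blocks 434 and 435, the following C sketch models the mode-dependent completion step for one possible size combination (32-bit loaded data, 64-bit destination register). The enum and function names are assumptions made for illustration only.

    #include <stdint.h>

    /* Sketch of blocks 434-435 of method 430: write the loaded data to the
     * low half of the register, then fill the remainder per the mode. */
    typedef enum { MODE_SIGN_EXTEND, MODE_ZERO_EXTEND } pw_mode_t;

    static uint64_t complete_partial_width_load(uint32_t loaded, pw_mode_t mode)
    {
        uint64_t reg = loaded;  /* block 434: loaded data into bits [31:0] */
        /* block 435: mode-dependent bits into bits [63:32] */
        if (mode == MODE_SIGN_EXTEND && (loaded & 0x80000000u)) {
            reg |= 0xFFFFFFFF00000000ull;  /* block 437: copies of the sign bit */
        }
        /* block 439 (zero-extend): bits [63:32] are already zero */
        return reg;
    }

In hardware, the writes of blocks 434 and 435 could also occur at the same time, consistent with the reordering note below.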
Additionally, although the flowchart illustrates a specific order of operations according to various example embodiments, that specific order is exemplary. Alternative embodiments may optionally perform the operations in a different order, combine certain operations, overlap certain operations, etc. For example, the operations at blocks 434 and 435 may be performed simultaneously or in parallel, rather than serially.

The mode-dependent partial-width load instructions and partial-width load modes disclosed herein are general-purpose and may be used for a variety of different purposes. For example, they may be used to provide different types of partial-width load operations for the same opcode. However, to further illustrate certain concepts, specific examples of using mode-dependent partial-width load instructions and partial-width load modes for different or more efficient memory addressing are described further below. In some embodiments, relatively small memory addresses or pointers may be used within a relatively large address space. As an example, 64-bit-long zero-extended 32-bit pointers may be used in a 64-bit architecture processor and/or a 64-bit address space. As another example, 128-bit-long zero-extended 64-bit pointers may be used in a 128-bit architecture processor and/or a 128-bit address space.

Most modern general-purpose processors have either a 32-bit or a 64-bit architecture. In the future, it is likely that 128-bit architecture processors will become prevalent. Various special-purpose processors may use 32-bit or 64-bit architectures, or various other widths, such as 8-bit, 9-bit, 12-bit, 18-bit, 24-bit, 36-bit, 39-bit, 40-bit, 48-bit, or 60-bit architectures. These bit widths generally refer to the bit widths of various attributes within the processor's architecture. For example, these bit widths may refer to the supported memory address sizes, integer sizes, integer register sizes, etc., or a combination thereof. For example, a 64-bit architecture processor may support a 64-bit integer format, may have 64-bit integer general-purpose registers, may support the use of 64-bit memory addresses to access memory, and so on. Some processors do not use the same bit width for all such parameters. Such a bit width is sometimes referred to as the "word" size of the processor. In other words, a 64-bit architecture processor may use 64-bit "words," a 32-bit architecture processor may use 32-bit "words," and so on.

Compared to 32-bit architecture processors, 64-bit architecture processors may tend to have certain enhanced features or advantages. For example, in some embodiments, a 64-bit processor may have a greater number of processor registers, greater general processing capability, or various other mechanisms or features that help improve performance. However, certain aspects of operating on a 64-bit architecture typically involve relatively higher overhead than operating on a 32-bit architecture. One such potential aspect involves the use of 64-bit memory addresses or pointers. As mentioned above, in some cases, 64-bit architecture processors may generally use 64-bit memory addresses or pointers to access memory.
Such 64-bit memory addresses or pointers generally allow data to be accessed from anywhere in memory. Consecutive or sequential 64-bit memory address values may specify corresponding consecutive locations in memory. These locations may represent units of address resolution (e.g., words, bytes, etc.).

However, many user-level application modules or other software modules do not need to use 64-bit memory pointers, and/or do not need to access data from anywhere in memory, frequently enough to justify the extra overhead associated with using 64-bit memory pointers. This may often be the case, for example, when an entire user-level application module or other software module fits within a limited contiguous range of memory. As a specific example, many commonly used user-level software modules and other modules can be contained within four gigabytes of memory. A 32-bit memory address is sufficient to address anywhere within a contiguous four gigabytes of memory. When a software module (e.g., the code and the data to be accessed most frequently) fits within the four gigabytes addressable by 32-bit memory addresses or pointers, using the smaller 32-bit memory addresses can be more efficient and/or more effective than using the larger 64-bit memory addresses. Advantageously, using smaller 32-bit memory addresses in a 64-bit architecture processor can help take advantage of the enhanced features generally associated with 64-bit architectures while, when appropriate (e.g., when the software module fits within four gigabytes), selectively helping to reduce the unwanted or unnecessary overhead associated with using 64-bit memory addresses and/or a larger memory footprint.

Programs tend to run faster and/or use less memory when smaller pointers (e.g., 32-bit pointers) are used than when larger pointers (e.g., 64-bit pointers) are used. For one thing, less memory is generally required to store the smaller pointers. In addition, smaller pointers generally occupy less space in the caches used in the processor, which speeds access to frequently used data. As a result, when small pointers are used, the caches may be able to hold a greater number of such smaller pointers and/or more data. Additionally, the likelihood of finding data or a pointer in the cache tends to increase as more such pointers or data are held in the cache (e.g., when smaller pointers are used), which can help achieve increased program performance. Also, programs that use less data tend to make the system run faster. In some systems, the entire collection of code and data for the programs in use may not fit in memory. Portions of the programs can be stored temporarily on disk and brought into memory when needed (for example, in the case of demand paging). However, when programs are smaller (as may be the case when smaller pointers are used), the programs may tend to leave more room in memory for other programs that are not currently running, making it unnecessary to page such programs out to make room in memory. In general, system performance tends to improve when less paging occurs, since such paging tends to delay program execution and may tend to delay the input/output of other programs.
Other potential advantages of using 32-bit memory addresses as compared to 64-bit memory addresses include reduced power consumption.

Figure 5 is a block diagram of an embodiment of a computer system 500 including a 64-bit architecture processor 501 for performing zero-extended 32-bit memory addressing. The 64-bit architecture processor is coupled with a memory 510 through an interconnect mechanism 509. The memory has a 64-bit addressable range 550. The 64-bit addressable range includes, as a subset thereof, a smaller contiguous 32-bit addressable range 552. The 32-bit addressable range may be approximately four gigabytes. A software module 554 (e.g., a user-level application or other software) is stored within this 32-bit addressable range. The software module has 32-bit memory addresses 512.

The 64-bit processor may optionally have the features or characteristics of the other processors disclosed herein (e.g., the processor 101 of Figure 1). The 64-bit architecture processor includes a mode-dependent partial-width load system 513. The mode-dependent partial-width load system may optionally have features and characteristics similar or identical to those of the mode-dependent partial-width load system 113 of Figure 1. The mode-dependent partial-width load system supports a zero-extended 32-bit load mode 505. The processor also has a 64-bit register 515. In some embodiments, the register may be a register usable for memory addressing, such as a general-purpose register in some processors, or a special-purpose register for memory addressing in other processors.

A 32-bit load instruction 502 may be provided to the mode-dependent partial-width load system. The 32-bit load instruction may specify or otherwise indicate one of the 32-bit memory addresses 512 (representing the partial-width data) as a source operand, and may specify or otherwise indicate the 64-bit register 515 as a destination operand. In response to the 32-bit load instruction 502 and the zero-extended 32-bit load mode 505, the mode-dependent partial-width load system may perform a zero-extended 32-bit load operation 519 that includes storing the indicated 32-bit memory address 512 into the lowest-order 32 bits of the 64-bit destination register 515 and storing zeros into the highest-order 32 bits of the 64-bit destination register 515.

Subsequently, a memory access instruction 556 may be provided to the processor. The memory access instruction may indicate the 64-bit, zero-extended 32-bit memory address resulting from the 32-bit load instruction 502 and/or the operation 519. For example, the memory access instruction may indicate the destination register 515, or the result may be moved from the destination register to another register indicated by the memory access instruction. In response to the memory access instruction 556, the processor may perform a memory access 558 (e.g., a load from memory, a write to memory, etc.) using, or based on, the 64-bit, zero-extended 32-bit memory address.

This is just one illustrative example. In other embodiments, processors with architectures other than 64-bit may optionally be used. Additionally, in other embodiments, memory addresses with sizes other than 32 bits may optionally be used. For example, in some embodiments, 128-bit zero-extended 64-bit memory addresses may be used for 128-bit architecture processors and/or 128-bit address spaces or addressing modes. As another example, 64-bit zero-extended 16-bit or 48-bit pointers may optionally be used for 64-bit architecture processors or 64-bit address spaces. In general, zero-extended smaller pointers or memory addresses may be used for larger-architecture processors or address spaces.
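The zero-extension at the heart of operation 519 is straightforward to model; the following C sketch is illustrative only and assumes the 32-bit/64-bit case of Figure 5.

    #include <stdint.h>

    /* Model of the zero-extended 32-bit load operation 519: a 32-bit memory
     * address 512 goes into the low 32 bits of the 64-bit register 515, and
     * the high 32 bits are zeroed, yielding a full 64-bit address confined
     * to the four-gigabyte range 552. */
    static uint64_t zero_extend_pointer(uint32_t addr32)
    {
        /* Converting uint32_t to uint64_t zero-extends, mirroring the
         * hardware storing zeros into bits [63:32] of register 515. */
        return (uint64_t)addr32;
    }

A subsequent memory access (like the instruction 556) would then treat the returned value as an ordinary 64-bit address; because the upper half is zero, every such access stays within the low four gigabytes.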
User-level application modules or other unprivileged software modules generally need a way to change the partial-width load mode when it is appropriate to do so (e.g., when they plan to use a mode different from the one currently in effect). This can be done in different ways. In some embodiments, the partial-width load mode may be directly changeable by user-level or unprivileged software modules. In other embodiments, user-level or unprivileged software modules may not be able to change the partial-width load mode directly. Rather, in some embodiments, changes to the partial-width load mode may be reserved for privileged software modules (e.g., operating system modules, virtual machine monitor modules, etc.). In such embodiments, the privileged software module may provide an interface (e.g., a service, an interface, etc.) to allow user-level or unprivileged software modules to request that the privileged software module change the partial-width load mode.

Figure 6 is a block diagram illustrating an embodiment of interaction between a user-level software module 670 (or other unprivileged software module) and an operating system module 660 (or other privileged software module) to change the partial-width load mode. The user-level software module includes metadata 672 that includes an indication 674 to use a given partial-width load mode. The given partial-width load mode may be the partial-width load mode that the software module expects or intends to use. As one example, the metadata may include an object module format or other data structure used by the user-level software module to convey information about itself to the operating system module 660. For example, the metadata may include a flag or one or more bits to indicate the desired partial-width load mode. The operating system module includes a program loader module 662. The program loader module and/or the operating system module is operable to inspect the metadata, including the indication 674. The operating system module may include a partial-width load mode changer module 664 operable to change the partial-width load mode 605 of the processor to the indicated partial-width load mode 674. For example, the partial-width load mode may be changed in a configuration or control register of the processor.

In some embodiments, a processor may have an instruction set that includes an instruction to change the partial-width load mode. In this example, the instruction would be a privileged-level instruction reserved for an operating system module or similar privileged-level software module, although in other embodiments the instruction may be a user-level instruction capable of being used by user-level software modules. This instruction may be used by the partial-width load mode changer module 664 when the partial-width load mode is to be changed. The instruction may be decoded and executed, or otherwise performed, by the processor to change the mode from an initial mode to a different mode. In some embodiments, after such a mode change, a jump may optionally be performed so that instructions are fetched and decoded in the updated, different mode, rather than using instructions that were already in the pipeline at the time the mode-changing instruction executed and that had been decoded in the now-obsolete mode.
In some embodiments, the instruction set may also have an instruction to read the partial-width load mode. This instruction may be a user-level instruction capable of being used by user-level software modules, or a privileged-level instruction reserved for operating system modules or similar privileged-level software modules. The instruction may be decoded and, when executed or otherwise performed, may read the partial-width load mode (e.g., read one or more bits in a register). As an example, a user-level code module or other software module may use such an instruction to learn the partial-width load mode.

The possible use of a partial-width load mode to support 32-bit addressing on 64-bit processors was discussed above. Operating system modules, interrupt handler modules, and potentially other software may primarily use 64-bit memory addressing rather than zero-extended 32-bit memory addressing. As a result, operating system modules, interrupt handler modules, etc. may not need to use a partial-width load mode that provides zero-extended 32-bit loads, but may instead use a partial-width load mode that provides an alternative type of 32-bit load for the same instruction or opcode, such as a sign-extended 32-bit load or a merged 32-bit load. As shown, in some embodiments, the processor may include default partial-width load mode fallback logic or unit 676 to cause an automatic fallback to a default partial-width load mode under certain conditions. Examples of such conditions may include: a switch or transition from executing user-level code to operating system code; upon detecting an interrupt 678 or when an interrupt 678 is reported to the interrupt handler module 668; at the beginning of interrupt handling by the interrupt handler module; and so on. The unit or logic may be implemented in hardware, or in hardware potentially combined with some firmware and/or possibly some software. In the default partial-width load mode, an instruction or opcode may be treated, for example, as a sign-extended 32-bit load or a merged 32-bit load, rather than a zero-extended 32-bit load. In some embodiments, the default partial-width load mode fallback logic or unit may also cause the partial-width load mode to automatically revert from the default partial-width load mode back to the partial-width load mode that existed before the switch to operating system code or interrupt handling (e.g., upon return from interrupt 679). The return instruction may read the partial-width load mode on return to determine how to decode partial-width load instructions.

Figure 7 is a flow block diagram of an embodiment of a method 780 that may be performed by an operating system module, a privileged module, or another system-level module. The method includes, at block 781, inspecting metadata of a software module. This includes checking for an indication of a partial-width load mode of the processor that is to be used when the processor executes a partial-width load instruction, where the partial-width load instruction indicates a memory location as a source operand and a register as a destination operand.

The method includes, at block 782, changing the partial-width load mode of the processor to the indicated partial-width load mode.
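A minimal sketch of blocks 781 and 782 follows, assuming a metadata layout and a privileged mode-write helper that the text does not define concretely; all names here are hypothetical.

    #include <stdbool.h>

    /* Hypothetical mode encoding and privileged control-register write
     * (e.g., standing in for a privileged mode-change instruction). */
    typedef enum { MODE_SIGN_EXTEND, MODE_ZERO_EXTEND, MODE_MERGE } pw_load_mode_t;
    extern void write_pw_load_mode_control(pw_load_mode_t mode);

    struct module_metadata {
        bool has_pw_mode_hint;       /* flag: module declares a desired mode */
        pw_load_mode_t desired_mode; /* the indication 674 of Figure 6 */
    };

    /* Blocks 781-782 of method 780: inspect the module's metadata and, if it
     * indicates a partial-width load mode, switch the processor to it. */
    void program_loader_set_mode(const struct module_metadata *md)
    {
        if (md->has_pw_mode_hint) {
            write_pw_load_mode_control(md->desired_mode);
        }
    }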
In some embodiments, changing the partial-width load mode controls a change in the bit values to be stored by the processor in the portion of the indicated register that is not used to store the partial-width data loaded from memory. In some embodiments, an operating system module or other system-level module may maintain a set of metadata to record or track which programs use which partial-width load modes. Some programs may use one partial-width load mode when executing a partial-width load instruction (e.g., a given opcode), while other programs may use a different partial-width load mode when executing the partial-width load instruction (e.g., the same given opcode). When switching between multiple programs and/or when returning from an interrupt, the operating system module or other module may access the metadata to determine which partial-width load mode, appropriate to the program, to put the processor into. As an example, in some embodiments, when switching to a program that is expected to use a different mode, the operating system may use an instruction to change the mode. As an example, in some embodiments, the operating system may use a different instruction to read the current mode to learn whether a mode change is needed when switching programs.
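As a sketch of this bookkeeping, assuming hypothetical helpers standing in for the mode-read and mode-change instructions just described:

    /* Hypothetical per-program tracking of partial-width load modes. */
    typedef enum { MODE_SIGN_EXTEND, MODE_ZERO_EXTEND, MODE_MERGE } pw_load_mode_t;

    extern pw_load_mode_t read_pw_load_mode_control(void);  /* "read mode" */
    extern void write_pw_load_mode_control(pw_load_mode_t); /* "change mode" */

    struct program {
        pw_load_mode_t pw_mode; /* mode recorded in the OS metadata */
        /* ... other per-program state ... */
    };

    /* On a program switch or return from interrupt: consult the metadata and
     * change the mode only if the incoming program needs a different one. */
    void restore_pw_mode(const struct program *next)
    {
        if (read_pw_load_mode_control() != next->pw_mode) {
            write_pw_load_mode_control(next->pw_mode);
        }
    }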
In some embodiments, a user-level application module or other unprivileged software module may include a debug information module or other metadata module to convey to a debugger module which partial-width load mode the software module expects or plans to use. The debugger module may access and inspect this debug information module or other metadata module to determine the partial-width load mode to be used by the software. This allows the debugger module to correctly interpret how the processor will handle the instructions.

For simplicity of description, two different modes and/or meanings of an opcode have generally been described herein. However, it should be understood that other embodiments may use three, four, or more different modes and/or meanings for a given opcode. As an example, a processor may have two or more bits to indicate which of a number of such different meanings is to be used for a given opcode.

Different interpretations of partial-width load instructions have been emphasized herein. In other embodiments, an instruction of another type, other than a partial-width load instruction, may in another mode be interpreted as a partial-width load instruction (e.g., a zero-extended partial-width load instruction).

Exemplary core architecture, processor and computer architecture

Processor cores can be implemented in different ways, for different purposes, and in different processors. For example, implementations of such cores may include: 1) a general-purpose in-order core intended for general-purpose computing; 2) a high-performance general-purpose out-of-order core intended for general-purpose computing; 3) a special-purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general-purpose in-order cores intended for general-purpose computing and/or one or more general-purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special-purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor in the same package as the CPU but on a separate die; 3) the coprocessor on the same die as the CPU (in which case such a coprocessor is sometimes referred to as special-purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special-purpose cores); and 4) a system on a chip that may include, on the same die, the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above-described coprocessor, and additional functionality. An exemplary core architecture is described next, followed by exemplary processor and computer architectures.

Example core architecture

In-order and out-of-order core block diagram

Figure 8A is a block diagram illustrating an exemplary in-order pipeline and an exemplary register-renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 8B is a block diagram illustrating an exemplary embodiment of an in-order architecture core and an exemplary register-renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid boxes in Figures 8A-B show the in-order pipeline and in-order core, while the optional addition of dashed boxes shows the register-renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In Figure 8A, a processor pipeline 800 includes a fetch stage 802, a length decode stage 804, a decode stage 806, an allocation stage 808, a renaming stage 810, a scheduling (also known as dispatch or issue) stage 812, a register read/memory read stage 814, an execute stage 816, a write back/memory write stage 818, an exception handling stage 822, and a commit stage 824.

Figure 8B shows a processor core 890 including a front-end unit 830 coupled to an execution engine unit 850, both of which are coupled to a memory unit 870. The core 890 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 890 may be a special-purpose core, such as, for example, a network or communication core, a compression engine, a coprocessor core, a general-purpose computing graphics processing unit (GPGPU) core, a graphics core, or the like.

The front-end unit 830 includes a branch prediction unit 832 coupled to an instruction cache unit 834, which is coupled to an instruction translation lookaside buffer (TLB) 836, which is coupled to an instruction fetch unit 838, which is coupled to a decode unit 840. The decode unit 840 (or decoder) may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or otherwise reflect, or are derived from, the original instructions. The decode unit 840 may be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, lookup tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), etc. In one embodiment, the core 890 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 840 or otherwise within the front-end unit 830).
The decode unit 840 is coupled to a rename/allocator unit 852 in the execution engine unit 850.

The execution engine unit 850 includes the rename/allocator unit 852 coupled to a retirement unit 854 and a set of one or more scheduler units 856. The scheduler units 856 represent any number of different schedulers, including reservation stations, a central instruction window, and the like. The scheduler units 856 are coupled to physical register file units 858. Each of the physical register file units 858 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), and so on. In one embodiment, the physical register file units 858 include a vector register unit, a write mask register unit, and a scalar register unit. These register units provide architectural vector registers, vector mask registers, and general-purpose registers. The physical register file units 858 are overlapped by the retirement unit 854 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffers and retirement register files; using future files, history buffers, and retirement register files; using register maps and a pool of registers; etc.). The retirement unit 854 and the physical register file units 858 are coupled to an execution cluster 860. The execution cluster 860 includes a set of one or more execution units 862 and a set of one or more memory access units 864. The execution units 862 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler units 856, physical register file units 858, and execution clusters 860 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit, and/or execution cluster; and, in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access units 864). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution while the rest are in-order.

The set of memory access units 864 is coupled to the memory unit 870, which includes a data TLB unit 872 coupled to a data cache unit 874, which is coupled to a level 2 (L2) cache unit 876. In one exemplary embodiment, the memory access units 864 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 872 in the memory unit 870. The instruction cache unit 834 is further coupled to the level 2 (L2) cache unit 876 in the memory unit 870.
The L2 cache unit 876 is coupled to one or more other levels of cache and eventually to main memory.

By way of example, the exemplary register-renaming, out-of-order issue/execution core architecture may implement the pipeline 800 as follows: 1) the instruction fetch unit 838 performs the fetch and length decode stages 802 and 804; 2) the decode unit 840 performs the decode stage 806; 3) the rename/allocator unit 852 performs the allocation stage 808 and the renaming stage 810; 4) the scheduler units 856 perform the scheduling stage 812; 5) the physical register file units 858 and the memory unit 870 perform the register read/memory read stage 814, and the execution cluster 860 performs the execute stage 816; 6) the memory unit 870 and the physical register file units 858 perform the write back/memory write stage 818; 7) various units may be involved in the exception handling stage 822; and 8) the retirement unit 854 and the physical register file units 858 perform the commit stage 824.

The core 890 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif.; the ARM instruction set of ARM Holdings (with optional additional extensions such as NEON)), including the instructions described herein. In one embodiment, the core 890 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in Hyper-Threading Technology).

Although register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. Although the illustrated embodiment of the processor also includes separate instruction and data cache units 834/874 and a shared L2 cache unit 876, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the caches may be external to the core and/or the processor.

Specific exemplary in-order core architecture

Figures 9A-B illustrate a block diagram of a more specific exemplary in-order core architecture, in which the core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. Depending on the application, the logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic.

Figure 9A is a block diagram of a single processor core, along with its connection to an on-die interconnect network 902 and its local subset of the level 2 (L2) cache 904, according to embodiments of the present invention.
In one embodiment, an instruction decoder 900 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 906 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 908 and a vector unit 910 use separate register sets (respectively, scalar registers 912 and vector registers 914) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 906, alternative embodiments of the invention may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 904 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset 904 of the L2 cache. Data read by a processor core is stored in its L2 cache subset 904 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 904 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bidirectional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction.

Figure 9B is an expanded view of part of the processor core of Figure 9A according to embodiments of the present invention. Figure 9B includes the L1 data cache 906A, part of the L1 cache 906, as well as more detail regarding the vector unit 910 and the vector registers 914. Specifically, the vector unit 910 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 928), which executes one or more of integer, single-precision floating point, and double-precision floating point instructions. The VPU supports mixing the register inputs with mix unit 920, numeric conversion with numeric convert units 922A-B, and replication of the memory input with replicate unit 924. Write mask registers 926 allow predicating the resulting vector writes.

Processor with integrated memory controller and graphics

Figure 10 is a block diagram of a processor 1000 that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to various embodiments of the invention. The solid boxes in Figure 10 illustrate a processor 1000 with a single core 1002A, a system agent 1010, and a set of one or more bus controller units 1016, while the optional addition of dashed boxes illustrates an alternative processor 1000 with multiple cores 1002A-N, a set of one or more integrated memory controller units 1014 in the system agent unit 1010, and special-purpose logic 1008.
Accordingly, different implementations of the processor 1000 may include: 1) a CPU, where the special-purpose logic 1008 is integrated graphics and/or scientific (throughput) logic (which may include one or more cores) and the cores 1002A-N are one or more general-purpose cores (e.g., general-purpose in-order cores, general-purpose out-of-order cores, or a combination of the two); 2) a coprocessor, where the cores 1002A-N are a large number of special-purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor, where the cores 1002A-N are a large number of general-purpose in-order cores. Thus, the processor 1000 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a GPGPU (general-purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1000 may be part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1006, and external memory (not shown) coupled to the set of integrated memory controller units 1014. The set of shared cache units 1006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring-based interconnect unit 1012 interconnects the integrated graphics logic 1008, the set of shared cache units 1006, and the system agent unit 1010/integrated memory controller units 1014, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1006 and the cores 1002A-N.

In some embodiments, one or more of the cores 1002A-N are capable of multithreading. The system agent 1010 includes those components coordinating and operating the cores 1002A-N. The system agent unit 1010 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed to regulate the power states of the cores 1002A-N and the integrated graphics logic 1008. The display unit is for driving one or more externally connected displays.

The cores 1002A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Example computer architecture

Figures 11-14 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptop computers, desktop computers, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a wide variety of systems or electronic devices capable of incorporating the processors and/or other execution logic disclosed herein are generally suitable.

Referring now to Figure 11, shown is a block diagram of a system 1100 in accordance with one embodiment of the present invention. The system 1100 may include one or more processors 1110, 1115, which are coupled to a controller hub 1120. In one embodiment, the controller hub 1120 includes a graphics memory controller hub (GMCH) 1190 and an input/output hub (IOH) 1150 (which may be on separate chips); the GMCH 1190 includes memory and graphics controllers to which a memory 1140 and a coprocessor 1145 are coupled; the IOH 1150 couples input/output (I/O) devices 1160 to the GMCH 1190. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), with the memory 1140 and the coprocessor 1145 coupled directly to the processor 1110, and with the controller hub 1120 and the IOH 1150 in a single chip.

The optional nature of the additional processor 1115 is denoted in Figure 11 with dashed lines. Each processor 1110, 1115 may include one or more of the processing cores described herein and may be some version of the processor 1000.

The memory 1140 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1120 communicates with the processors 1110, 1115 via a multi-drop bus such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1195.

In one embodiment, the coprocessor 1145 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like. In one embodiment, the controller hub 1120 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1110, 1115 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1110 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1110 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1145. Accordingly, the processor 1110 issues these coprocessor instructions (or control signals representing the coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1145. The coprocessor 1145 accepts and executes the received coprocessor instructions.

Referring now to Figure 12, shown is a block diagram of a first more specific exemplary system 1200 in accordance with one embodiment of the present invention.
As shown in Figure 12, a multiprocessor system 1200 is a point-to-point interconnect system and includes a first processor 1270 and a second processor 1280 coupled via a point-to-point interconnect 1250. Each of the processors 1270 and 1280 may be some version of the processor 1000. In one embodiment of the invention, the processors 1270 and 1280 are, respectively, the processors 1110 and 1115, while the coprocessor 1238 is the coprocessor 1145. In another embodiment, the processors 1270 and 1280 are, respectively, the processor 1110 and the coprocessor 1145.

The processors 1270 and 1280 are shown including integrated memory controller (IMC) units 1272 and 1282, respectively. The processor 1270 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1276 and 1278; similarly, the second processor 1280 includes P-P interfaces 1286 and 1288. The processors 1270, 1280 may exchange information via a P-P interface 1250 using point-to-point (P-P) interface circuits 1278, 1288. As shown in Figure 12, the IMCs 1272 and 1282 couple the processors to respective memories, namely a memory 1232 and a memory 1234, which may be portions of main memory locally attached to the respective processors.

The processors 1270, 1280 may each exchange information with a chipset 1290 via individual P-P interfaces 1252, 1254 using point-to-point interface circuits 1276, 1294, 1286, 1298. The chipset 1290 may optionally exchange information with the coprocessor 1238 via a high-performance interface 1239. In one embodiment, the coprocessor 1238 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, a compression engine, a graphics processor, a GPGPU, an embedded processor, or the like.

A shared cache (not shown) may be included in either processor, or external to both processors yet connected with the processors via a P-P interconnect, such that if a processor is placed into a low power mode, local cache information of either or both processors may be stored in the shared cache.

The chipset 1290 may be coupled to a first bus 1216 via an interface 1296. In one embodiment, the first bus 1216 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in Figure 12, various I/O devices 1214 may be coupled to the first bus 1216, along with a bus bridge that couples the first bus 1216 to a second bus 1220. In one embodiment, one or more additional processors 1215, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, for example, graphics accelerators or digital signal processor (DSP) units), field programmable gate arrays, or any other processor, are coupled to the first bus 1216. In one embodiment, the second bus 1220 may be a low pin count (LPC) bus. In one embodiment, various devices may be coupled to the second bus 1220, including, for example, a keyboard and/or mouse 1222, communication devices 1227, and a storage unit 1228 such as a disk drive or other mass storage device that may include instructions/code and data 1230. Further, an audio I/O 1224 may be coupled to the second bus 1220. Note that other architectures are possible.
For example, instead of the point-to-point architecture of Figure 12, a system may implement a multi-drop bus or other such architecture.

Referring now to Figure 13, shown is a block diagram of a second more specific exemplary system 1300 in accordance with one embodiment of the present invention. Like elements in Figures 12 and 13 bear like reference numerals, and certain aspects of Figure 12 have been omitted from Figure 13 in order to avoid obscuring other aspects of Figure 13.

Figure 13 illustrates that the processors 1270, 1280 may include integrated memory and I/O control logic ("CL") 1272 and 1282, respectively. Thus, the CL 1272, 1282 include integrated memory controller units and include I/O control logic. Figure 13 illustrates that not only are the memories 1232, 1234 coupled to the CL 1272, 1282, but also that I/O devices 1314 are coupled to the control logic 1272, 1282. Legacy I/O devices 1315 are coupled to the chipset 1290.

Referring now to Figure 14, shown is a block diagram of a SoC 1400 in accordance with an embodiment of the present invention. Like elements in Figure 10 bear like reference numerals. Also, dashed boxes are optional features on more advanced SoCs. In Figure 14, an interconnect unit 1402 is coupled to: an application processor 1410, which includes a set of one or more cores 1002A-N and the shared cache units 1006; the system agent unit 1010; the bus controller units 1016; the integrated memory controller units 1014; one or more coprocessors 1420, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1430; a direct memory access (DMA) unit 1432; and a display unit 1440 for coupling to one or more external displays. In one embodiment, the coprocessors 1420 include a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, an embedded processor, or the like.

Various embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a memory system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1230 illustrated in Figure 12, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with the processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language.
In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which instructions, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processors.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including: storage media such as hard disks; any other type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs); phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (Including Binary Translation, Code Morphing, Etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

Figure 15 is a block diagram illustrating the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set, according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 15 shows that a program in a high level language 1502 may be compiled using an x86 compiler 1504 to generate x86 binary code 1506 that may be natively executed by a processor 1516 having at least one x86 instruction set core.
The processor 1516 with at least one x86 instruction set core represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing: 1) a substantial portion of the instruction set of the Intel x86 instruction set core, or 2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1504 represents a compiler operable to generate x86 binary code 1506 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor 1516 having at least one x86 instruction set core. Similarly, Figure 15 shows that the program in the high level language 1502 may be compiled using an alternative instruction set compiler 1508 to generate alternative instruction set binary code 1510 that may be natively executed by a processor 1514 that does not have at least one x86 instruction set core (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, Calif. and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, Calif.). The instruction converter 1512 is used to convert the x86 binary code 1506 into code that may be natively executed by the processor 1514 without an x86 instruction set core. This converted code is not likely to be identical to the alternative instruction set binary code 1510, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1512 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1506.
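By way of illustration only, the following minimal sketch shows the table-driven flavor of translation that such an instruction converter applies in its simplest, one-for-one case; the two toy instruction sets, the opcode names, and the ToyInstructionConverter class are inventions of this sketch, not part of any actual converter described above.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ToyInstructionConverter {
    // Assumed one-to-one opcode mapping between two invented instruction sets.
    private static final Map<String, String> OPCODE_MAP = Map.of(
            "SRC_LOAD", "TGT_LDR",
            "SRC_ADD",  "TGT_ADD",
            "SRC_JMP",  "TGT_B");

    // Translates a whole "binary" (here, a list of mnemonics) ahead of time,
    // i.e., the static binary translation case; dynamic binary translation
    // would perform the same lookup lazily, at execution time.
    public static List<String> translate(List<String> source) {
        List<String> target = new ArrayList<>();
        for (String op : source) {
            String mapped = OPCODE_MAP.get(op);
            if (mapped == null) {
                // A real converter would fall back to emulation here.
                throw new IllegalArgumentException("unmapped opcode: " + op);
            }
            target.add(mapped);
        }
        return target;
    }

    public static void main(String[] args) {
        System.out.println(translate(List.of("SRC_LOAD", "SRC_ADD", "SRC_JMP")));
        // Prints: [TGT_LDR, TGT_ADD, TGT_B]
    }
}

Real converters must additionally handle instructions that expand to multi-instruction sequences, which is one reason the converted code is not identical to natively compiled alternative instruction set code.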
Components, features, and details described with respect to any of Figures 2-3 and 5 may optionally also be used in any of Figures 1 and 4. Moreover, components, features, and details described herein with respect to any of the apparatus may optionally also be used in any of the methods described herein, which in various embodiments may be performed by and/or with such apparatus.

In the description and claims, the terms "coupled" and "connected," along with their derivatives, may have been used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical and/or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. For example, a memory subsystem may be coupled with a decoder through one or more intervening units or logic, or a register write unit may be coupled with a register through one or more intervening units or logic. In the figures, bidirectional arrows are used to show bidirectional connections and couplings.

In the description and claims, the term "logic" may have been used. As used herein, logic may include hardware, firmware, software, or combinations thereof. Examples of logic include integrated circuitry, application specific integrated circuits, analog circuits, digital circuits, programmed logic devices, memory devices including instructions, and so on. In some embodiments, hardware logic may include transistors and/or gates, potentially along with other circuitry components. Logic may represent a module, component, unit, processor element, and the like.

In the foregoing description, for purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. The particular embodiments described are not provided to limit the invention but to illustrate it through example embodiments. The scope of the invention is to be determined not by the specific examples but solely by the claims. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form or without detail in order to avoid obscuring the description.

Where considered appropriate, reference numerals, or terminal portions of reference numerals, have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar or the same characteristics, unless specified or otherwise clearly apparent. Where multiple components have been described, they may generally be incorporated into a single component. Where a single component has been described, it may generally be partitioned into multiple components.

Various operations and methods have been described. Some of the methods have been described in a relatively basic form in the flow diagrams, but operations may optionally be added to and/or removed from the methods. In addition, while the flow diagrams show a particular order of the operations according to example embodiments, that particular order is exemplary. Alternate embodiments may optionally perform the operations in different order, combine certain operations, overlap certain operations, and so forth.

Certain embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-readable medium. The medium may include a mechanism that provides, for example stores, information in a form that is readable by the machine. The machine-readable medium may provide, or have stored thereon, one or more instructions that, if and/or when executed by a machine, are operable to cause the machine to perform and/or result in the machine performing one or more of the operations, methods, or techniques disclosed herein. Examples of suitable machines include, but are not limited to, processors, instruction processing apparatus, digital logic circuits, integrated circuits, and the like. Still other examples of suitable machines include computing devices and other electronic devices that incorporate such processors, instruction processing apparatus, digital logic circuits, or integrated circuits.
Examples of such computing devices and electronic devices include, but are not limited to, desktop computers, laptop computers, notebook computers, tablet computers, netbooks, smartphones, cellular phones, servers, network devices (e.g., routers and switches), Mobile Internet Devices (MIDs), media players, smart televisions, nettops, set-top boxes, and video game controllers.

In some embodiments, the machine-readable medium may include a tangible and/or non-transitory machine-readable storage medium. For example, the tangible and/or non-transitory machine-readable storage medium may include a floppy diskette, an optical storage medium, an optical disk, an optical data storage device, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable-and-programmable ROM (EPROM), an electrically-erasable-and-programmable ROM (EEPROM), a random access memory (RAM), a static-RAM (SRAM), a dynamic-RAM (DRAM), a Flash memory, a phase-change memory, a phase-change data storage material, a non-volatile memory, a non-volatile data storage device, a non-transitory memory, a non-transitory data storage device, and the like. A non-transitory machine-readable storage medium does not consist solely of a transitorily propagating signal.

It should also be appreciated that reference throughout this specification to "one embodiment," "an embodiment," or "one or more embodiments," for example, means that a particular feature may be included in the practice of the invention. Similarly, it should be appreciated that in the description various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects may lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of the invention.

Example Embodiments

The following examples pertain to further embodiments. Specifics in these examples may be used anywhere in one or more embodiments.

Example 1 is a processor that includes a register having a width, and a decode unit to receive a partial width load instruction. The partial width load instruction is to indicate a memory location of a memory as a source operand and is to indicate the register as a destination operand. The processor also includes a memory subsystem coupled with the decode unit. The memory subsystem, in response to the partial width load instruction, is to load data from the indicated memory location to the processor. The processor also includes a register write unit coupled with the memory subsystem and with the register. The register write unit, in response to the partial width load instruction, is to write at least a portion of the loaded data to a partial width of the register, and is to finish writing the register with a set of bits, stored in a remaining width of the register, whose bit values depend on a partial width load mode of the processor.
The partial width load instruction does not indicate the partial width load mode.

Example 2 includes the processor of Example 1, and optionally in which the register write unit is to: write all zeros, as the set of bits, to the remaining width of the register when in the partial width load mode; and not write all zeros, as the set of bits, to the remaining width of the register when in a second partial width load mode.

Example 3 includes the processor of Example 2, and optionally in which the register write unit is to write sign extension bits, as the set of bits, to the remaining width of the register when in the second partial width load mode.

Example 4 includes the processor of Example 2, and optionally in which the register write unit is to, when in the second partial width load mode, finish writing the register with the set of bits that were originally located in the remaining width of the register, i.e., that were stored there before the decode unit received the partial width load instruction.

Example 5 includes the processor of any preceding Example, and optionally further including at least one bit to indicate the partial width load mode, in which the processor is to access the at least one bit to determine the partial width load mode and to select a corresponding way of determining the bit values of the set of bits.

Example 6 includes the processor of any preceding Example, and optionally in which the at least one bit is in a register of the processor that is visible to applications.

Example 7 includes the processor of any preceding Example, and optionally in which the register is at least as wide as memory addresses used by the processor to access the memory, and in which the partial width of the register is only a fraction (e.g., half) of the width of the register.

Example 8 includes the processor of any preceding Example, and optionally in which the processor is a 64-bit architecture processor that uses 64-bit memory addresses and the partial width of the register is 32 bits, or the processor is a 128-bit architecture processor that uses 128-bit memory addresses and the partial width of the register is 64 bits.

Example 9 includes the processor of any preceding Example, and optionally in which the processor allows user-level applications to change the partial width load mode.

Example 10 includes the processor of any preceding Example, and optionally in which the processor allows at least one of an operating system and a virtual machine monitor to change the partial width load mode, but does not allow user-level applications to change the partial width load mode.

Example 11 includes the processor of any preceding Example, and optionally in which the processor has an instruction set that includes an instruction to change the partial width load mode.

Example 12 includes the processor of Example 1, and optionally in which the register write unit is to, when in the partial width load mode, write zeros, as the set of bits, to the remaining width of the register, and in which the processor optionally further includes means for changing the partial width load mode to a second partial width load mode upon one of an interrupt and a transition from an application to an operating system.

Example 13 is a method performed by a processor. The method includes receiving a partial width load instruction.
The partial width load instruction indicates a memory location of a memory as a source operand and indicates a register as a destination operand. The method includes loading data from the indicated memory location to the processor in response to the partial width load instruction. The method includes writing at least a portion of the loaded data to a partial width of the register in response to the partial width load instruction. The method includes finishing writing the register with a set of bits, stored in a remaining width of the register, having bit values that depend on a partial width load mode of the processor. The partial width load instruction does not indicate the partial width load mode.

Example 14 includes the method of Example 13, and optionally in which finishing writing the register includes writing zeros, as the set of bits, to the remaining width of the register when in the partial width load mode. Optionally, when in a second, different partial width load mode, sign extension bits are written to the remaining width of the register instead of the zeros.

Example 15 includes the method of Example 13, and optionally in which finishing writing the register includes writing zeros, as the set of bits, to the remaining width of the register when in the partial width load mode. Optionally, when in a second, different partial width load mode, the set of bits originally located in the remaining width is kept stored in the remaining width of the register.

Example 16 includes the method of any preceding Example, and optionally further includes accessing at least one bit to determine the partial width load mode. The method may also optionally include selecting a way of determining the bit values of the set of bits that corresponds to the determined partial width load mode.

Example 17 includes the method of any preceding Example, and optionally in which accessing the at least one bit includes accessing at least one bit in an application-visible register of the processor.

Example 18 includes the method of any preceding Example, and optionally in which writing to the partial width of the register includes writing to only a subset (e.g., half) of the width of the register. Optionally, the register is at least as wide as memory addresses used by the processor to access the memory.

Example 19 includes the method of any preceding Example, and optionally in which writing to the partial width of the register includes one of the following: optionally, writing 32 bits of the register, with zeros written to the remainder of the register, where the memory addresses used by the processor to access the memory are 64 bits; or optionally, writing 64 bits of the register, with zeros written to the remainder of the register, where the memory addresses used by the processor to access the memory are 128 bits.

Example 20 includes the method of any preceding Example, and optionally further includes receiving a control signal from a user-level application to change the partial width load mode, and changing the partial width load mode after receiving the control signal.

Example 21 includes the method of any preceding Example, and optionally further includes receiving a control signal from one of an operating system and a virtual machine monitor to change the partial width load mode, and changing the partial width load mode after receiving the control signal from the one of the operating system and the virtual machine monitor.
The method may also optionally include preventing user-level applications from changing the partial width load mode.

Example 22 includes the method of any preceding Example, and optionally further includes changing the partial width load mode in response to a user-level instruction of an instruction set of the processor.

Example 23 is an article of manufacture that includes a non-transitory machine-readable storage medium storing a set of instructions that, if executed by a machine, are operable to cause the machine to perform operations. The operations include examining metadata of a software module, including examining an indication of a partial width load mode of a processor that is to be used by the processor to execute a partial width load instruction. The partial width load instruction is to indicate a memory location as a source operand and is to indicate a register as a destination operand. The operations further include changing the partial width load mode of the processor to the indicated partial width load mode. The partial width load mode is changed in order to control the bit values to be stored by the processor in the portion of the indicated register, outside the partial width, that is not used to store data loaded from memory.

Example 24 includes the article of Example 23, and optionally in which the set of instructions further includes instructions that, if executed by the machine, are operable to cause the machine to perform operations including: maintaining metadata indicating which different software modules are to use which different partial width load modes, including which software module is to use the indicated partial width load mode; and, after handling an interrupt, changing the partial width load mode of the processor back to the indicated partial width load mode in conjunction with a transition back to execution of code of the software module.

Example 25 is a system to process instructions that includes an interconnect, a dynamic random access memory (DRAM) coupled with the interconnect, and a processor coupled with the interconnect. The processor includes a register having a width, and a decode unit to receive an instruction. The instruction is to indicate a memory location of the DRAM as a source operand and is to indicate the register as a destination operand. The processor also includes a memory subsystem coupled with the decode unit. The memory subsystem is to load data from the indicated memory location in response to the instruction. The processor also includes a unit coupled with the memory subsystem and with the register. The unit, in response to the instruction, is to write at least a portion of the loaded data to a partial width of the register, and is to finish writing the register with a set of bits, stored in a remaining width of the register, whose bit values depend on a mode of the processor. Optionally, the instruction does not indicate the mode.

Example 26 includes the system of Example 25, and optionally in which the unit is to write zeros, as the set of bits, to the remaining width of the register when in the mode. Optionally, when in a second mode, the unit is not to write zeros, as the set of bits, to the remaining width of the register.

Example 27 is a processor that includes means for receiving a partial width load instruction.
The partial width load instruction is to indicate a memory location of a memory as a source operand and is to indicate a register as a destination operand. The processor also includes means for loading data from the indicated memory location to the processor in response to the partial width load instruction. The processor also includes means for writing at least a portion of the loaded data to a partial width of the register in response to the partial width load instruction. The processor also includes means for finishing writing the register with a set of bits, stored in a remaining width of the register, having bit values that depend on a partial width load mode of the processor. Optionally, the partial width load instruction does not indicate the partial width load mode.

Example 28 includes the processor of Example 27, and optionally in which the means for finishing writing the register includes means for writing zeros, as the set of bits, to the remaining width of the register when in the partial width load mode.

Example 29 is a machine-readable storage medium storing instructions that, if executed by a machine, cause the machine to perform the method of any of Examples 13-22.

Example 30 is a processor to perform the method of any of Examples 13-22.

Example 31 is a processor including means for performing the method of any of Examples 13-22.

Example 32 is a processor including integrated circuitry and/or logic and/or units and/or components and/or modules, or any combination of the foregoing, to perform the method of any of Examples 13-22.

Example 33 is a computer system including a dynamic random access memory (DRAM) and a processor to perform the method of any of Examples 13-22.

Example 34 is a processor to perform at least one operation or method substantially as described herein.

Example 35 is a processor including means for performing at least one operation or method substantially as described herein.

Example 36 is a processor to execute or perform an instruction substantially as described herein.

Example 37 is a processor including means for executing an instruction substantially as described herein.
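To make the three register-fill behaviors recited in Examples 2-4 and 14-15 above concrete, the following minimal sketch models them in software for a 32-bit load into a 64-bit register. It is an illustrative interpretation only; the enum constants, class name, and method name are inventions of this sketch and not part of the claimed processor.

public class PartialWidthLoadModel {
    enum Mode { ZERO_EXTEND, SIGN_EXTEND, PRESERVE_UPPER }

    // oldValue: prior 64-bit register contents; loaded: 32 bits from memory.
    static long writeRegister(long oldValue, int loaded, Mode mode) {
        long lower = loaded & 0xFFFF_FFFFL;        // the partial width written
        switch (mode) {
            case ZERO_EXTEND:                      // remaining width gets zeros
                return lower;
            case SIGN_EXTEND:                      // remaining width gets copies of bit 31
                return (long) loaded;
            case PRESERVE_UPPER:                   // remaining width keeps its old bits
                return (oldValue & 0xFFFF_FFFF_0000_0000L) | lower;
            default:
                throw new AssertionError(mode);
        }
    }

    public static void main(String[] args) {
        long old = 0x1111_2222_3333_4444L;
        int loaded = 0x8000_0001;                  // negative if sign-extended
        System.out.printf("%016X%n", writeRegister(old, loaded, Mode.ZERO_EXTEND));    // 0000000080000001
        System.out.printf("%016X%n", writeRegister(old, loaded, Mode.SIGN_EXTEND));    // FFFFFFFF80000001
        System.out.printf("%016X%n", writeRegister(old, loaded, Mode.PRESERVE_UPPER)); // 1111222280000001
    }
}

The point of the mode bit in the Examples is precisely that the same load instruction encoding yields one of these three results, selected by processor state rather than by the instruction itself.

|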
A system and method are disclosed for using a computer with a first operating system to access and perform operations on a second computer having a different operating system, by means of a web-based adapter routine. A Java console accesses the web-based adapter routine, which implements a set of Java based APIs to perform CIM operations. The adapter routine, in conjunction with a Java Native Interface and a CIM-to-WMI mapper, enables CIM operations to be performed on a managed server having, for example, a Microsoft operating system, or one using XML based communications.
1. A system comprising: a processor; a machine accessible medium in communication with the processor; and instructions encoded in the machine accessible medium, wherein the instructions, when executed by the processor, cause the system to perform operations comprising: receiving a Common Information Model (CIM) communication from a JAVA based console; in response to receiving the CIM communication from the JAVA based console, automatically converting the CIM communication into a Windows Management Instrumentation (WMI) communication; and communicating the WMI communication to a managed server that uses a MICROSOFT operating system, the communicating to occur via a WMI application program interface (API) of the managed server.

2. A system according to claim 1, wherein: the CIM communication from the JAVA based console is received at an intermediate server; and the system further comprises the managed server, the managed server to use a MICROSOFT Component Object Model (COM) based model to process the WMI communication.

3. A system according to claim 2, wherein the MICROSOFT COM based model comprises a model selected from the group consisting of: a Component Object Model (COM); and a Distributed Component Object Model (DCOM).

4. A system according to claim 1, further comprising: a client system to host the JAVA based console.

5. A system according to claim 1, wherein the operation of converting the CIM communication to a WMI communication comprises: converting the CIM communication to a CIM/WMI communication, wherein the CIM/WMI communication is compatible with a WMI API and a CIM format.

6. A system according to claim 1, wherein the operation of converting the CIM communication to a WMI communication comprises: mapping a method that is not compatible with WMI to a corresponding method that is compatible with WMI.

7. A system according to claim 1, wherein: the operation of receiving a CIM communication from a JAVA based console comprises receiving the CIM communication from a client processing system that hosts the JAVA based console; and the CIM communication from the JAVA based console is received at an intermediate server.

8. A system according to claim 1, wherein: the CIM communication from the JAVA based console is received via a network selected from the group consisting of a local area network (LAN), a wide area network (WAN), and an Internet.

9. An apparatus comprising: a machine accessible medium; and instructions encoded in the machine accessible medium, wherein the instructions, when executed by a processing system, cause the processing system to perform operations comprising: receiving a Common Information Model (CIM) communication from a JAVA based console; in response to receiving the CIM communication from the JAVA based console, automatically converting the CIM communication into a Windows Management Instrumentation (WMI) communication; and communicating the WMI communication to a managed server that uses a MICROSOFT operating system, the communicating to occur via a WMI application program interface (API) of the managed server.

10. An apparatus according to claim 9, wherein the operation of converting the CIM communication to a WMI communication comprises: converting the CIM communication to a CIM/WMI communication, wherein the CIM/WMI communication is compatible with a WMI API and a CIM format.
11. An apparatus according to claim 9, wherein the operation of converting the CIM communication to a WMI communication comprises: mapping a method that is not compatible with WMI to a corresponding method that is compatible with WMI.

12. An apparatus according to claim 9, wherein: the processing system comprises an intermediate server; and the operation of receiving a CIM communication from a JAVA based console comprises receiving the CIM communication from a client processing system that hosts the JAVA based console.

13. An apparatus according to claim 9, wherein the processing system receives the CIM communication from the JAVA based console via a network selected from the group consisting of a local area network (LAN), a wide area network (WAN), and an Internet.

14. An apparatus according to claim 9, wherein the instructions implement an adapter program comprising: a CIM to WMI mapper, a JAVA native interface (JNI), and a WMI interface.

15. An apparatus according to claim 14, wherein the instructions comprise JAVA code.

16. A method comprising: receiving, at a processing system, a Common Information Model (CIM) communication from a JAVA based console; in response to receiving the CIM communication from the JAVA based console, automatically converting the CIM communication into a Windows Management Instrumentation (WMI) communication; and communicating the WMI communication to a managed server that uses a MICROSOFT operating system, the communicating to occur via a WMI application program interface (API) of the managed server.

17. A method according to claim 16, wherein the operation of converting the CIM communication into a WMI communication comprises: converting the CIM communication to a CIM/WMI communication, wherein the CIM/WMI communication is compatible with a WMI API and a CIM format.

18. A method according to claim 16, wherein the operation of converting the CIM communication to a WMI communication comprises: mapping a method that is not compatible with WMI to a corresponding method that is compatible with WMI.

19. A method according to claim 16, wherein: the processing system comprises an intermediate server; and the operation of receiving a command from a JAVA based console comprises receiving the command from a client processing system that hosts the JAVA based console.

20. A method according to claim 16, wherein the operation of receiving the CIM communication from the JAVA based console comprises: receiving the CIM communication via a network selected from the group consisting of a local area network (LAN), a wide area network (WAN), and an Internet. |
FIELD OF THE INVENTION

The invention described herein relates generally to methods for accessing hardware and software component information. Specifically, the invention relates to methods and apparatuses for accessing hardware and software component information remotely via a large, distributed public computer network, such as the Internet.

BACKGROUND OF THE INVENTION

Efforts exist in the computer server industry to develop standards for accessing components in a computer system, such as peripherals or boards in a computer. The focus of these efforts is to create manageable hardware building blocks that share data through a standard interface. One goal of developing such standards is to enable a plug-and-play type architecture for hardware similar to that which is available for software.

Several standards, such as the Desktop Management Interface (DMI), the Common Information Model (CIM), and Windows Management Instrumentation (WMI), define standard frameworks by which management data is accessed through operating system-based services.

One methodology for managing systems and networks has emerged, termed Web Based Enterprise Management (WBEM). With WBEM, both browsers and applications can be used to access information that is made available in network standard formats, such as HTML and XML. Built into Windows 98 and 2000, WBEM uses CIM as the database for information about computer systems and network devices.

Notwithstanding the significant strides that have been made with regard to technologies for system and network management, compatibility between the various system technologies and operating systems is lacking. For example, in the current server management technology environment, servers operating with the Microsoft system management infrastructure are compatible with Distributed Component Object Model (DCOM) based access to the servers. In order to access these servers, an interface accessing the server must also run on the Microsoft Operating System (OS). Thus, for example, a user interface based on the Microsoft OS would be incompatible with non-Microsoft OS based consoles, such as a Java based console.

This lack of compatibility is particularly disadvantageous in the WBEM context where, for example, a user desires to manage data in both a Java-based WBEM and a Microsoft WMI environment using a single Java-based console.

The present invention is therefore directed to the problem of developing a method and apparatus for accessing hardware component information using a Java console.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts an exemplary embodiment of a remote management application in accordance with the invention, in which a Java-based console communicates with a Windows based Management Infrastructure.

FIG. 2 depicts a schematic showing how a Java Native Interface (JNI) ties the C language side of an application to the Java side.

FIG. 3 depicts a second exemplary embodiment of a remote management application in accordance with the invention.

DETAILED DESCRIPTION

The present invention solves the above-mentioned problem and others by providing a Java-based user interface that is operating system (OS) independent, and that permits communication with a server based on a Microsoft OS.
The Common Information Model (CIM) is used to achieve the commonality between these two disparate technologies.

It is worthy to note that any reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

The embodiments of the invention enable remotely executing Java based programs to access a server containing applications written in other languages and to upload component management information from the server. For example, one possible implementation of the invention includes a Web based Platform Instrumentation Console (WPIC) executing in Java on a computer connected to the Internet. A second computer, also connected to the Internet via a point-of-presence (POP) server, for example, hosts the hardware and software components that the WPIC seeks to access. The operating system of the second computer controls the operation of the hardware and software components.

A web based user interface executing on a server, to which the hardware and software components sought to be accessed are coupled, serves as an intermediary between the WPIC and the hardware and software components to be accessed. The format in which data is transferred from the hardware and software components to the web based user interface is not compatible with the WPIC. The web based user interface converts data from the server being accessed into Java format, which is acceptable to the WPIC. The user interface also converts commands from the WPIC into the predetermined format required by the hardware and software components being accessed, such as, for example, WMI or XML.

The invention thereby enables the WPIC to interact with the hardware and software components being accessed in a manner that was heretofore not possible.

As used herein, the term "server" is used in the singular for simplicity of description; however, a "server" may be embodied as a plurality of data processing machines that form a common hardware platform. As is consistent with usage in the art, these plural server machines are referred to collectively as a "server" (singular).

The processor controlling the computers described herein can be a general-purpose microprocessor, such as a Pentium series microprocessor manufactured by the Intel Corporation of Santa Clara, Calif.

The memory for the computers described herein can be any device capable of storing analog or digital information, such as a hard disk, Random Access Memory (RAM), Read Only Memory (ROM), a compact disk, a magnetic tape, a floppy disk, and any combination thereof.

Each of the computers described herein includes an input/output (I/O) device, which can be an audio and/or visual device including, for example, a monitor, display, keyboard, keypad, touch pad, pointing device, microphone, speaker, video camera, camera, scanner, printer and/or port to which an I/O device can be attached or connected.

Alternatively, the input/output device can be a Graphical User Interface (GUI), which includes, for example, a web browser executing on the computer accessing web pages from a server over a computer network such as the Internet. The GUI can also include a mouse, a display and a memory, such as a video memory in the computer.
The combination of these elements enables the user to select and navigate through various web pages, as required, to access the user interface used in the embodiments.

Exemplary Embodiment of a Java CIM Interface for WMI via COM/DCOM

FIG. 1 depicts a block diagram of an OS independent system for accessing hardware and software components in a managed server 20 via a web-based adapter 14.

In a preferred embodiment of the present invention, a Java WPIC 11 accesses the Internet in conventional fashion, such as via a point of presence (POP) server 12, which may be accessed via a telephone modem, cable modem, local area network, intranet or other connection 21. The server 12 may in turn access another server 13 or servers prior to finally connecting to the server on which the client adapter 14 is executing.

The CIM Client Adapter 14, written in Java, implements a set of APIs used by the WPIC 11 to perform CIM operations, such as adding, modifying or deleting a CIM class, CIM instance, and CIM qualifier type in a namespace.

One of the objectives of the present invention is to have the capability to access a Microsoft CIM Object Manager (CIMOM). Since WMI does not support Java programming, to communicate with a Microsoft CIMOM the invention provides a Java/WMI Native Interface 15. A wrapper is provided around the WMI and is implemented through a Java Native Interface (JNI) 16. CIM operations are performed by the WMI interface 15, and communications with the managed server 20 are performed by a CIM/WMI mapper 17.

CIM data is transmitted from the CIM/WMI mapper 17 to the managed server 20 via a modem, local area network, parallel, serial, or other connection 22.

This embodiment includes, inter alia, the CIM Client Adapter 14 and the CIM/WMI mapper 17.

CIM Client Adapter

The CIM Client Adapter 14, written in JAVA, implements a set of APIs that allow the JAVA console 11 to perform CIM operations. The adapter thereby allows a web-enabled interface to the server 20 sought to be accessed. In implementing the APIs from the JAVA console 11, the CIM Client Adapter 14 supports the following methods:

1. open( )
2. close( )
3. getUserLevel( )
4. getInstance( )
5. enumInstances( )
6. setInstance( )
7. invokeMethod( )
8. getClass( )
9. execQuery( )
10. deleteInstance( )
11. createInstance( )

A description of these methods follows.

1. open( )
Creates a new client connection to the CIM Object Manager on the specified host and namespace, using the specified user name and password.
public void open(java.util.Hashtable connectInfo) throws CIMException
Parameters:
connectInfo [in]: key and value pairs used to establish the connection with the CIM Object Manager. The method will look for the following keys: HOST_NAME, the host address of the server at which the CIMOM is running; USER_ID, the user account to connect to the CIMOM; USER_PASSWORD, the user password; and NAMESPACE, the namespace in which operations will be performed.
Return Values: void.
Throws: CIMException if the connection failed.

2. close( )
Closes the client connection to the CIM Object Manager. This interface frees resources used for the client session.
public synchronized void close( ) throws CIMException
Parameters: N/A.
Return Values: N/A.
Throws: CIMException if the client session does not exist.

3. getUserLevel( )
Returns the user access level.
public int getUserLevel( );
Parameters: N/A.
Return Values: USERLEVEL_READ_WRITE if the user has read/write permission; USERLEVEL_READONLY if the user has only read permission.
4. getInstance( )
Gets the CIM instance for the specified CIM object path.
public synchronized CIMInstance getInstance(CIMObjectPath name, boolean localOnly) throws CIMException
Parameters:
name: the CIM object path that identifies this CIM instance.
localOnly: if true, only the non-inherited properties are returned; otherwise all properties are returned.
Returns: CIMInstance, the CIM instance identified by the CIM object path.
Throws: CIMException if the specified CIMObjectPath cannot be found.

5. enumInstances( )
Returns all instances (the whole instance, and not just the names) belonging to the class specified in the path. This could include instances of all the classes in the specified class' hierarchy.
public synchronized Enumeration enumInstances(CIMObjectPath path, boolean deep, boolean localOnly) throws CIMException
Parameters:
path: the CIMObjectPath identifying the class whose instances are to be enumerated.
deep: if set to CIMClient.DEEP, the enumeration returned will contain the names of all instances of the specified class and all classes derived from it. If set to CIMClient.SHALLOW, only names of instances belonging to the specified class are returned.
localOnly: if true, only the non-inherited properties are returned; otherwise all properties are returned.
Returns: an Enumeration of CIMInstance.
Throws: CIMException if the object cannot be found.

6. setInstance( )
Invokes the object manager on this client to modify the specified CIM instance in the specified namespace.
public synchronized void setInstance(CIMObjectPath name, CIMInstance ci) throws CIMException
Parameters:
name: the CIM object path that identifies the CIM instance to be modified.
ci: the CIM instance to be modified.
Throws: CIMException if the instance cannot be found.

7. invokeMethod( )
Executes the specified method on the specified object. A method is a declaration containing the method name, return type, and parameters in the method.
public synchronized CIMValue invokeMethod(CIMObjectPath name, String methodName, Vector inParams, Vector outParams) throws CIMException
Parameters:
name: the CIM object path that identifies the method.
methodName: the string name of the method to be invoked.
inParams: the input parameters, specified as a vector of CIMValue.
outParams: the output parameters; the CIMValue of these parameters will be appended to the outParams vector.
Returns: CIMValue, the return value of the method. If the method returns nothing, the return value will be null.
Throws: CIMException if the specified method cannot be found.

8. getClass( )
Gets the CIM class for the specified CIM object path.
public synchronized CIMClass getClass(CIMObjectPath name, boolean localOnly) throws CIMException
Parameters:
name: the CIMObjectPath that identifies the CIM class.
localOnly: if true, only the non-inherited properties and methods are returned.
Returns: CIMClass, the CIM class identified by the CIMObjectPath.
Throws: CIMException if the namespace or the model path identifying the object cannot be found.

9. execQuery( )
Executes a WQL query to retrieve objects.
public java.util.Enumeration execQuery(CIMObjectPath relNS, java.lang.String query, int ql) throws CIMException
Throws: CIMException.

10. deleteInstance( )
Deletes the CIM instance specified by the CIM object path, a name that uniquely identifies a CIM object.
public synchronized void deleteInstance(CIMObjectPath path) throws CIMException
Parameters:
path: the CIMObjectPath identifying the CIM instance to delete.
Throws: CIMException if the CIM instance does not exist.

11. createInstance( )
Invokes the object manager on this client to add the specified CIM instance to the specified namespace.
public synchronized void createInstance(CIMObjectPath name, CIMInstance ci) throws CIMException
Parameters:
name: the CIM object path that identifies the CIM instance to be added.
ci: the CIM instance to be added.
Throws: CIMException if the CIM instance already exists in the namespace.
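For orientation, the following hedged sketch shows how a console might drive the adapter methods documented above. The CIMClientAdapter class name follows the mapper section below; the adapter, CIMObjectPath, and CIMException types are assumed to come from the adapter library itself, and the constructor call, connection values, and CIM class name are assumptions of this sketch rather than part of the specification.

import java.util.Enumeration;
import java.util.Hashtable;

public class ConsoleExample {
    public static void main(String[] args) {
        CIMClientAdapter adapter = new CIMClientAdapter(); // assumed constructor

        Hashtable<String, String> connectInfo = new Hashtable<>();
        // Keys taken from the open( ) documentation above; values are examples.
        connectInfo.put("HOST_NAME", "managed-server.example.com");
        connectInfo.put("USER_ID", "admin");
        connectInfo.put("USER_PASSWORD", "secret");
        connectInfo.put("NAMESPACE", "root/cimv2");

        try {
            adapter.open(connectInfo);
            // Enumerate all instances of a CIM class, including subclasses
            // (deep = true) and with all properties (localOnly = false).
            CIMObjectPath path = new CIMObjectPath("CIM_ComputerSystem"); // assumed constructor
            Enumeration instances = adapter.enumInstances(path, true, false);
            while (instances.hasMoreElements()) {
                System.out.println(instances.nextElement());
            }
        } catch (CIMException e) {
            e.printStackTrace();
        } finally {
            try { adapter.close(); } catch (CIMException ignored) { }
        }
    }
}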
CIM/WMI Mapper

Communications with a Microsoft WBEM implementation require compatibility with WMI. Since WMI does not support JAVA programming, the invention achieves compatibility with the Microsoft OS based server by implementing the Java WMI interface 15 and the CIM/WMI mapper 17 operating in conjunction with the JNI 16. Communication between the CIM Client Adapter 14, the WMI interface 15, the JNI 16, the CIM/WMI mapper 17 and the managed server 20 is achieved via connection 22. Windows management APIs 18 facilitate communication with a CIMOM 19 and a CIM repository 23, using COM/DCOM interfaces 24 as the access mechanism to the CIMOM 19. The CIM/WMI mapper of the invention supports the following CIM operations:

1. CIMClientAdapter::open( ). Corresponding CIM/WMI method: IWbemLocator
2. CIMClientAdapter::close( ). Corresponding CIM/WMI method: N/A
3. CIMClientAdapter::getUserLevel( ). Corresponding CIM/WMI method: N/A
4. CIMClientAdapter::getInstance( ). Corresponding CIM/WMI method: IWbemServices::GetObject( )
5. CIMClientAdapter::enumInstances( ). Corresponding CIM/WMI method: IWbemServices::GetObject( )
6. CIMClientAdapter::setInstance( ). Corresponding CIM/WMI method: IWbemServices::PutInstance( )
7. CIMClientAdapter::invokeMethod( ). Corresponding CIM/WMI method: IWbemServices::ExecMethod( )
8. CIMClientAdapter::getClass( ). Corresponding CIM/WMI method: IWbemServices::GetObject( )
9. CIMClientAdapter::execQuery( ). Corresponding CIM/WMI method: IWbemServices::ExecQuery( )
10. CIMClientAdapter::deleteInstance( ). Corresponding CIM/WMI method: IWbemServices::DeleteInstance( )

Java Native Interface

The Java Native Interface (JNI) is the native programming interface for Java. The JNI allows Java code to be portable across various platforms. The JNI framework permits the use of native methods to perform many operations. Native methods may represent legacy applications, or they may be written explicitly to solve a problem that is best handled outside of the Java programming environment.

In the present invention, the JNI is used to make the library of the managed server 20 accessible to Java code. The JNI allows code that runs within a Java Virtual Machine (VM) to operate with applications and libraries written in other languages, such as C and C++, and allows the JNI to be embedded into native applications. The Java Virtual Machine is responsible for interpreting Java byte code and translating it into actions or operating system calls.

With reference to FIG. 2, an application 100 typical of the invention is depicted. A JNI 101 is implemented in conjunction with a VM 102 to serve as a translator between Exceptions 103 and Classes 104 on the Java side and Functions 105 and Libraries 106, written in C, on the managed server side.
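As a concrete illustration of the JNI wrapper pattern just described, the Java side of such a wrapper might declare the WMI-facing operations as native methods and load the C/C++ bridge library at class initialization. The class, method, and library names here are hypothetical, chosen only for this sketch; the native implementations would forward the calls to the COM/DCOM WMI APIs listed above.

public class WmiNative {
    static {
        // Loads libwmiwrapper.so / wmiwrapper.dll from the library path
        // ("wmiwrapper" is an assumed library name for this sketch).
        System.loadLibrary("wmiwrapper");
    }

    // Native counterparts of the adapter operations; implemented in C/C++.
    public static native long connect(String host, String namespace,
                                      String user, String password);
    public static native String getInstance(long session, String objectPath);
    public static native void close(long session);
}

The VM resolves each native declaration to a function exported by the bridge library, which is how the Java adapter code reaches libraries written in C, as depicted in FIG. 2.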
Exemplary Embodiment of a Java CIM Interface for WMI via XML

Embodiments of the invention are also applicable to management infrastructures on operating systems using XML based communications.

In a second embodiment of the present invention, and with reference to FIG. 3, a Java WPIC 301 accesses the Internet in conventional fashion, such as via a point of presence (POP) server 303, which may be accessed via a telephone modem, cable modem, local area network, intranet or other connection 302. The server 303 may in turn access another server 304 or servers prior to finally connecting to the server on which the client adapter 305 is executing.

The CIM Client Adapter 305, written in Java, implements a set of APIs used by the WPIC 301 to perform CIM operations, such as adding, modifying or deleting a CIM class, CIM instance, and CIM qualifier type in a namespace.

To communicate with a WBEM system operating on a server that does not support Java programming, the invention provides a Java interface 306. A wrapper is provided around the WBEM system and is implemented through a Java Native Interface (JNI) 307. CIM operations are performed by the WBEM interface 306, and communications with the managed server 311 are performed by a CIM/WBEM mapper 308. The CIM Client Adapter 305, Java WBEM interface 306, JNI 307, and CIM/WBEM mapper 308 operate in a similar fashion to the first embodiment described above and depicted in FIG. 1.

Similarly, CIM data is transmitted from the CIM/WBEM mapper 308 to the managed server 311, comprising WBEM management APIs 310, a CIMOM 312 and a CIM repository 313, via a modem, local area network, parallel, serial, or other connection 309. In this embodiment, however, the WBEM APIs 310 facilitate communication with the CIMOM 312 and the CIM repository 313 via XML instead of COM/DCOM.

Although various embodiments are specifically illustrated and described herein, it will be appreciated that modifications and variations of the invention are covered by the above teachings and are within the purview of the appended claims without departing from the spirit and intended scope of the invention. For example, while the embodiments depict the use of specific data management and interface standards, other data management standards and interfaces will suffice. Moreover, while specific programs and protocols are included, other protocols (including subsequently developed protocols) may be sufficient to implement the embodiments described herein. These examples should not be interpreted to limit the modifications and variations of the invention covered by the claims, but are merely illustrative of possible variations. |
Disclosed are integrated circuits (700) having multiple electromagnetically emissive devices, such as LC oscillators (702, 704). The devices are formed on an integrated circuit substrate and are given different planar orientations from each other. Particular integrated circuit packages disclosed are "flip-chip" packages, in which solder bumps (604) are provided on the integrated circuit substrate for flipping and mounting of the finished integrated circuit upon a printed circuit board or other substrate. The solder bumps provide conductive connections between the integrated circuit and the substrate. The orientations and positioning of the emissive devices are such that one or more of the solder bumps (604) are interposed between neighboring emissive devices (702, 704) to act as an electromagnetic shield between them.
CLAIMS

1. An integrated circuit comprising: a substrate having a surface; a first electromagnetically emissive semiconductor device formed on the substrate; a second electromagnetically emissive semiconductor device formed on the substrate; and a plurality of elements located on the surface of the substrate, the elements comprising material for at least partially shielding against cross-coupling, due to electromagnetic emissions, of the first and second electromagnetically emissive semiconductor devices.

2. A circuit according to claim 1, wherein the electromagnetically emissive semiconductor devices are LC oscillators.

3. A circuit according to claim 1 or 2, wherein the plurality of elements are elements for providing electrical connection of circuits on the integrated circuit with circuits external to the integrated circuit.

4. A circuit according to claim 3, wherein the elements are solder bumps.

5. A circuit according to claim 4, wherein the integrated circuit is a flip-chip integrated circuit and wherein the plurality of elements are solder bumps that are used to mount the flip-chip integrated circuit to a printed circuit board or other circuit substrate.

6. A circuit according to any of claims 1 - 5, wherein the first semiconductor device is a macro circuit subcomponent cell having a first planar orientation on the substrate, and the second semiconductor device is a like macro circuit subcomponent cell having a second planar orientation that is 180-degree rotated from the first planar orientation.

7. A circuit according to any of claims 1 - 6, wherein the elements have a height above the surface which is greater than a corresponding height of the first and second semiconductor devices above the surface.

8. A method of fabricating an integrated circuit comprising: forming a first electromagnetically emissive semiconductor device on an integrated circuit substrate; forming a second electromagnetically emissive semiconductor device on the integrated circuit substrate; and locating a plurality of elements on the surface of the integrated circuit substrate, the elements comprising material for at least partially shielding against cross-coupling, due to electromagnetic emissions, of the first and second electromagnetically emissive semiconductor devices.

9. A method according to claim 8, wherein the plurality of elements are conductive solder bumps configured to mount the flip-chip integrated circuit to a printed circuit board or other external circuitry.

10. A method according to claim 8 or 9, wherein the first semiconductor device is formed by orienting a macro circuit subcomponent with a first planar orientation onto the substrate, and the second semiconductor device is formed by orienting a like macro circuit subcomponent with a second planar orientation onto the substrate, the second planar orientation being rotated by 180-degrees from the first planar orientation.
11. A method according to any of the above claims, wherein: the first semiconductor device comprises a first phase locked loop circuit macro having a first pair of inductors formed on the integrated circuit substrate; the second semiconductor device comprises a second phase locked loop circuit macro having a second pair of inductors formed on the substrate; and the plurality of elements comprise a plurality of conductive elements for connecting ground circuits of the integrated circuit to a ground plane, the conductive elements having at least a certain height greater than at least the first and second pairs of inductors and being adjacent to and at least partially interposed between the first and second pairs of inductors, whereby the at least partially interposed plurality of conductors at least partially electromagnetically shields the first and second pairs of inductors from each other. |
REDUCED ELECTROMAGNETIC COUPLING IN INTEGRATED CIRCUITS

The embodiments disclosed herein relate to packaging strategies and layouts for reduced coupling of neighboring LC oscillator circuits in semiconductor integrated circuit (IC) chip designs.

BACKGROUND OF THE INVENTION

Phase-locked loops are often designed using LC oscillators, which have a certain fundamental frequency. The LC oscillator is a resonant frequency circuit comprising an inductor (L component) and a capacitor (C component). In a quadrature phase-locked loop (PLL) circuit, two separate inductors are positioned close to each other, where the first inductor provides in-phase oscillation whereas the second inductor provides quadrature oscillation. Together, these two inductors and their corresponding capacitors provide four phases of a clock, CLK0, CLK90, CLK180, and CLK270, where CLK0 and CLK180 are the in-phase oscillation signals and CLK90 and CLK270 are the quadrature oscillation signals. The first and second inductors are positioned close enough to each other such that they are self-coupled.
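As a point of reference, and as standard circuit theory rather than language from the disclosure, the fundamental (resonant) frequency of such an LC oscillator is set by its inductance L and capacitance C:

$$ f_0 = \frac{1}{2\pi\sqrt{LC}} $$

so two oscillators designed with different L or C values resonate at different fundamental frequencies.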
FIG. 5 is a conceptual diagram showing the inductive coupling that can occur between adjacent inductors in neighboring phase-locked loop circuit macros; FIG. 6 shows two inductors mounted on a silicon substrate above a VSSA package plane, shown separated at least in part by a ground bump; and FIG. 7 is a layout diagram showing adjacent macros, in which neighboring macros are flipped and have conductive bumps positioned such that the inductors between neighboring macros are well separated. DETAILED DESCRIPTION OF THE EMBODIMENTS FIG. 1 is a circuit diagram showing a phase-locked loop circuit 100. The phase-locked loop circuit 100 includes a PLL core 102, which is the circuit in which the voltage controlled oscillator (VCO) circuit 104 is located. The PLL core 102 receives differential reference clock signals REFCLKN and REFCLKP 106, and it receives PLLX division signals 108 which determine the division factor that sets the number of clock multiples of the phase-locked loop circuit relative to the reference clock signal 106. Additional circuitry 110 is provided within the PLL core 102 in order to effect the division of the reference clock signals 106, whereby the VCO circuit 104 receives a properly divided clock signal upon which the VCO circuit 104 will lock. Outputs of the VCO circuit 104 include the in-phase clock signals, CLK0 and CLK180, and the quadrature-phase or out-of-phase clock signals, CLK90 and CLK270. A PLL clock transmission block 120 is provided for external generation of the PLL clock outputs, as well as for providing output signals for feedback to the input of the PLL core 102. The PLL circuit 100 has an operation that is commonly understood in the art. The specific implementations and embodiments disclosed herein can be used effectively in any application in which it is desirable to effect an electrical isolation between adjacent circuits, and particularly inductive circuits where inductive coupling can occur. FIG. 2 shows an example of the clock signals that would be generated out of the PLL circuit 100 based on a given division factor and reference clock REFCLK signal. As shown in FIG. 3, there can be self-coupling between the inductive circuits that generate these clock signals, which in this case is indicated by the K showing the coupling between the two model LC circuits drawn beneath the adjacent inductors. In the case of the quadrature and in-phase phase-locked loop signals, the self-coupling is desirable in that it maintains the lock in phase between the adjacent inductive circuits. For example, in the circuit shown in FIG. 3, the first inductor 302 and second inductor 304 are separated by a distance D of 100 micrometers. Both of these inductors 302, 304 would be located within the VCO circuit 104 as shown in FIG. 1. Each of the inductive circuits can be modeled as a two-port device having a certain capacitance, resistance, and inductance. The coupling between them can be designed so as to maintain a magnetic coupling factor that is dependent on the amount of magnetic flux linking the two inductor coils. FIG. 4 shows adjacent portions of VCO circuits 104A, 104B in which the pairs of inductors are located. There is an inductive self-coupling Ks between the first pair of inductors 302A and 304A and another inductive self-coupling Ks between the second pair of inductors 302B and 304B. This is normal and expected. But there is also a parasitic cross-inductive coupling Km between the second inductor 304A of the first circuit and the first inductor 302B of the second circuit.
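For reference, the magnetic coupling factor mentioned above is conventionally defined in terms of the mutual inductance M between the two coils and their self-inductances (a textbook relation, included here for orientation rather than drawn from the disclosure):

$$k = \frac{M}{\sqrt{L_1 L_2}}, \qquad 0 \le k \le 1,$$

where $L_1$ and $L_2$ are the self-inductances of the two coils. Because the mutual inductance M falls off rapidly with coil separation, the intended self-coupling Ks between closely spaced paired inductors can remain strong while the parasitic cross-coupling Km to a more distant neighboring macro is weakened by added spacing or, as described below, by shielding.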
Such parasitic cross-coupling arises where there are adjacent PLL circuits on adjacent macro circuits. The cross-coupling between these adjacent inductors may cause clock jitter, such that instead of having a steady clock frequency, the clock frequency can be changed by inductive coupling to the neighboring circuit and by the corresponding oscillation occurring in that neighboring circuit's inductor. FIG. 5 conceptually illustrates the cross-coupling between a first inductor 502 and a second inductor 504. A resonance oscillation occurring at a certain frequency due to one LC circuit in a first inductor 502 can electromagnetically cross-couple with a second resonance oscillation occurring at another frequency due to another LC circuit in an adjacent inductor 504, and vice versa. This coupling may result in frequency jitter occurring in the oscillations of the neighboring LC circuits. There is thus a need to be able to electromagnetically isolate the neighboring inductors of the different, neighboring LC circuits (that is, for the given example, the neighboring PLL macros). This can be done by placing them farther apart, which may require a corresponding increase in the die size of the integrated circuit. Alternatively, this can be done, at least in part, using electromagnetic shielding techniques in accordance with the principles of the invention. FIG. 6 shows an example of how electromagnetic isolation can be carried out in a flip-chip technology approach. FIG. 6 shows a silicon substrate 602 upon which first and second inductors 502 and 504 are formed. The first and second inductors 502 and 504 constitute embodiments of electromagnetically emissive semiconductor devices. In some embodiments, the first and second inductors 502 and 504 can be spiral inductors of any shape. In some embodiments, the first and second inductors 502 and 504 comprise wound conductors that are the inductive elements of respective LC oscillators, which can constitute embodiments of electromagnetically emissive semiconductor devices. Between these neighboring inductors 502 and 504 is a bump 604 that is used to separate the substrate 602, with its circuitry, from a VSSA package plane 606 (e.g., an analog ground plane). The bump 604 may be a conductive element, for example a solder bump, that connects the ground circuits of the circuitry of the substrate 602 to the VSSA package plane 606. The bumps extend to a greater height above the surface of the chip than the inductor circuits do. Multiple bumps can thus serve to form a radiation shield (similar to a partial "Faraday cage") that will at least partially block the electromagnetic energy that would otherwise couple the neighboring inductors 502 and 504. FIG. 7 provides a perspective view of circuit 700 having neighboring PLL circuit macros 702, 704. The bump 604 provides the shielding effect that was discussed with respect to the cross-sectional view of FIG. 6. Further, in this embodiment the first macro 702 has a first orientation (e.g., planar orientation) as illustrated in FIG. 6, whereas the second macro 704 has an orientation that is flipped, or opposed 180 degrees from, the orientation of the first circuit macro 702.
In this way, a greater volume or cross-sectional area of bumps 604 is provided between the inductors of the neighboring circuit macros 702 and 704. Integrated circuit devices incorporating the principles disclosed herein can be fabricated according to known fabrication techniques, such as those relating to flip-chip forms, as well as techniques other than flip-chip semiconductor architectures and fabrication techniques. The principles disclosed herein may also be adapted for use with any size or shape of semiconductor package and with alternative configurations of bond pads or conductive elements. The flip-chip semiconductor devices may be fabricated on the surface of the silicon substrate 602, or alternatively on the surface of a semiconductor wafer of gallium arsenide (GaAs) or indium phosphide (InP). Although the embodiments shown here are described with respect to neighboring PLL circuits on integrated circuit devices, the flip-chip technology and its conductive ground bumps can be used for isolation of adjacent circuits, and the opposed orientations of those circuits can be used, in any context in which it is desirable to maximize the electromagnetic isolation between adjacent circuits. The above principles can be incorporated into an integrated circuit having a substrate having a surface; a first electromagnetically emissive semiconductor device formed on the substrate; a second electromagnetically emissive semiconductor device formed on the substrate; and a plurality of elements located on the surface of the substrate, the elements comprising material for at least partially shielding against cross-coupling, due to electromagnetic emissions, of the first and second electromagnetically emissive semiconductor devices. The electromagnetically emissive semiconductor devices are LC oscillators. The plurality of elements may be elements, such as solder bumps, for providing electrical connection of circuits on the integrated circuit with circuits external to the integrated circuit. The integrated circuit could, for example, be a flip-chip integrated circuit, and the elements could be solder bumps that are used to mount the flip-chip integrated circuit to a printed circuit board or other circuit substrate. In a beneficial implementation in which the first and second semiconductor devices are formed using like macro circuit subcomponent cells, such as from an ASIC cell library, the two macros can be given planar orientations which are rotated 180 degrees from each other relative to a planar surface of the substrate. The elements preferably have a height above the surface which is greater than a corresponding height of the first and second semiconductor devices above the surface. The elements can be conductive elements, such as solder bumps, for connecting ground circuits of the integrated circuit to a ground plane. Those skilled in the art to which the invention relates will appreciate that modifications, substitutions and additions can be made to the described embodiments without departing from the scope of the invention. |
A method is provided for architectural integrated circuit power estimation. The method may include receiving a plurality of respective energy events, receiving a plurality of base-level energy models, and generating a plurality of power models. Each power model may hierarchically instantiate one or more of the base-level energy models. The method may further include mapping each respective energy event to one or more of the plurality of power models. The method may further include hierarchically evaluating a particular base-level energy model corresponding to a given respective energy event, estimating an energy associated with evaluation of the particular base-level energy model, and accumulating the energy in a power estimate corresponding to the given respective energy event. |
What is claimed is: 1. A method comprising: an architecture simulation model processing an application trace, wherein said architecture simulation model is configured to model operation of an integrated circuit; in response to said processing, said architecture simulation model generating a given one of a plurality of energy events, wherein each of said plurality of energy events corresponds to an aspect of the operation of said integrated circuit that consumes energy; in response to said generating a given energy event, mapping said given energy event to one or more corresponding ones of a plurality of power models, wherein each of said plurality of power models hierarchically instantiates one or more of a plurality of base-level energy models, and hierarchically evaluating each of said one or more of said corresponding ones of said plurality of power models to identify each instantiated one of said plurality of base-level energy models; for each given one of said instantiated ones of said plurality of base-level energy models, evaluating said given instantiated base-level energy model to estimate energy associated with activation of said given instantiated base-level energy model for said given energy event; and accumulating said energy in a power estimate corresponding to said given energy event. 2. The method of claim 1, wherein a first subset of said plurality of base-level energy models includes a plurality of parameters configured for selecting physical energy model characteristics. 3. The method of claim 2, wherein at least one of said first subset includes an aspect ratio parameter configured for scaling an energy estimate according to a value of said aspect ratio parameter. 4. The method of claim 1, wherein at least one of said plurality of base-level energy models includes a plurality of parameters configured for selecting microarchitectural energy model characteristics. 5. The method of claim 1, wherein at least one of said plurality of base-level energy models includes a data value parameter configured for scaling an energy estimate according to a value of said data value parameter. 6. The method of claim 1, further comprising deriving a respective energy estimate corresponding to each of a subset of said plurality of base-level energy models from data extracted from a previously completed design. 7. The method of claim 1, wherein at least some of said instantiated ones of said plurality of base-level energy models are configured to estimate static energy consumption associated with said given energy event. 8.
A computer-readable medium comprising instructions, wherein the instructions are executable to implement: an architecture simulation model processing an application trace, wherein said architecture simulation model is configured to model operation of an integrated circuit; in response to said processing, said architecture simulation model generating a given one of a plurality of energy events, wherein each of said plurality of energy events corresponds to an aspect of the operation of said integrated circuit that consumes energy; in response to said generating a given energy event, mapping said given energy event to one or more corresponding ones of a plurality of power models, wherein each of said plurality of power models hierarchically instantiates one or more of a plurality of base-level energy models, and hierarchically evaluating each of said one or more of said corresponding ones of said plurality of power models to identify each instantiated one of said plurality of base-level energy models; for each given one of said instantiated ones of said plurality of base-level energy models, evaluating said given instantiated base-level energy model to estimate energy associated with activation of said given instantiated base-level energy model for said given energy event; and accumulating said energy in a power estimate corresponding to said given energy event. 9. The computer-readable medium of claim 8, wherein a first subset of said plurality of base-level energy models includes a plurality of parameters configured for selecting physical energy model characteristics. 10. The computer-readable medium of claim 9, wherein at least one of said first subset includes an aspect ratio parameter configured for scaling an energy estimate according to a value of said aspect ratio parameter. 11. The computer-readable medium of claim 8, wherein at least one of said plurality of base-level energy models includes a plurality of parameters configured for selecting microarchitectural energy model characteristics. 12. The computer-readable medium of claim 8, wherein at least one of said plurality of base-level energy models includes a data value parameter configured for scaling an energy estimate according to a value of said data value parameter. 13. The computer-readable medium of claim 8, further comprising deriving a respective energy estimate corresponding to each of a subset of said plurality of base-level energy models from data extracted from a previously completed design. 14. The computer-readable medium of claim 8, wherein at least some of said instantiated ones of said plurality of base-level energy models are configured to estimate static energy consumption associated with said given energy event. 15.
A system comprising: an architecture simulation model, wherein said architecture simulation model is configured to model operation of an integrated circuit; a plurality of base-level energy models; and a plurality of power models each configured to hierarchically instantiate one or more of said plurality of base-level energy models; wherein said architecture simulation model is further configured to: process an application trace; in response to said processing, generate a given one of a plurality of energy events, wherein each of said plurality of energy events corresponds to an aspect of the operation of said integrated circuit that consumes energy; in response to said generating a given energy event, map said given energy event to one or more corresponding ones of said plurality of power models and hierarchically evaluate each of said one or more of said corresponding ones of said plurality of power models to identify each instantiated one of said plurality of base-level energy models; for each given one of said instantiated ones of said plurality of base-level energy models, evaluate said given instantiated base-level energy model to estimate energy associated with activation of said given instantiated base-level energy model for said given energy event; and accumulate said energy in a power estimate corresponding to said given energy event. 16. The system of claim 15, wherein a first subset of said plurality of base-level energy models includes a plurality of parameters configured for selecting physical energy model characteristics. 17. The system of claim 16, wherein at least one of said first subset includes an aspect ratio parameter configured for scaling an energy estimate according to a value of said aspect ratio parameter. 18. The system of claim 15, wherein at least one of said plurality of base-level energy models includes a plurality of parameters configured for selecting microarchitectural energy model characteristics. 19. The system of claim 15, wherein at least one of said plurality of base-level energy models includes a data value parameter configured for scaling an energy estimate according to a value of said data value parameter. 20. The system of claim 15, wherein a respective energy estimate corresponding to each of a subset of said plurality of base-level energy models is derived from data extracted from a previously completed design. 21. The system of claim 15, wherein at least some of said instantiated ones of said plurality of base-level energy models are configured to estimate static energy consumption associated with said given energy event. |
BACKGROUND OF THE INVENTION 1. Field of the Invention This invention relates to integrated circuit design methodologies and, more particularly, to methodologies for architectural power estimation. 2. Description of the Related Art Improvements in integrated circuit manufacturing technology have resulted in steady and dramatic increases in integrated circuit performance, particularly in terms of clock speeds. Historically, microprocessor clock speeds have doubled every two to three years, and feature size reductions have enabled the number of devices per unit of circuit area to keep pace. However, such advances have come at the cost of similarly dramatic increases in integrated circuit power consumption. Increases in clock speeds have not been offset by decreases in operating voltage and total circuit size. Further, to achieve higher clock speeds, faster transistor designs are frequently employed, which offer increased switching speed at the expense of increased leakage current (i.e., power consumed even when the transistor is not switching). Increased integrated circuit power consumption directly affects the cost of building a system including such circuits. As power consumption increases, more costly cooling systems such as larger fans, heat sinks, and refrigeration must be employed to remove excess heat from the system and prevent integrated circuit failure. However, intense system price competition in high-volume market segments often limits the budget available for cooling systems. Increasingly, integrated circuits are at risk of becoming thermally constrained: forced to run at less than their designed operating frequency due to an inability to sufficiently cool the circuit at that frequency. Given a particular clock frequency design goal and a particular design process voltage, power consumption can only be mitigated through careful integrated circuit design. For example, circuit structures unused during a given clock cycle may be disabled, and the global integrated circuit floorplan may be optimized to reduce the lengths (and thus the total capacitance) of wide buses. However, accurately estimating the impact of such approaches on actual circuit power consumption is difficult. Current techniques for power estimation require detailed circuit schematics, design code (such as register transfer language (RTL) code), and a floorplan from which to extract circuit geometries and bus lengths. Current power estimation tools are slow due to the amount of design detail they must take into account, which consequently limits the number of representative execution workloads that can be analyzed to assess power consumption. As a result, with current techniques, power estimation occurs very late in the integrated circuit development cycle, after a substantial amount of design work has been completed. Further, the accuracy of current techniques of power estimation is constrained by the limited amount of analysis that can be performed, increasing the risk that the effect of some power-intensive workloads may be overlooked. Consequently, integrated circuit architects and designers do not have the opportunity to analyze and select the appropriate design tradeoffs and optimizations at the beginning of the development cycle, when rework is least expensive.
Current power estimation techniques result in longer development cycles, increased design resource requirements, and increased risk that an integrated circuit may not meet its design and marketing goals, all contributing to the expense of the integrated circuit design process. SUMMARY OF THE INVENTION Various embodiments of a method for architectural integrated circuit power estimation are disclosed. In one embodiment, the method may include receiving a plurality of respective energy events, receiving a plurality of base-level energy models, and generating a plurality of power models. Each power model may hierarchically instantiate one or more of the base-level energy models. The method may further include mapping each respective energy event to one or more of the plurality of power models. In one specific implementation, the method may further include hierarchically evaluating a particular base-level energy model corresponding to a given respective energy event, estimating an energy associated with evaluation of the particular base-level energy model, and accumulating the energy in a power estimate corresponding to the given respective energy event. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a block diagram illustrating one embodiment of a methodology for estimating power consumption in an integrated circuit. FIG. 2 is a flow diagram illustrating operation of one embodiment of a methodology for estimating power consumption in an integrated circuit. FIG. 3 is a block diagram illustrating one embodiment of an instruction cache. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. DETAILED DESCRIPTION Power Estimation Methodology Turning now to FIG. 1, a block diagram illustrating one embodiment of a methodology for estimating power consumption in an integrated circuit is shown. Methodology 10 includes an architecture simulation model 100 configured to receive architectural parameters and an application trace and to produce a performance estimate as well as energy events. Methodology 10 also includes a plurality of energy models 120 configured to receive technology and configuration parameters. Mapping file 140 is configured to receive energy events from architecture simulation model 100 to be mapped onto a plurality of chip power models 160. Chip power models 160 are configured to receive energy models 120 and to produce a power estimate. Architecture Simulation Model Architecture simulation model 100 may include a microarchitecture performance model of an integrated circuit, such as an Athlon(TM) microprocessor, for example. Such a performance model may include high-level representations of various microarchitectural features of an integrated circuit, such as caches, register files, and execution units, for example. Further, a microarchitecture performance model may include representations of how such microarchitectural features are interconnected and their relative operational latency.
Architecture simulation model 100 is configured to receive microarchitectural parameters such as cache line size, cache associativity, number of functional units, and branch misprediction penalties, for example. Such parameters may allow rapid reconfiguration of the features represented in the model. Architecture simulation model 100 is also configured to receive an application trace. In one embodiment, such a trace may include machine-readable instructions and may represent all or a portion of a computer program, such as an operating system, a software benchmark program, or an application program, for example. Architecture simulation model 100 may be configured to model (i.e., simulate the execution of) the received application trace given the received microarchitectural parameters and to collect performance events during such modeling, such as instances of cache accesses, cache hits, branch mispredictions, and particular execution unit operations, for example. Using such collected events, architecture simulation model 100 may produce a performance estimate for a given microarchitectural configuration, including metrics such as cache hit rates, instructions executed per unit of time, and total units of time per trace, for example. Architecture simulation model 100 may thereby enable integrated circuit designers to experiment with various microarchitectural configurations and tune implementation decisions to the simulated performance of a desired set of application traces, prior to committing substantial design resources to implementation of a particular microarchitectural configuration. In addition to performance events, architecture simulation model 100 is also configured to produce energy events. As used herein, an energy event refers to an event occurring during modeling of an application trace, which event corresponds to an aspect of integrated circuit operation that consumes energy. For example, a given application trace may include a load operation that generates a load event during modeling. In terms of integrated circuit operation, a load operation may include activating an address generation unit to produce a load address as well as activating a cache to access data, each of which consumes energy. A load event may thus be an energy event. It is noted that, depending on modeling convention, an energy event may be configured to correspond to positive (i.e., accumulated) or negative (i.e., subtracted) energy. For example, in one embodiment an initial estimate of zero load-related energy consumption may be assumed prior to the occurrence of any load energy events, which may remain zero in the absence of load energy events. Each occurrence of a load energy event may then contribute a positive energy estimate to the total energy consumed during a period of measurement. In contrast, in one embodiment an initial estimate of clock-related energy consumption may be a fixed positive value prior to the occurrence of any clock energy events, which may remain constant in the absence of clock energy events. Each occurrence of a clock energy event may then contribute a negative energy estimate to the total energy consumed during a period of measurement. Thus, in these embodiments, load energy may be regarded as accumulating from zero, while clock energy may be regarded as a fixed value unless modulated (e.g.
by disabling a clock). Many of the performance events produced by architecture simulation model 100 may correspond to energy events, and many microarchitectural configuration parameters that affect integrated circuit performance may correspondingly affect integrated circuit energy consumption. For example, cache size and the number of functional units implemented may influence cache hit rate and instruction throughput respectively, with a corresponding effect on integrated circuit performance. Additionally, cache size and the number of functional units implemented may influence the energy associated with activating these features. However, some microarchitectural features may be directed specifically towards reducing energy consumption and may have negligible impact on performance. For example, a cache energy filter structure may be included in the integrated circuit microarchitecture to reduce the energy consumption associated with cache access operations. Such a filter may include a small buffer including a plurality of entries, each of which is checked against an incoming cache access operation. If data corresponding to the cache access operation is present in the buffer, the cache access may be satisfied from the buffer and the main cache array may not be activated, thus reducing energy consumption. Operation of such a filter may not impact the critical path of the cache access operation and thus may not be significant as a performance event. However, the filter's operation may be significant as an energy event. Therefore, architecture simulation model 100 may produce energy events in addition to those energy events that correspond to performance events. Energy and Power Models As described above, architecture simulation model 100 may include abstract performance representations of various microarchitectural features configured to estimate the performance of an integrated circuit including such features. It may be desirable to include abstract energy representations of such microarchitectural features to estimate the power consumption of an integrated circuit including such features. Therefore, methodology 10 includes a plurality of energy models 120 and a plurality of chip power models 160, each described in further detail below. In a given integrated circuit design, many individual logic circuits may fall into one of several broad circuit categories, such as random access memories (RAMs), read-only memories (ROMs), register files, content-addressable memories (CAMs), input/output (I/O) drivers, buses, standard cells (e.g., random synthesized logic), and custom logic circuits (e.g., clock drivers), for example. Each circuit category may include one or more base-level circuits. As used herein, a base-level circuit refers to a fundamental circuit type representative of one or more instances of that circuit, which circuit type is nonhierarchical for the purposes of energy modeling. A base-level circuit may be instantiated by itself or may be included as part of a hierarchy of circuit elements. For example, the RAM circuit category may include a base-level static RAM (SRAM) array circuit as well as a cache circuit that instantiates multiple instances of the base-level SRAM array circuit (e.g., as a data array and a tag array). It is noted that a base-level circuit may itself include circuit hierarchy; for example, the base-level SRAM array circuit may include subarrays, buses, transistors, and other structures.
However, representing detailed circuit structure in a power estimation methodology such as methodology 10 may require considerable effort relative to the precision gained. Further, during early-stage design when various microarchitectural configurations are still being evaluated, detailed information on circuit structure may not be available. Therefore, the base-level circuit may serve as a fundamental, nonhierarchical element for the purposes of energy modeling. Energy models 120 may include a plurality of base-level energy models. As used herein, a base-level energy model refers to an abstract representation of the energy consumption of a corresponding base-level circuit. Characteristics of base-level energy models are described in greater detail below, and specific examples are described below in conjunction with the description of FIG. 3. When a base-level energy model is evaluated, it may provide an estimate of the energy consumed by a corresponding base-level circuit during circuit operation. The energy estimate may correspond to dynamic energy consumption (i.e., energy consumed while the circuit is operating), idle energy consumption (i.e., energy consumed while the circuit is clocked, but not performing a specific operation), static energy consumption (i.e., energy consumed while the circuit is quiescent and its clocks are disabled, also referred to as leakage), or a combination of these. For example, in one embodiment, a base-level energy model may provide an energy estimate corresponding to idle energy consumption whenever it is evaluated and no active circuit operation is specified, whereas such a model may provide an energy estimate corresponding to dynamic energy consumption whenever it is evaluated and a particular circuit operation is specified. In one embodiment, the energy estimate may be obtained by estimating the physical configuration and/or device characteristics of the base-level circuit and applying the appropriate technology parameters, each shown as inputs to energy models 120 in FIG. 1. For example, a base-level SRAM energy model may assume an approximate area per bit of storage, and use this area coupled with wire and transistor unit capacitance parameters to provide an energy estimate for a base-level SRAM circuit comprising a given number of bits. In an alternative embodiment, the energy estimate may be obtained by extracting actual electrical data from a previously completed design and scaling such data to correspond to the manufacturing process to be used. For example, the electrical characteristics of a base-level SRAM circuit completed for a previous design may be extracted either from detailed circuit schematics or from actual silicon. A base-level SRAM energy model may then use the extracted electrical data as the basis for energy estimation, scaling the extracted data based on the desired SRAM size as well as any differences in technology characteristics between the process used for the previous design and the process planned for the current design. In one embodiment, a base-level energy model may include a plurality of parameters configured for selecting physical energy model characteristics. As used herein, a physical energy model characteristic refers to an aspect of an energy model that varies according to the value of some physical quantity. For example, the energy consumption of a base-level bus interconnect circuit may vary based on the length of the bus, the width of each wire in the bus, and the total number of wires comprising the bus.
Thus a base-level bus interconnect energy model may include parameters for physical energy model characteristics corresponding to bus length, wire width, and wire count, for example. While the area of a given circuit can always be determined from its length and width, during early stages of integrated circuit design specifying circuit dimensions in terms of length and width may be too inflexible for estimation and planning purposes. Rather, a total area budget for a given base-level circuit may be used in conjunction with an estimate of the aspect ratio of the circuit (i.e., the ratio of the circuit's length to its width). However, given a fixed area, the energy consumption of a given base-level circuit may change dramatically as its aspect ratio varies, due to factors such as the changing lengths of critical buses internal to the circuit. Therefore, in one embodiment, a base-level energy model for such a circuit may include an aspect ratio and/or an area parameter corresponding to the respective physical energy model characteristic and configured for scaling the energy estimate provided by the base-level energy model according to the parameter value. In one embodiment, a base-level energy model may include a plurality of parameters configured for selecting microarchitectural energy model characteristics. As used herein, a microarchitectural energy model characteristic refers to an aspect of an energy model that varies according to some microarchitectural feature. For example, the energy consumption of a base-level SRAM circuit may vary based on the width and number of its read and write ports, as well as how the circuit is organized with respect to logical lines and columns. Thus a base-level SRAM energy model might include parameters for microarchitectural energy model characteristics corresponding to port number, port width, total number of bits of storage, and logical line size, for example. Certain types of base-level circuits may consume varying amounts of energy depending on their mode of operation. For example, a base-level SRAM circuit may consume different amounts of energy depending on whether it is idle, being read, or being written. Similarly, a base-level random logic circuit may consume different amounts of energy depending on its switching factor (i.e., the percentage of gates switching logical output state at any given time, on average). Therefore, in one embodiment, a base-level energy model may include a plurality of parameters configured for selecting the operation mode and/or switching factor of the modeled circuit. Similar to the operation mode-dependent energy consumption described above, certain types of base-level circuits may consume varying amounts of energy depending on a data value provided to the circuit, owing to differing numbers of gates and devices switching logical output state for different data values. For example, a base-level adder circuit may consume different amounts of energy depending on whether its input operands constitute all logical zeros, all logical ones, or a combination of logical zeros and logical ones. Therefore, in one embodiment, a base-level energy model may include a data value parameter configured for scaling the energy estimate provided by the base-level energy model according to the parameter value. With any parameterized energy model, the possibility exists that a user could provide a parameter value that exceeds the range over which the energy model can provide a reliable energy estimate.
For example, a model user could attempt to instantiate a base-level SRAM energy model with both a total size and a logical line size of one gigabyte. Such an SRAM organization, if it could be constructed at all, would be suboptimal in the extreme. Therefore, in one embodiment, a base-level energy model may include a means for performing limit or valid-range testing on various input parameters in order to provide feedback to the user when specious or out-of-range parameters are specified. As noted above, energy models 120 may correspond to commonly used basic circuits in an integrated circuit design. A given circuit may include a number of different base-level circuits in a hierarchical fashion. For example, a cache circuit may hierarchically include several base-level SRAM array circuits corresponding to tag and data arrays as well as several base-level bus interconnect circuits corresponding to data fill and read paths. Such a design approach may be advantageous in that it may enable such base-level circuits to be reused multiple times within different circuits with minimal modification. Correspondingly, the base-level energy models of energy models 120 may be hierarchically instantiated by chip power models 160 to generate power models for more complex circuit structures. As used herein, a chip power model refers to an abstract representation of the energy or power consumption of a corresponding hierarchical circuit. A chip power model may also be referred to simply as a power model. A given chip power model 160 may instantiate only one base-level energy model, or may instantiate more than one base-level energy model. As described above, in various embodiments a given base-level energy model may include various parameters defining physical energy model characteristics, microarchitectural energy model characteristics, or other such features. When instantiating such a base-level energy model, a given chip power model 160 may define such parameters internally as constants, or may pass such parameters through as chip power model parameters. For example, it may be desirable to use a single cache power model for each of several cache circuits in an integrated circuit design. Each such cache circuit may include the same number of base-level SRAM circuits and base-level bus interconnect circuits and may use the same logical line size, but the cache size and number of ways used by each cache circuit may differ. In such an example, the cache power model may instantiate the appropriate number of base-level SRAM energy models and base-level bus interconnect energy models, specifying the logical line size parameter of the SRAM energy models as a constant, but passing through the cache size and number of ways as parameters of the cache power model. The cache power model may then be instantiated several times with the parameters corresponding to the various cache circuits to be modeled. It is noted that any given chip power model may include an arbitrary number of levels of hierarchy. Specific examples of power model generation and instantiation are described below in conjunction with the description of FIG. 3. It is noted that the electrical power (measured in Watts, W) consumed by a circuit is equivalent to the electrical energy (measured in Joules, J) consumed by that circuit per unit of time. In an integrated circuit embodiment that includes a clock signal, it may be desirable to normalize power consumption relative to one clock cycle.
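As a concrete illustration of this normalization (the figures below are arbitrary, chosen only to make the arithmetic plain):

$$P\,[\mathrm{W}] = E_{\text{cycle}}\,[\mathrm{J/cycle}] \times f\,[\mathrm{cycles/s}],$$

so a block estimated to consume 0.5 nJ per cycle in a design clocked at 2 GHz would dissipate $0.5 \times 10^{-9} \times 2 \times 10^{9} = 1$ W.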
For modeled operations that occur during a single clock cycle, energy and normalized power quantities are thus identical, and normalized power (Joules/cycle) may be easily converted into Watts by multiplying normalized power by the clock frequency (cycles/second). For modeled operations that occur over a span of multiple clock cycles, in one embodiment the relevant power model may simply report the total energy consumed by the entire operation as a single power value normalized to a single clock cycle. In another embodiment, the relevant power model may average the total energy consumed by the entire operation over the total number of clock cycles and return one power value per clock cycle. In yet another embodiment, the relevant power model may be cycle-accurate across multiple-cycle operations, returning one power value per clock cycle that approximates the actual power consumed during each clock cycle, without averaging. Mapping File and Power Estimation The energy events produced by architecture simulation model 100 may be mapped to one or more of the plurality of chip power models 160 by mapping file 140. In one embodiment, mapping file 140 may include the instantiation of particular chip power models and the specification of parameters of instantiated power models, as such parameters are described in the previous section. Additionally, mapping file 140 may include the definition of specific mappings between each of a plurality of energy events and one or more corresponding power models. For example, mapping file 140 may include the instantiation of power models corresponding to an instruction cache, a first-level data cache, and a second-level data cache. Mapping file 140 may also map an instruction fetch energy event to the instruction cache power model, a first-level data cache access energy event to the first-level data cache power model, and a second-level data cache access energy event to the second-level data cache power model. In one embodiment, a single mapping file may include all power model instantiations and energy event mappings defined for use in methodology 10. In an alternative embodiment, several mapping files may be used; as an example, one mapping file may be used for models and energy events related to instruction fetch and decode, while another may be used for models and energy events related to instruction execution and cache accesses. Specific examples of instantiating power models and mapping energy events are described below in conjunction with the description of FIG. 3. As described in greater detail below in conjunction with the description of FIG. 2, mapping file 140 may be used in conjunction with the other elements of methodology 10 to estimate power consumption of an integrated circuit. Energy events produced during the course of execution of architecture simulation model 100 may evaluate particular power models as specified by mapping file 140. The results of such evaluation may be accumulated to form a power estimate for a given simulated clock cycle of an integrated circuit. It is noted that any or all of the elements of methodology 10 illustrated in FIG. 1 may be implemented in software, hardware, or a combination of both. In one embodiment, each element of methodology 10 may be implemented in the C++ programming language, compiled, and run as one or more application programs on a computer system.
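In such a C++ implementation, the core of mapping file 140 might reduce to a lookup from energy events to instantiated power models, along the lines of the following sketch (the event names, the c_powerModel placeholder, and the accumulateEnergyEvent helper are hypothetical illustrations, not part of the disclosed embodiment):

#include <map>
#include <string>
#include <vector>

// Sketch only: e_energyEvent enumerates events emitted by the architecture
// simulation model; c_powerModel stands in for an instantiated chip power model.
enum e_energyEvent { e_enEvIFetch, e_enEvL1DAccess, e_enEvL2Access };

struct c_powerModel {
    std::string m_name;
    // Traverse the model hierarchy for one occurrence of the event and
    // return the estimated energy in Joules (body elided in this sketch).
    double getDynEnergyEvent(e_energyEvent /*p_energyEvent*/) { return 0.0; }
};

// The essence of mapping file 140: each energy event maps to one or more
// instantiated power models.
std::map<e_energyEvent, std::vector<c_powerModel*>> g_eventToModels;

// Called for each energy event detected during trace simulation; the
// returned energy is accumulated into the per-cycle power estimate.
double accumulateEnergyEvent(e_energyEvent p_energyEvent) {
    double energy = 0.0;
    for (c_powerModel* model : g_eventToModels[p_energyEvent])
        energy += model->getDynEnergyEvent(p_energyEvent);
    return energy;
}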
In another embodiment, one or more elements of methodology 10 may be implemented in a hardware description language such as Verilog RTL and compiled to run on a hardware-based emulation system such as a Quickturn(TM) system or synthesized onto a collection of programmable logic devices. Other embodiments are contemplated in which various combinations of hardware and software are used to implement the elements of methodology 10. Operation of Methodology Turning now to FIG. 2, a flow diagram illustrating operation of one embodiment of a methodology for estimating power consumption in an integrated circuit is shown. Referring collectively to FIG. 1 and FIG. 2, operation begins in block 200 where architecture simulation model 100 is simulating the execution of a particular application trace. In the course of simulating a given execution cycle of the application trace, architecture simulation model 100 determines whether an energy event has been detected (block 202). For example, architecture simulation model 100 may detect that a read access to the instruction cache has occurred. If no energy event is detected, simulation of the application trace proceeds (block 200). If an energy event is detected, architecture simulation model 100 may reference mapping file 140 to determine which instantiation of chip power models 160 corresponding to the detected energy event is to be evaluated (block 204). For example, in response to the instruction cache read access energy event previously detected, an instruction cache data array power model may be selected. The parameters corresponding to the selected power model may then be evaluated, and the hierarchy of the power model traversed to ascertain the base-level energy models instantiated in the power model (block 206). For example, mapping file 140 may instantiate the instruction cache data array power model with certain parameters specifying the instruction cache size and number of ways. These parameters may be evaluated, and the hierarchy of the instruction cache data array power model traversed to identify its constituent base-level energy models, such as instances of base-level SRAM array and bus interconnect energy models, for example. Once the base-level energy models instantiated by the selected power model have been identified, they may be evaluated using parameters passed from higher levels of hierarchy, and the energy associated with such energy model evaluation may be estimated (block 208). For example, an instance of a base-level SRAM array energy model included in the instruction cache data array power model may be evaluated using parameters defining a particular array size and logical line size as well as specifying a read operation, and the energy consumed by the base-level SRAM array energy model for these parameters may be estimated. It is noted that in one embodiment, the power models and base-level energy models may be implemented using object-oriented software techniques wherein hierarchy traversal functions and energy model evaluation may be implemented as methods of energy model objects.
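A minimal sketch of such an object-oriented arrangement, with hierarchy traversal and evaluation folded into one recursive virtual method, might look as follows (the class and member names are assumptions styled after the base-level listings later in this description):

#include <string>
#include <vector>

enum e_energyEvent { e_enEvIdle, e_enEvRd, e_enEvWr };

// Hypothetical base class: both base-level energy models and hierarchical
// power models expose the same evaluation interface.
class c_physModule {
public:
    explicit c_physModule(const std::string& x_name) : m_name(x_name) {}
    virtual ~c_physModule() {}
    // Returns the estimated energy (Joules) for one occurrence of the event.
    virtual double getDynEnergyEvent(e_energyEvent p_energyEvent) = 0;
protected:
    std::string m_name;
};

// A hierarchical power model simply sums over its instantiated children,
// so evaluation and hierarchy traversal happen in one recursive call.
class c_hierModule : public c_physModule {
public:
    using c_physModule::c_physModule;
    void addChild(c_physModule* x_child) { m_children.push_back(x_child); }
    double getDynEnergyEvent(e_energyEvent p_energyEvent) override {
        double energy = 0.0;
        for (c_physModule* child : m_children)
            energy += child->getDynEnergyEvent(p_energyEvent);
        return energy;
    }
private:
    std::vector<c_physModule*> m_children;
};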
In such an embodiment, hierarchy traversal as described in block 206 and energy model evaluation as described in block 208 may occur as one step or may occur as alternating, iterative steps depending on the implementation of the methods. It is noted that in an alternative embodiment, the base-level energy models corresponding to a given power model may be identified before a simulation is initiated, and energy estimates corresponding to all or a subset of the parameter permutations of the given power model and its corresponding base-level energy models may be precomputed for various energy events. For example, prior to initiating a simulation, microarchitectural parameters of a cache power model such as cache size, number of ways, and line size may be specified. Energy estimates for a cache read operation, a cache write operation, and a cache idle operation, for example, may then be precomputed prior to simulation initiation. In such an embodiment, when a power model is selected during simulation as described above in block 204, the precomputed energy estimate corresponding to the particular detected energy event may be selected, and operation may proceed directly to block 210, thereby potentially decreasing simulation computation time. After the energy associated with the evaluation of the instantiated base-level energy models has been estimated, it may be accumulated in a power estimate corresponding to the energy event first detected in block 202. Additionally, other statistics associated with the energy event may be recorded, such as the current clock cycle being simulated and the operation mode of the energy event (e.g., read, write), for example (block 210). Mapping file 140 may then be referenced to determine if additional power models are associated with the detected energy event (block 212). If additional power models are defined, operation continues from block 204 wherein another power model is selected to be evaluated. If no additional power models are defined, architecture simulation model 100 may determine whether the end of the application trace has been reached (block 214). If the end of the application trace has not been reached, operation proceeds to block 200 where trace simulation continues. If the end of the application trace has been reached, operation terminates, and performance and power statistics gathered during simulation may be processed and reported (block 216). It is noted that in an alternative embodiment, performance and power statistics gathered during simulation may be processed and reported on a periodic basis while simulation proceeds, such as on a per-cycle basis, for example. Instruction Cache Modeling Example Turning now to FIG. 3, a block diagram illustrating one embodiment of an instruction cache is shown. In the illustrated embodiment, instruction cache 300 includes instruction data array 310 coupled to data array control logic 320 and to data read and write buses, tag array 340 coupled to tag control logic 350 and to tag read and write buses, and predecode array 360 coupled to predecode control logic 370 and to predecode read and write buses. Tag array 340 is also coupled to instruction data array 310 and predecode array 360.
Instruction cache 300 also includes cache control logic 330 coupled to receive instruction requests and coupled to data array control logic 320, tag control logic 350, and predecode control logic 370. Instruction data array 310 may include one or more SRAM arrays configured to store instruction data received from the data write bus and to retrieve instruction data through the data read bus in response to instruction requests received by instruction cache 300. Data array control logic 320 may include control logic configured to coordinate reads and writes of instruction data array 310. Predecode array 360 may include one or more SRAM arrays configured to store predecoded instruction data received from the predecode write bus and to retrieve predecoded instruction data through the predecode read bus in response to instruction requests received by instruction cache 300. In one embodiment, such predecoded instruction data may include the estimated length of each instruction stored in the instruction data array, for example. Predecode control logic 370 may include control logic configured to coordinate reads and writes of predecode array 360. Tag array 340 may include one or more SRAM arrays configured to store cache tag data received from the tag write bus and to retrieve cache tag data through the tag read bus in response to instruction requests received by instruction cache 300. In one embodiment, instruction cache 300 may be a set-associative cache, and tag array 340 may be configured to determine which way of instruction data array 310 and predecode array 360 should be selected in response to a cache hit. In such an embodiment, tag control logic 350 may include control logic configured to coordinate reads and writes of tag array 340. Additionally, tag control logic 350 may be configured to determine which way of data array 310, tag array 340, and predecode array 360 is to be selected in the event of an array write, for example in accordance with a least-recently-used replacement policy. Cache control logic 330 may include control logic configured to coordinate operation of data array control logic 320, tag control logic 350, and predecode control logic 370, as well as other aspects of instruction cache 300 operation (not shown) in response to receiving an instruction request. It may be desirable to estimate the power consumption of instruction cache 300 utilizing methodology 10 of FIG. 1. To do so, base-level energy models, power models, and a mapping of energy events to power models may be defined.
In one embodiment, the base-level energy models used to model instruction cache 300 may include an SRAM array energy model, a bus interconnect energy model, and a standard cell energy model.

Base-Level Energy Model Examples

One embodiment of code implementing a base-level SRAM array energy model may be configured as follows:

//***********************************************************************
// SRAM energy model
//***********************************************************************
c_sramModule::c_sramModule(const string &x_name, c_physModule *x_par,
                           const uInt32 p_width, const uInt32 p_rows,
                           const uInt32 p_readWidth, const uInt32 p_writeWidth) :
    c_physModule(x_name, x_par) {
  // Parameter bounds checking
  ASSERT(p_width <= 320, assert::e_warn, "SramModule model " << x_name << " invoked with Width > 320");
  ASSERT(p_rows <= 128, assert::e_warn, "SramModule model " << x_name << " invoked with Rows > 128");
  ASSERT(p_rows >= 4, assert::e_warn, "SramModule model " << x_name << " invoked with Rows < 4");
  ASSERT(p_width >= 16, assert::e_warn, "SramModule model " << x_name << " invoked with Width < 16");
  ASSERT(p_readWidth <= 280, assert::e_warn, "SramModule model " << x_name << " invoked with ReadWidth > 280 " << p_readWidth);
  ASSERT(p_writeWidth <= 280, assert::e_warn, "SramModule model " << x_name << " invoked with WriteWidth > 280 " << p_writeWidth);
  m_width = p_width;
  m_rows = p_rows;
  m_readWidth = p_readWidth;
  m_writeWidth = p_writeWidth;
  m_physType = e_physTypeMacro;
  m_physSubType = e_physSubTypeSRAM;
}

// Return dynamic energy cost for an access event to SRAM
// Energy is modelled as follows:
// -> wordline capacitance (all access types)
//      - total columns scaled by the wline wire cap
//      - gate cap per bit cell on wline
// -> bitline energy (customized per access type)
//      - per row energy cost                               (read, write)
//      - fixed cost for pre-charge, col circuitry, sense   (read, write)
// -> decoder energy (all access types)
//      - per row energy cost
double c_sramModule::getDynEnergyEvent(e_energyEvent p_energyEvent) {
  // compute wordline cap
  m_decoderEnergy = p_sramDecoderFixedEnergy;
  switch (p_energyEvent) {
  case e_enEvIdle:
    m_wlineEnergy = 0.0;
    m_blineEnergy = 0.0;
    m_senseEnergy = 0.0;
    break;
  case e_enEvRd:
    m_wlineEnergy = (p_sramWLineGateColCap*p_sramActiveCols + p_sramCols*p_sramWLineWireColCap)*m_readWidth*p_vdd*p_vdd;
    m_blineEnergy = ((p_sramReadRowEnergy*m_rows) + p_sramReadFixedEnergy)*m_readWidth*p_sramActiveCols;
    m_senseEnergy = p_sramSenseEnergy*m_readWidth;
    break;
  case e_enEvWr:
    m_wlineEnergy = (p_sramWLineGateColCap*p_sramActiveCols + p_sramCols*p_sramWLineWireColCap)*m_writeWidth*p_vdd*p_vdd;
    m_blineEnergy = ((p_sramWriteRowEnergy*m_rows) + p_sramWriteFixedEnergy)*m_writeWidth;
    m_senseEnergy = p_sramWriteDriverEnergy*m_writeWidth;
    break;
  case e_enEvRdWr:
    // Simultaneous read and write: charge for the wider of the two ports.
    if (m_readWidth > m_writeWidth) {
      m_wlineEnergy = (p_sramWLineGateColCap*p_sramActiveCols + p_sramCols*p_sramWLineWireColCap)*m_readWidth*p_vdd*p_vdd;
      m_blineEnergy = ((p_sramReadRowEnergy*m_rows) + p_sramReadFixedEnergy)*m_readWidth*p_sramActiveCols;
      m_senseEnergy = p_sramSenseEnergy*m_readWidth;
    }
    else {
      m_wlineEnergy = (p_sramWLineGateColCap*p_sramActiveCols + p_sramCols*p_sramWLineWireColCap)*m_writeWidth*p_vdd*p_vdd;
      m_blineEnergy = ((p_sramWriteRowEnergy*m_rows) + p_sramWriteFixedEnergy)*m_writeWidth;
      m_senseEnergy = p_sramWriteDriverEnergy*m_writeWidth;
    }
    break;
  default:
    ASSERT(0, assert::e_warn, "Trying to get dynamic energy for unknown energy event - assuming 0 energy cost!");
    return 0;
  }
  return ((m_blineEnergy + m_decoderEnergy + m_wlineEnergy + m_senseEnergy)*p_sramDynamicAf);
}

uInt64 c_sramModule::estArea() {
  // calculate width, height, area
  m_lambdaWidth  = (m_width + p_cacheRedundantCols)*p_sramBitCellWidth + p_sramBankDecoderWidth;
  m_lambdaHeight = (m_rows + p_cacheEmulationRows)*p_sramBitCellHeight + p_sramBankColHeight;
  m_lambdaArea   = m_lambdaWidth * m_lambdaHeight;
  ASSERT(m_lambdaArea >= 0, assert::e_warn, "Negative area estimated in sramModule");
  return m_lambdaArea;
}

In the illustrated code embodiment, the base-level SRAM array energy model is configured to receive several parameters, such as SRAM width, the number of rows, read port width, and write port width. The illustrated code embodiment also includes parameter bounds checking that may determine when a supplied parameter is out of range. The illustrated code embodiment provides energy estimates for various types of SRAM array operations, such as idle operation (no operation), array read, array write, and simultaneous array read and write. In the illustrated embodiment, SRAM array energy consumption is modeled as separate components for the array bit line, array word line, sense amplifiers, and bit line/word line decoders. Additionally, the illustrated code embodiment provides a method for estimating SRAM array area. This method may be used for estimating bus lengths by a power model, for example.
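Although the original embodiment does not show a calling example, a brief usage sketch may clarify how such a base-level model is driven. The driver function below and the particular 128-column by 64-row parameterization are illustrative assumptions; the types and methods are those defined in the code above:

// Illustrative sketch only: instantiate an SRAM macro (128 columns x 64 rows,
// 64-bit read and write ports) and query its per-event energy and area.
void sramModelUsageSketch(c_physModule *x_parent) {
  c_sramModule sram("example-Sram", x_parent, 128, 64, 64, 64);
  double readEnergy  = sram.getDynEnergyEvent(e_enEvRd);  // energy of one read event
  double writeEnergy = sram.getDynEnergyEvent(e_enEvWr);  // energy of one write event
  uInt64 area        = sram.estArea();                    // area in lambda units
  // A power model would typically accumulate such per-event energies over
  // the energy events reported for each simulated cycle, and use the area
  // estimate to size the buses that connect to the array.
}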
It is noted that in other embodiments, the base-level SRAM array energy model may receive different numbers and types of parameters, may model different types of SRAM array operations, and may model SRAM array energy consumption using different techniques.

One embodiment of code implementing a base-level bus interconnect energy model may be configured as follows:

//***********************************************************************
// Bus energy model
//***********************************************************************
c_busModule::c_busModule(const string &x_name, c_physModule *x_par,
                         const uInt32 p_bits, const uInt32 p_length,
                         e_physSubType p_physSubType) :
    c_physModule(x_name, x_par) {
  // Parameter bounds checking
  ASSERT(p_length > 0, assert::e_warn, "bus module " << x_name << " invoked with <0 length");
  ASSERT(p_length < 1000000, assert::e_warn, "bus module " << x_name << " invoked with >1,000,000 length");
  ASSERT(p_bits > 0, assert::e_warn, "bus module " << x_name << " invoked with <0 bits");
  ASSERT(p_bits < 1024, assert::e_warn, "bus module " << x_name << " invoked with >1024 bits");
  m_bits = p_bits;
  m_length = p_length;
  m_physType = e_physTypeBus;
  m_physSubType = p_physSubType;
}

double c_busModule::getDynEnergyEvent(e_energyEvent p_energyEvent) {
  switch (p_energyEvent) {
  case e_enEvActive:
    // bus energy is modeled as 4u/4u (width/spacing) M3 for local buses
    // and 6u/6u M5 for global buses. Repeaters are inserted for every
    // segment.
    m_segments = (uInt32) (floor(m_length/p_busRepeaterFreq));
    if (m_physSubType == e_physSubTypeLocalBus) {
      m_energy = (p_busDynamicAF*m_bits*m_length*p_busLocalCap*p_vdd*p_vdd) + (m_segments*p_busRepeaterEnergy);
    }
    else if (m_physSubType == e_physSubTypeGlobalBus) {
      m_energy = (p_busDynamicAF*m_bits*m_length*p_busGlobalCap*p_vdd*p_vdd) + (m_segments*p_busRepeaterEnergy);
    }
    else {
      m_energy = 0;
      ASSERT(0, assert::e_warn, "Bus module invoked with invalid bus subtype");
    }
    break;
  default:
    m_energy = 0;
    ASSERT(0, assert::e_warn, "Bus module invoked with invalid energy event");
  }
  return m_energy;
}

In the illustrated code embodiment, the base-level bus interconnect energy model is configured to receive several parameters, such as the number of bits and the bus length. Like the base-level SRAM array energy model embodiment illustrated above, the illustrated base-level bus interconnect energy model embodiment also includes parameter bounds checking that may determine when a supplied parameter is out of range. The illustrated bus energy model code embodiment provides energy estimates for various types of bus interconnect, distinguishing between local and global buses and modeling the effect of regularly spaced repeater devices. It is noted that in other embodiments, the base-level bus interconnect energy model may receive different numbers and types of parameters, may model different types of bus interconnect, and may model bus interconnect energy consumption using different techniques.

A base-level standard cell energy model (not shown) may be implemented in a manner similar to that of the models illustrated above.
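Because the standard cell model code is not shown in the source, the following is only a hedged sketch of what such a model might look like, following the conventions of the models above and consistent with the description in the next paragraph. The class name, the parameter names (e.g., p_stdcellEnergyPerGate, p_stdcellIdleAF, p_stdcellActiveAF), and the energy formula are hypothetical placeholders, not the original implementation:

// Illustrative sketch only - not the original embodiment's code.
// Models a standard cell block as an energy-per-gate cost scaled by an
// activity factor appropriate to the requested operation mode.
c_stdcellModule::c_stdcellModule(const string &x_name, c_physModule *x_par,
                                 const uInt32 p_gates, e_physSubType p_physSubType) :
    c_physModule(x_name, x_par) {
  ASSERT(p_gates > 0, assert::e_warn, "stdcell module " << x_name << " invoked with 0 gates");
  m_gates = p_gates;               // gate count of the standard cell block
  m_physType = e_physTypeStdCell;
  m_physSubType = p_physSubType;   // functional type, e.g. instruction cache logic
}

double c_stdcellModule::getDynEnergyEvent(e_energyEvent p_energyEvent) {
  switch (p_energyEvent) {
  case e_enEvIdle:
    // idle mode: residual (e.g., clock-related) activity only
    return m_gates * p_stdcellEnergyPerGate * p_stdcellIdleAF;
  case e_enEvActive:
    // active mode: full activity factor
    return m_gates * p_stdcellEnergyPerGate * p_stdcellActiveAF;
  default:
    ASSERT(0, assert::e_warn, "Stdcell module invoked with invalid energy event");
    return 0;
  }
}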
In one embodiment, such a model may include parameters indicating the gate count of the standard cell block as well as the functional type of the standard cell block, such as instruction cache logic or floating point logic, for example. Such an embodiment may provide energy estimates for various operation modes of standard cell logic, such as idle operation and active operation, using activity factors appropriate to each operation mode. It is noted that in other embodiments, the base-level standard cell energy model may receive different numbers and types of parameters and may model standard cell energy consumption using different techniques.

Power Model Examples

Hierarchical power models may also be defined for the purposes of using methodology 10 of FIG. 1 to model the power consumption of instruction cache 300. As noted above in conjunction with the description of FIG. 1, power models may hierarchically instantiate one or more base-level energy models. In one embodiment, power models may be defined for cache data arrays such as instruction data array 310 and predecode array 360, as well as cache tag arrays such as tag array 340.

One embodiment of code implementing a cache data array power model may be configured as follows:

//***********************************************************************
// Cache Data RAM(s) energy and area models
//***********************************************************************
c_cachedataModule::c_cachedataModule(const string &x_name, c_physModule *x_par,
                                     const uInt32 p_logSize, const uInt32 p_assoc,
                                     const uInt32 p_logWidth, const uInt32 p_logLineSize) :
    c_physModule(x_name, x_par) {
  // Parameter bounds checking
  ASSERT((p_logSize >= 10) && (p_logSize <= 21), assert::e_warn,
         "cachedataModule model " << x_name << " invoked with size out-of-bounds " << p_logSize);
  ASSERT((p_assoc==0) || (p_assoc==1) || (p_assoc==2) || (p_assoc==4) || (p_assoc==8),
         assert::e_warn, "cachedataModule model " << x_name << " invoked with associativity out-of-bounds " << p_assoc);
  ASSERT((p_logWidth >= 3) && (p_logWidth <= 7), assert::e_warn,
         "cachedataModule model " << x_name << " invoked with read width out-of-bounds " << p_logWidth);
  ASSERT((p_logLineSize >= 3) && (p_logLineSize <= 8), assert::e_warn,
         "cachedataModule model " << x_name << " invoked with linesize out-of-bounds " << p_logLineSize);
  m_logSize     = p_logSize;                       // 2^m_logSize cache size (bytes)
  m_size        = (uInt32) pow(2, m_logSize);      // cache size (bytes)
  m_assoc       = p_assoc;                         // associativity of cache
  m_logLineSize = p_logLineSize;                   // 2^m_logLineSize linesize (bytes)
  m_lineSize    = (uInt32) pow(2, p_logLineSize);  // linesize (bytes)
  m_logWidth    = p_logWidth;                      // 2^m_logWidth width of access (bits)
  m_width       = (uInt32) pow(2, p_logWidth);     // access width (bits)
  m_physType    = e_physTypeCacheData;
  m_physSubType = e_physSubTypeSRAM;
}

void c_cachedataModule::init() {
  // ** Construct the Data RAMs
  //
  // 128KBytes and less caches, use the "fast" SRAM configuration
  // (64 rows * 128b)
  if (m_logSize <= 17) {
    m_cache[e_physData] = new c_sramModule("cache-Data", this, 128, 64, 64, 64);
    m_dataBanks = (uInt32) m_size/1024;
  }
  // big cache configuration, use the "big" SRAM configuration
  // (128 rows * 128b)
  else {
    m_cache[e_physData] = new c_sramModule("cache-Data", this, 128, 128, 64, 64);
    m_dataBanks = (uInt32) m_size/1024;
  }
  m_cache[e_physData]->init();

  // ** Construct the buses
  // -> average bus length is a function of total data area.
  //
  // -> cache floorplan is assumed to be "low energy" and organized into
  //    banks. Each bank contains a private bus network so that only one
  //    bank needs to be driven per cache access.
  //
  // -> total bus length is multiplied by p_cache_areafactor to account
  //    for internal bus, clock, power routing, and for the increased
  //    area when the tags are added to the cache module. The cache is
  //    assumed to be square, so that the length of one side is
  //    sqrt(area). Average bus length is approximately 1.625* the
  //    length of one dimension - on average .625 (=4/(4+3+2+1)) for the
  //    column and 1.0 for spine.
  m_avgBusLength = (uInt32) ((sqrt((m_dataBanks*m_cache[e_physData]->estArea()))*p_cache_areafactor)*1.625);
  m_indexBits = (uInt32) (m_logSize - m_logLineSize)/m_assoc;
  m_cache[e_physDataBus]  = new c_busModule("tlb-DataBus", this, m_width, m_avgBusLength, e_physSubTypeLocalBus);
  m_cache[e_physIndexBus] = new c_busModule("tlb-IndexBus", this, m_indexBits, m_avgBusLength, e_physSubTypeLocalBus);
  m_cache[e_physDataBus]->init();
  m_cache[e_physIndexBus]->init();
}

double c_cachedataModule::getDynEnergyEvent(e_energyEvent p_energyEvent) {
  m_energyEvent = p_energyEvent;
  switch (m_energyEvent) {
  // *** linefill accesses - refill the entire line and write the cache.
  case e_enEvCacheLineFill:
    m_beats = (8*m_lineSize)/m_width;
    m_dataEnergy = m_beats * m_cache[e_physData]->getDynEnergyEvent(e_enEvWr);
    m_busEnergy  = m_beats * (m_cache[e_physDataBus]->getDynEnergyEvent(e_enEvActive) +
                              m_cache[e_physIndexBus]->getDynEnergyEvent(e_enEvActive));
    break;
  // *** all other accesses (normal reads, writes).
  case e_enEvRd:
  case e_enEvWr:
    m_busEnergy  = m_cache[e_physDataBus]->getDynEnergyEvent(e_enEvActive) +
                   m_cache[e_physIndexBus]->getDynEnergyEvent(e_enEvActive);
    m_dataEnergy = m_assoc * m_cache[e_physData]->getDynEnergyEvent(m_energyEvent);
    break;
  default:
    ASSERT(0, assert::e_warn, "Trying to get dynamic energy for unknown energy event - assuming 0 energy cost!");
    return 0;
  }
  return (m_dataEnergy + m_busEnergy);
}

In the illustrated code embodiment, the cache data array power model is configured to receive several parameters, such as the cache size, width, and line size (each represented here as a log base 2 quantity) as well as the cache associativity. Like the base-level energy model embodiments illustrated above, the illustrated cache data array power model embodiment also includes parameter bounds checking that may determine when a supplied parameter is out of range.
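As with the SRAM model, a short usage sketch may be helpful. The driver function and the 16 KB, 2-way parameterization below are illustrative assumptions; the types and methods are those defined in the code above:

// Illustrative sketch only: construct a cache data array power model for a
// hypothetical 16KB (2^14 byte), 2-way cache with 64-bit (2^6) accesses and
// 32-byte (2^5) lines, then query the energy of a line fill and a read.
void cachedataUsageSketch(c_physModule *x_parent) {
  c_cachedataModule dataArray("example-CacheData", x_parent, 14, 2, 6, 5);
  dataArray.init();  // builds the underlying SRAM and bus energy models
  double fillEnergy = dataArray.getDynEnergyEvent(e_enEvCacheLineFill);
  double readEnergy = dataArray.getDynEnergyEvent(e_enEvRd);
  // fillEnergy reflects (8*32)/64 = 4 write beats plus bus activity per beat;
  // readEnergy charges all 2 ways of the data RAM plus one bus activation.
}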
The illustrated cache data array power model embodiment instantiates a base-level SRAM array energy model such as the SRAM array model illustrated above, distinguishing between small and large cache configurations. The illustrated cache data array power model embodiment further instantiates several instances of a base-level bus interconnect energy model such as the bus model illustrated above, distinguishing between data and address index buses and taking into account the area of the circuit represented by the base-level SRAM array energy model. The illustrated cache data array power model embodiment provides energy estimates for various types of cache data array operations, such as cache line fills, cache reads, and cache writes. It is noted that in other embodiments, the cache data array power model may receive different numbers and types of parameters, may instantiate different base-level energy models, and may use different techniques for modeling different types of cache data array operations.

A cache tag array power model (not shown) may be implemented in a manner similar to that of the cache data array power model illustrated above. In one embodiment, such a model may include additional base-level bus interconnect energy models to model cache way selection signals driven to cache data arrays as well as other base-level energy models to model cache tag matching logic. It is noted that in other embodiments, the cache tag array power model may include other types and configurations of base-level energy models and may model different operations.

Mapping File Example

In applying methodology 10 of FIG. 1 to model the power consumption of instruction cache 300, instruction cache-related energy events produced by architecture simulation model 100 may be mapped onto instruction cache-related power models via mapping file 140.
One embodiment of mapping file 140 code implementing such a mapping may be configured as follows:

# Power model instantiation
power_model macro_instCacheData   [ physTypeCacheData
    logSize=P_ICSIZE assoc=P_ICASSOC logWidth=P_ICFETCHWIDTH
    logLineSize=P_ICLINESIZE ]
power_model macro_pdCacheData     [ physTypeCacheData
    logSize=M_PDSIZE assoc=P_ICASSOC logWidth=M_PDFETCHWIDTH
    logLineSize=M_PDLINESIZE ]
power_model macro_instCacheTag    [ physTypeCacheTag
    logSize=P_ICSIZE assoc=P_ICASSOC logWidth=P_ICFETCHWIDTH
    logLineSize=P_ICLINESIZE ]
power_model stdcell_icLogic       [ physTypeStdCell
    physSubType=GenericCtl gates=P_IF_PWR_ICGATES ]
power_model stdcell_icCtlLogic    [ physTypeStdCell
    physSubType=GenericCtl gates=P_IF_PWR_ICCTLGATES ]
power_model stdcell_tagLogic      [ physTypeStdCell
    physSubType=GenericCtl gates=P_IF_PWR_TAGGATES ]
power_model stdcell_pdCacheLogic  [ physTypeStdCell
    physSubType=GenericCtl gates=P_IF_PWR_PDCGATES ]

# Event mapping
instCacheDataRd   [Instruction Cache Data Array Read]
    powerall [ [ macro_instCacheData enEvRd ]
               [ macro_pdCacheData enEvRd ]
               [ stdcell_pdCacheLogic enEvActive ]
               [ stdcell_icLogic enEvActive ]
               [ stdcell_icCtlLogic enEvActive ] ]
instCacheDataWr   [Instruction Cache Data Array Write]
    powerall [ [ macro_instCacheData enEvWr ]
               [ macro_pdCacheData enEvWr ]
               [ stdcell_pdCacheLogic enEvActive ]
               [ stdcell_icLogic enEvActive ] ]
instCacheTagRd    [Instruction Cache Tag Array Read]
    powerall [ [ macro_instCacheTag enEvMatch ]
               [ stdcell_tagLogic enEvActive ] ]

In the illustrated mapping file code embodiment, power models corresponding to the various circuit elements of FIG. 3 are first instantiated with various parameter definitions. Specifically, cache data array power models such as the model illustrated above are instantiated corresponding to instruction data array 310 and predecode array 360 and are labeled macro_instCacheData and macro_pdCacheData, respectively. A cache tag array power model corresponding to tag array 340 is instantiated and labeled macro_instCacheTag. A base-level standard cell energy model is multiply instantiated as a power model corresponding to each of data array control logic 320, cache control logic 330, tag control logic 350 and predecode control logic 370, labeled stdcell_icLogic, stdcell_icCtlLogic, stdcell_tagLogic, and stdcell_pdCacheLogic, respectively.

Subsequent to instantiation of power models, the illustrated mapping file code embodiment maps various energy events to the instantiated power models. In this embodiment, three energy events are mapped: an instruction cache data array read event, an instruction cache data array write event, and an instruction cache tag array read event, labeled instCacheDataRd, instCacheDataWr and instCacheTagRd, respectively. In this embodiment, the "powerall" keyword may be used to map a single event to multiple power models; the relevant models are specified along with the specific operating mode to be selected for each power model. For example, the instCacheDataRd event maps to five power models: macro_instCacheData, macro_pdCacheData, stdcell_pdCacheLogic, stdcell_icLogic, and stdcell_icCtlLogic. A read operation is specified for the two cache power models, and an active operation is specified for the three standard cell logic power models.
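The runtime behavior implied by such a mapping can be illustrated with a brief sketch of the dispatch step. Nothing below is from the original embodiment: the stand-in model interface, the map type, and the function are assumptions intended only to show how a "powerall" entry fans a single energy event out to several power models:

// Illustrative sketch only. Each mapping-file entry is assumed to have been
// parsed into a list of (power model, operating mode) pairs; evaluating an
// architectural energy event then sums the energy of every mapped pair.
#include <map>
#include <string>
#include <utility>
#include <vector>

enum e_energyEvent { e_enEvRd, e_enEvWr, e_enEvActive, e_enEvMatch };

struct c_powerModel {                       // stand-in for a power/energy model
  virtual double getDynEnergyEvent(e_energyEvent) = 0;
  virtual ~c_powerModel() {}
};

typedef std::vector<std::pair<c_powerModel*, e_energyEvent> > t_mappedModels;

double evaluateEnergyEvent(const std::map<std::string, t_mappedModels> &eventMap,
                           const std::string &eventName) {
  double energy = 0.0;
  std::map<std::string, t_mappedModels>::const_iterator it = eventMap.find(eventName);
  if (it == eventMap.end())
    return 0.0;                             // unmapped events contribute no energy
  // "powerall" semantics: one event drives every mapped model in its specified
  // operating mode, e.g. enEvRd for the cache data models and enEvActive for
  // the standard cell models.
  for (size_t i = 0; i < it->second.size(); ++i)
    energy += it->second[i].first->getDynEnergyEvent(it->second[i].second);
  return energy;
}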
It is noted that in alternative mapping file embodiments, different numbers and kinds of power models may be instantiated, and different numbers and kinds of energy events may be mapped to one or more power models. It is further noted that in alternative mapping file embodiments, different languages and syntax may be used to establish energy event mapping.After base-level energy models, power models, and a mapping file such as the embodiments described above have been specified, methodology 10 may be operated as indicated in the description of FIG. 2 to estimate the power consumption of instruction cache 300 with respect to a given application trace. With respect to the illustrated mapping file embodiment, instCacheDataRd, instCacheDataWr and instCacheTagRd energy events produced during a given simulated execution cycle by architecture simulation model 100 may cause the corresponding mapped power models to be evaluated and an instruction cache power estimate to be determined for that execution cycle.It is noted that all of the preceding code embodiments represent only exemplary embodiments and are not intended to limit the structure or content of such embodiments. Other embodiments are contemplated in which different coding languages or implementation styles may be used. Further, in other embodiments it is contemplated that the base-level energy models and power models may be defined at different levels of abstraction. For example, in one embodiment a cache may be defined as a base-level energy model, while in another embodiment, an array bit cell may be defined as a base-level energy model. Still further, in other embodiments it is contemplated that different behaviors may be modeled by base-level energy models and power models. For example, a given base-level energy model or power model may model a different set of circuit operations and may use different techniques and formulas for estimating energy consumption.Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. |
In one embodiment, a node includes at least one core to independently execute instructions; a first host device to receive information from the at least one core and to include the information in a first packet of a first communication protocol; a selection logic coupled to the first host device to receive the first packet and to provide the first packet to a conversion logic or a first interface to communicate with a first device via a first interconnect of the first communication protocol; the conversion logic to receive the first packet under selection of the selection logic and to encapsulate the first packet into a second packet of a second communication protocol; and a second interface coupled to the conversion logic to receive the second packet and to communicate the second packet to a second device via a second interconnect of the second communication protocol. |
1. A system-on-chip (SoC), comprising:
a plurality of cores including a first core and a second core;
a cache memory coupled to the plurality of cores;
a first host device to receive information from the first core and to include the information in one or more first packets of a first communication protocol;
selection logic coupled to the first host device to receive the one or more first packets and to provide the one or more first packets to conversion logic or a first interface;
the first interface coupled to the selection logic to communicate with a first external device via a first interconnect of the first communication protocol;
the conversion logic coupled to the selection logic to receive the one or more first packets from the selection logic and to communicate the one or more first packets within one or more second packets of a second communication protocol;
a second interface coupled to the conversion logic to receive the one or more second packets from the conversion logic and to communicate the one or more second packets to a second external device via a second interconnect of the second communication protocol; and
a graphics processor coupled to the plurality of cores.

2. The SoC of claim 1, wherein the first communication protocol comprises an enhanced serial peripheral interface (eSPI) protocol and the second communication protocol comprises a Peripheral Component Interconnect Express (PCIe) protocol.

3. The SoC of claim 1, wherein the second external device comprises a second SoC or a central component coupled between the SoC and a shared resource, the shared resource to communicate according to the first communication protocol.

4. The SoC of claim 3, wherein the shared resource comprises a non-volatile storage device to store a basic input/output system (BIOS), and wherein the SoC is to receive boot code of the BIOS from the non-volatile storage device via the second interconnect during initialization.

5. The SoC of claim 4, wherein the second interconnect is to operate in a default configuration before execution of the boot code.

6. The SoC of claim 1, wherein the first interface is disabled when the first interface is not coupled to an external device.

7. The SoC of claim 1, wherein the SoC is incorporated into a multi-node system comprising a plurality of SoCs, wherein a first SoC of the plurality of SoCs includes a first interface to communicate according to the first communication protocol, the first SoC directly coupled to a first shared resource via an interconnect of the first communication protocol, and other SoCs of the plurality of SoCs include first interfaces, to communicate according to the first communication protocol, that are unconnected.

8. The SoC of claim 1, wherein the first host device is coupled to a first bus and is enumerated with a first device identifier.

9. The SoC of claim 8, wherein the second interface is coupled to the first bus and is enumerated with a second device identifier.

10. The SoC of claim 1, wherein the selection logic is to receive one or more third packets of the first communication protocol from the conversion logic and to send the one or more third packets to the first host device.

11. The SoC of claim 10, wherein the conversion logic is to receive one or more fourth packets of the second communication protocol from the second interface and to decapsulate the one or more third packets of the first communication protocol from the one or more fourth packets of the second communication protocol.

12. The SoC of claim 1, wherein the conversion logic is to include a tunnel indicator in a header of the one or more second packets of the second communication protocol to indicate the presence of the encapsulated one or more first packets of the first communication protocol.

13. The SoC of claim 3, wherein the first external device comprises a memory.

14. The SoC of claim 1, wherein the first external device is shared by the SoC and a second SoC coupled to the SoC.

15. The SoC of claim 1, wherein the SoC supports communication via an enhanced serial peripheral interface (eSPI) protocol.

16. The SoC of claim 6, wherein the SoC supports communication via a Peripheral Component Interconnect Express (PCIe) protocol.

17. The SoC of claim 1, wherein the first interconnect of the first communication protocol comprises a first off-chip link.

18. A method for performing communication processing, comprising:
receiving, in selection logic of an integrated circuit, a first packet from a host device of the integrated circuit, the host device to communicate according to a first communication protocol;
when the integrated circuit is adapted in a system in which a first external device is coupled to a first interface of the integrated circuit via a first interconnect of the first communication protocol, selectively providing the first packet to the first interface, and otherwise providing the first packet to first logic of the integrated circuit;
when the first packet is provided to the first logic, encapsulating the first packet, in the first logic, into a second packet conforming to a second communication protocol; and
transferring the second packet to a second interface of the integrated circuit, the second interface to transfer the second packet, via a second interconnect coupled to the integrated circuit, to a second external device that conforms to the first communication protocol, the second interconnect conforming to the second communication protocol.

19. The method of claim 18, further comprising disabling the first interface when the integrated circuit is implemented in a system in which no first external device is coupled to the first interface.

20. The method of claim 18, wherein encapsulating the first packet into the second packet comprises:
setting a tunnel indicator of a header of the second packet;
merging a cycle type of a header of the first packet into a cycle type field of the header of the second packet; and
placing data of the first packet into a data portion of the second packet.

21. The method of claim 18, further comprising:
initializing the second interconnect to a default configuration after a reset;
receiving boot code from the second external device via the second interconnect; and
in response to execution of the boot code, reinitializing the second interconnect to a second configuration.

22. A machine-readable storage medium comprising machine-readable instructions which, when executed, implement the method of any one of claims 18 to 21. |
Method, Device and System for Encapsulating Information in Communication

Technical Field

Embodiments relate to communication within a computing system.

Background

Over time, integrated circuits such as systems-on-chip (SoCs) having higher degrees of integration have been created. Higher integration levels increase the number of external components with which an integrated circuit has to interact.

Conventional single-node computing systems, such as client computer systems or stand-alone server computer systems, are usually formed of various integrated circuits and other components, and usually have dedicated resources. As systems move from single-node designs to multi-node topologies such as in the server space, providing dedicated resources for each node can become very expensive. Therefore, in some multi-node/multi-host/multi-cluster systems (generally referred to as multi-node systems), some amount of platform resource sharing occurs to reduce overall cost and further reduce power consumption. However, resource sharing solutions often lead to design trade-offs and/or limitations in defining different multi-node topologies (including due to electrical and routing issues).

Brief Description of the Drawings

Fig. 1 is a block diagram of a system according to an embodiment of the present invention.
Fig. 2 is a block diagram of a system according to another embodiment of the present invention.
Fig. 3 is a block diagram of a part of a node according to an embodiment of the present invention.
Fig. 4 is a block diagram of a part of a central component according to an embodiment of the present invention.
Fig. 5 is an illustration of an eSPI packet and its encapsulation into a PCIe packet according to an embodiment.
Fig. 6 is a flowchart of a method according to an embodiment of the present invention.
Fig. 7 is an embodiment of a system-on-chip design according to an embodiment.
Fig. 8 is a block diagram of a system according to an embodiment of the present invention.

Detailed Description

In various embodiments, techniques are provided to enable information packets of one communication protocol to be tunneled, encapsulated, or otherwise transmitted within information packets of another communication protocol. For purposes of this discussion, packets conforming to the Enhanced Serial Peripheral Interface (eSPI) protocol (such as those described in the Enhanced Serial Peripheral Interface (eSPI) Base Specification (June 2013, revision 0.74)) can be tunneled in packets conforming to the Peripheral Component Interconnect Express (PCIe) protocol (as described in the PCIe Base Specification version 3.0 (November 10, 2010)). The embodiments can of course be applied to other communication protocols.

Embodiments that operate to tunnel eSPI cycles across PCIe interfaces allow eSPI resources coupled to a central component (also referred to herein as a central controller) and/or a specifically configured node (e.g., a given SoC) of a multi-node system to be shared by multiple nodes. In this way, one or more eSPI devices can be shared while eliminating the dependencies and limitations of communication via the eSPI interface. Therefore, flexible multi-node topologies with a higher resource sharing radius and lower latency can be designed.

Referring now to FIG. 1, there is shown a block diagram of a system according to an embodiment of the present invention. As shown in FIG. 1, the system 100 is a multi-node system including a plurality of nodes 110₁-110₄.
It should be understood that the nodes 110 may be implemented differently in various embodiments. In some cases, the nodes 110 may each be a central processing unit (CPU) of a given multi-core processor, and the multi-node system 100 may be a given server computer, such as a blade, a micro server, or so forth. In other cases, each node 110 itself may be a complete computing system (for example, including processing resources, I/O resources, memory resources, and networking resources).

In various embodiments, the nodes 110 may each include at least one dedicated eSPI interface (including one or more ports) for connecting, via an interconnect that complies with the eSPI specification (hereinafter an eSPI interconnect), to an external device such as an external flash memory or a Trusted Platform Module (TPM). Each of the nodes 110 may further include one or more PCIe interfaces (including one or more ports) for connecting to an external device via an interconnect that conforms to the PCIe specification (hereinafter a PCIe interconnect). Embodiments may use one or more PCIe interfaces of the nodes to tunnel eSPI cycles to facilitate multi-node sharing of eSPI resources.

In any case, as further shown in FIG. 1, the nodes 110 are coupled to a fabric 120, which in one embodiment is a PCIe fabric. More specifically, each node 110 is coupled to the fabric 120 via a corresponding interconnect 115₁-115₄ (each of which may be a PCIe interconnect). In this way, each node 110 may include at least one interface having a port to implement PCIe-based communication. As will be described in more detail herein, in an embodiment, in addition to PCIe communication, the port may also provide communication of eSPI information via tunneling of eSPI packets in one or more PCIe packets as described herein.

Still referring to FIG. 1, node 110₄ is coupled to one or more peripheral devices 130 via an interconnect 125. In various embodiments, the peripheral devices 130 may communicate in accordance with the eSPI specification; therefore, the interconnect 125 may be an eSPI interconnect. Note that various types of peripheral devices such as a flash memory, a trusted platform module, and a baseboard management controller (BMC) can be implemented as eSPI devices. In addition, such devices can be shared among multiple nodes, reducing overall system cost and complexity by allowing each of the nodes 110 to share functions available in these devices.

In this arrangement, the node 110₄ acts as a master node for interfacing with the devices 130. In this way, the eSPI interfaces in the other nodes 110₁-110₃ can be disconnected (and configured to be disabled and/or in a power-down state), because any eSPI-based communication is instead routed to node 110₄ via a given PCIe interface and the corresponding interconnect 115 through the fabric 120.

In some cases, an external management controller 140 may be coupled to one or more additional eSPI devices 150 (via corresponding eSPI interconnects 145). In this case, the external management controller 140 can communicate with the node 110₄ via another eSPI interconnect 128. It should be understood that although the embodiment of FIG. 1 is shown at this high level, many variations and alternatives are possible. For example, although four nodes are shown in FIG. 1, it should be understood that the number of nodes supported in the system topology is not limited in this respect.

Referring now to FIG.
2, there is shown a block diagram of a system 200 according to another embodiment of the present invention. As shown in FIG. 2, a central component 220 is configured as a master interface to one or more eSPI devices 230. The nodes 210₁-210₄, which may be any type of node described above, are coupled to the central component 220 via corresponding interconnects 215₁-215₄ (which may be PCIe interconnects in one example). In this case, the nodes 210 can disable their internal eSPI interfaces and/or otherwise place them in a low-power state, because eSPI-based communication is instead tunneled, via an internal PCIe interface and along the corresponding PCIe interconnect 215, to the central component 220 to interact with the corresponding eSPI device 230.

As further shown in FIG. 2, an external management controller 240 may be coupled to one or more additional eSPI devices 250 (via corresponding eSPI interconnects 245). In this case, the external management controller 240 can communicate with the central component 220 via another eSPI interconnect 228. Note that the types of eSPI devices and the connection topology to the central component 220 can be specified by the eSPI specification and system requirements. In some cases, the central component 220 may provide other resource sharing functions.

Therefore, SoCs, other nodes, and central components can be configured to route encapsulated or decapsulated eSPI packets to a designated PCIe interface or eSPI interface based on the topology of the system configuration. In addition, such circuits include an eSPI-Express interface for encapsulating eSPI packets into PCIe packets and decapsulating eSPI packets from PCIe packets.

Referring now to FIG. 3, a block diagram of a part of a node according to an embodiment of the present invention is shown. As shown in FIG. 3, node 300 is implemented as part of an SoC. Included in the SoC 300 is an eSPI host device 310, which can be coupled to an enumeration bus (bus 0) as a given device (device X in FIG. 3). The host device 310 is coupled to a multiplexer 315 or other selection logic, which in turn is controlled to selectively provide communication from the node 300 to the outside via an eSPI interface 320 or a given PCIe interface (that is, a designated PCIe root port 330). The designated root port 330 can provide eSPI tunnel transmission in addition to the other functions for which it is designed; therefore, the root port 330 may be configured to allow tunneling of eSPI packets.

The multiplexer 315 of SoC 300 thus allows eSPI accesses to reach either the eSPI interconnect or the PCIe interconnect, depending on the particular system configuration (or dynamically controllable based on the source/destination of particular traffic). To this end, the multiplexer 315 is also coupled to an eSPI-Express interface 325 coupled between the multiplexer 315 and the root port 330. In various embodiments, the eSPI-Express interface 325 is configured to perform tunneling of eSPI information into a PCIe-based format to enable communication via the root port 330, which in turn is coupled to a PCIe interconnect (which, in the illustrated embodiment, is a non-node link).
Therefore, the interface 325 is configured to generate and deconstruct packets formatted to encapsulate eSPI packets within PCIe packets.

Assume instead, as an alternative, that the node 300 is configured to be directly coupled to an external eSPI device via the eSPI interface 320, which in turn is coupled to an eSPI interconnect (which, in the illustrated embodiment, is a non-node link). In this instance, the multiplexer 315 is controlled to provide communication between the host device 310 and the eSPI interface 320. In this configuration of the node 300, the eSPI-Express interface 325 can be controlled to be disabled or otherwise powered off in order to reduce power consumption when the interface is not in use.

Further, incoming communications are received in the root port 330. The root port 330 may be configured to determine whether a packet includes tunneled eSPI information based on an encoding present in the given packet. If so, the packet can be transferred to the eSPI-Express interface 325, which can parse out the eSPI information and provide it to the host device 310 or the eSPI interface 320 via the multiplexer 315. Therefore, via the arrangement of FIG. 3, eSPI information can be communicated to locally connected eSPI devices for sharing with remote nodes and/or central components. If an eSPI device is locally connected with respect to the SoC 300, local accesses are passed directly to the connected eSPI interconnect without any format change. All remote accesses to such a locally connected eSPI device pass through the eSPI-Express interface 325, which decapsulates PCIe packets into eSPI packets for transmission to the local eSPI device. If an eSPI device is connected remotely with respect to the SoC 300, local accesses are encapsulated by the eSPI-Express interface 325 and passed through the root port 330 to the node or central component that locally hosts the eSPI device. It should be understood that although shown at this high level in the embodiment of FIG. 3, many variations and alternatives are possible.

Referring now to FIG. 4, a block diagram of a part of a central component according to an embodiment of the present invention is shown. Although the scope of the present invention is not limited in this regard, exemplary central components include a server peripheral controller hub (sPCH), a baseboard management controller, and the like. As shown in FIG. 4, the central component 400 includes an eSPI host device 410, which can be coupled to an enumeration bus as a given device (device N in FIG. 4). The host device 410 is coupled to a multiplexer 415 (or other selection logic), which in turn is controlled to selectively output information off-chip via an eSPI interface 420 or one of the PCIe endpoints 450N-450X. In other words, a designated endpoint 450 can provide eSPI tunnel transmission in addition to the other functions for which it is designed.

The multiplexer 415 of the central component 400 thus allows eSPI accesses to reach either the eSPI interconnect or a PCIe interconnect, depending on the particular system configuration (or dynamically controllable based on the source/destination of particular traffic). To this end, the multiplexer 415 is also coupled to an eSPI-Express interface 425. In various embodiments, the eSPI-Express interface 425 is configured to perform tunneling of eSPI information into a PCIe-based format to enable communication via a given PCIe endpoint 450.
Therefore, the interface 425 is configured to generate and deconstruct packets formatted to encapsulate eSPI packets within PCIe packets.

Assume instead, as an alternative, that the central component 400 is configured to be directly coupled to an external eSPI device via the eSPI interface 420, which in turn is coupled to an eSPI interconnect (which, in the illustrated embodiment, is a non-node link). In this instance, the multiplexer 415 is controlled to provide communication between the host device 410 and the eSPI interface 420.

If an eSPI device is locally connected with respect to the central component 400, local accesses are passed directly to the connected eSPI interconnect without any format change. All remote accesses to such a locally connected eSPI device pass through the eSPI-Express interface 425, which decapsulates PCIe packets into eSPI packets for transmission to the local eSPI device.

Referring now to FIG. 5, an illustration of an eSPI packet 510 and its encapsulation into a PCIe packet 520 according to an embodiment is shown. As shown in FIG. 5, the eSPI packet 510 includes a header portion 512 including various information and a data portion 514 including a plurality of data bytes. In turn, the PCIe packet 520, which is a PCIe Type 1 message-with-data packet, includes a header portion 522 and a data portion 524. Generally, part of the header portion 512 of the eSPI packet 510 may be incorporated into the header portion 522 of the PCIe packet 520. The address portion of the header portion 512 and the data portion 514 may be merged into the data portion 524 of the PCIe packet 520. In some embodiments, all eSPI packet formats defined in the eSPI specification, except for the short packet format, can be enabled for tunneling as described herein. In an embodiment, such an eSPI packet is encapsulated in a PCIe Type 1 message-with-data format.

As shown in FIG. 5, byte 0, bit 7 of the fourth double word (Dword) of the PCIe packet header 522 is set to "1" to serve as a tunnel indicator. The cycle type field of the header 512 of the eSPI packet 510 may be translated for incorporation into the cycle type Xlat field of the header 522 of the PCIe packet 520. Furthermore, the tag and length fields of the header 512 may be incorporated into the corresponding fields of the header 522. The eSPI specification defines multiple channels for an eSPI interconnect, namely flash, peripheral, out-of-band message, and virtual wire channels, all of which can be supported via the PCIe encapsulation described herein.

Note that the PCIe interfaces in a node usually receive programming (for example, by a basic input/output system (BIOS)) after reset to configure a port with an appropriate width and speed so that the link can train and operate normally. Using an embodiment, although the flash device containing the BIOS code is placed behind a PCIe interface, an initial configuration can be implemented to enable functional operation before such link training occurs. The component reset sequence may enable the designated PCIe interface (for example, the root port 330 of FIG. 3 and/or a PCIe endpoint 450 of FIG. 4) to exit the reset state early in the reset sequence. The designated PCIe interface can then be given a default configuration (e.g., single port, link width) based on the device to which it is connected. As an example, the link can be configured as x4 or x1 width at PCIe Gen 1 speed (a link width of 4 or 1, and an operating frequency of 2.5 GHz).
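Returning briefly to the FIG. 5 mapping, a hedged sketch of the encapsulation step may make it concrete. The structure layouts and field names below are illustrative assumptions based on the description above, not the actual packet definitions of either the eSPI or PCIe specification:

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative layouts only - assumptions for this sketch.
struct EspiPacket {
  uint8_t cycleType;             // eSPI cycle type
  uint8_t tag;                   // transaction tag
  uint16_t length;               // payload length in bytes
  uint32_t address;              // address portion of the eSPI header
  std::vector<uint8_t> data;     // data bytes
};

struct PcieType1Msg {
  uint8_t dword3Byte0;           // fourth Dword, byte 0: bit 7 = tunnel indicator
  uint8_t cycleTypeXlat;         // translated eSPI cycle type
  uint8_t tag;
  uint16_t length;
  std::vector<uint8_t> data;     // carries the eSPI address plus data bytes
};

// Encapsulate an eSPI packet into a PCIe Type 1 message-with-data packet,
// per the FIG. 5 mapping: set the tunnel indicator bit, translate the cycle
// type, copy tag/length, and fold the eSPI address and data bytes into the
// PCIe data portion.
PcieType1Msg encapsulateEspi(const EspiPacket &e) {
  PcieType1Msg p;
  p.dword3Byte0 = 0x80;                    // bit 7 = 1: tunneled eSPI packet
  p.cycleTypeXlat = e.cycleType;           // translation assumed 1:1 for this sketch
  p.tag = e.tag;
  p.length = e.length;
  p.data.resize(4 + e.data.size());
  std::memcpy(&p.data[0], &e.address, 4);  // address merged into data portion
  std::copy(e.data.begin(), e.data.end(), p.data.begin() + 4);
  return p;
}

Decapsulation at the receiving eSPI-Express interface would simply reverse these steps after checking the tunnel indicator bit.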
After initial operation at the basic link width and speed, the BIOS or other boot code (stored, for example, in the eSPI device) can operate to reinitialize the designated PCIe interface to a higher level of functionality.

Using a power management handshake technique, any device seeking access to an eSPI device can wake the designated PCIe interface of its own device to allow the access. In addition, embodiments can handle error conditions in eSPI devices by encapsulating such errors into PCIe packets. Conversely, registers or other storage may be defined to log tunneled eSPI packet errors. Selected errors can be mapped to equivalent eSPI errors. As an example, a PCIe link failure error can be aliased as an eSPI link error (the master-abort equivalent).

Referring now to FIG. 6, there is shown a flowchart of a method according to an embodiment of the present invention. As shown in FIG. 6, the method 600 may be performed, as an example, by various logic of a multi-node system during initial boot and configuration operations. More specifically, the method 600 may be executed by logic of an SoC during reset operations of a reset sequence, as the SoC is powered on upon system reset, to discover its internal circuits and external interconnects. As seen, the method 600 begins by discovering the interfaces of a device, which may be an SoC (block 610). Next, it is determined whether any interface is designated for tunneling operation (diamond 615), which can be indicated via internal fuse settings or via external straps. If this is not the case, control passes to block 620, where the reset sequence can be completed and the device exits reset with a negotiated configuration. For example, various interfaces of the device can be configured with their given runtime configuration of width, link speed, and so forth. Next, at block 625, communication with a local device can be performed according to the first communication protocol. For example, in this instance the device can communicate with a local flash memory to obtain boot code. Accordingly, at block 630, the boot code is received in the device, stored in a local memory for lower latency access, and executed.

In accordance with the example here, assume instead that a PCIe interface is designated to tunnel eSPI information in a system in which a given SoC communicates with one or more eSPI devices that are not locally connected to the SoC. In this case, at diamond 615 at least one interface is designated for tunneling operation, and control therefore passes to block 640, where the designated interface may be allowed to exit the reset sequence early. More specifically, this early exit is with a default configuration, such as a basic configuration for the interface, so that communication via the connected interconnect can occur at a low link width and speed.

Next, at block 650, requests of the first communication protocol may be tunneled via the designated interface of the second communication protocol. More specifically, the requests may be for access to a shared resource, such as an externally connected flash memory to which the SoC shares access. In response to the tunneled request, boot code can be received from the shared device at block 660. The boot code is decapsulated, and can be stored and executed at block 670.
Thereafter, at block 680, the designated interface may be reinitialized to a selected configuration, which may enable the interface to operate at a higher link speed and/or width. It should be understood that although shown at this high level in the embodiment of FIG. 6, many variations and alternatives are possible.

Embodiments can be used in various system topologies, including rack systems that share I/O resources among multiple nodes, providing total cost of ownership (TCO) advantages. Using an embodiment, direct routing of eSPI signals between remote shared nodes can be avoided. In addition, with the reduction in the number of eSPI devices, lower package pin impact and lower power consumption are realized. Embodiments can also facilitate PCIe-based topologies and dense form factor placements.

Turning next to FIG. 7, an embodiment of a system-on-chip (SOC) design in accordance with an embodiment is depicted. As a specific illustrative example, the SOC 2000 can be configured for insertion into any type of computing device, from portable devices to server systems. Here, SOC 2000 includes two cores, 2006 and 2007. Similar to the discussion above, the cores 2006 and 2007 may conform to an instruction set architecture, such as Architecture Core™-based processors, Advanced Micro Devices, Inc. (AMD) processors, MIPS-based processors, ARM-based processor designs, or customers thereof, as well as their licensees or adopters. The cores 2006 and 2007 are coupled to cache control 2008, which is associated with bus interface unit 2009 and L2 cache 2010, to communicate with other parts of the system 2000. The interconnect 2010 includes an on-chip interconnect and can implement eSPI-PCIe tunneling as described herein.

The interconnect 2010 provides communication channels to other components, such as: a Subscriber Identity Module (SIM) 2030 to interface with a SIM card; a boot ROM 2035 to hold boot code for execution by the cores 2006 and 2007 to initialize and boot the SOC 2000; an SDRAM controller 2040 to interface with external memory (for example, DRAM 2060); a controller to interface with non-volatile memory (for example, flash memory 2065); a peripheral controller 2050 to interface with peripheral devices (for example, via an eSPI interface); video codec 2020 and video interface 2025 to display and receive input (for example, touch-enabled input); a GPU 2015 to perform graphics-related computations; and so on. Any of these interfaces can incorporate aspects described herein.

In addition, the system illustrates peripheral devices for communication, such as a Bluetooth module 2070, a 3G modem 2075, a GPS 2080, and WiFi 2085. The system also includes a power controller 2055.

Referring now to FIG. 8, there is shown a block diagram of a system according to an embodiment of the present invention. As shown in FIG. 8, the multiprocessor system 1500 includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. As shown in FIG. 8, each of the processors 1570 and 1580 may be a multi-core processor including representative first and second processor cores (i.e., processor cores 1574a and 1574b and processor cores 1584a and 1584b). Each processor 1570 and 1580 may also include eSPI interface circuitry as described herein to reduce the number of components in the system.

Still referring to FIG. 8, the first processor 1570 further includes a memory controller hub (MCH) 1572 and point-to-point (P-P) interfaces 1576 and 1578.
Similarly, the second processor 1580 includes an MCH 1582 and P-P interfaces 1586 and 1588. As shown in FIG. 8, the MCHs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of system memory (for example, DRAM) locally attached to the respective processors. The first processor 1570 and the second processor 1580 may be coupled to a chipset 1590 via P-P interconnects 1562 and 1564, respectively. As shown in FIG. 8, the chipset 1590 includes P-P interfaces 1594 and 1598.

In addition, the chipset 1590 includes an interface 1592 that couples the chipset 1590 with a high-performance graphics engine 1538 by a P-P interconnect 1539. In turn, the chipset 1590 may be coupled to a first bus 1516 via an interface 1596. As shown in FIG. 8, various input/output (I/O) devices 1514 may be coupled to the first bus 1516, along with a bus bridge 1518 that couples the first bus 1516 to a second bus 1520. Various devices may be coupled to the second bus 1520 including, in one embodiment, for example, a keyboard/mouse 1522, communication devices 1526, and a data storage unit 1528 such as a disk drive or other mass storage device that may include code 1530. Further, an audio I/O 1524 may be coupled to the second bus 1520.

In one example, an SoC includes: at least one core to independently execute instructions; a first host device to receive information from the at least one core and to include the information in one or more first packets of a first communication protocol; selection logic coupled to the first host device to receive the one or more first packets and to provide the one or more first packets to conversion logic or to a first interface to communicate with a first device via a first interconnect of the first communication protocol; the conversion logic to receive the one or more first packets under selection of the selection logic and to encapsulate the one or more first packets into one or more second packets of a second communication protocol; and a second interface coupled to the conversion logic to receive the one or more second packets and to transfer the one or more second packets to a second device via a second interconnect of the second communication protocol.

In an example, the first communication protocol includes the eSPI protocol, and the second communication protocol includes the PCIe protocol.

In an example, the second device includes a second SoC or a central component coupled between the SoC and a shared resource, the shared resource communicating according to the first communication protocol.

In an example, the shared resource includes a non-volatile storage device to store a BIOS, where the SoC receives boot code of the BIOS from the non-volatile storage device via the second interconnect during initialization.

In an example, the second interconnect is to operate with a default configuration before execution of the boot code.

In an example, the first interface is disabled when the first interface is not coupled to an external device.

In an example, the SoC is incorporated into a multi-node system including a plurality of SoCs, where a first SoC of the plurality of SoCs includes a first interface to communicate according to the first communication protocol, the first SoC is directly coupled to a first shared resource via an interconnect of the first communication protocol, and other SoCs of the plurality of SoCs include first interfaces, to communicate according to the first communication protocol, that are unconnected.

In an example, the first host device is coupled to a first bus and is enumerated with a first device identifier.

In an example, the second interface is coupled to the first bus and is enumerated with a second device identifier.

In an example, the selection logic receives one or more third packets of the first communication protocol from the conversion logic and sends the one or more third packets to the first host device.

In an example, the conversion logic receives one or more fourth packets of the second communication protocol from the second interface and decapsulates one or more third packets of the first communication protocol from the one or more fourth packets of the second communication protocol.

In an example, the conversion logic includes a tunnel indicator in a header of the one or more second packets of the second communication protocol to indicate the presence of one or more encapsulated first packets of the first communication protocol.

Note that the above SoC can be implemented using various means.

In an example, the SoC can be incorporated into a touch-enabled device of a user equipment.

In another example, a system includes a display and a memory, and includes the processor of one or more of the above examples.

In another example, a method includes: receiving, in selection logic of an integrated circuit, a first packet from a host device of the integrated circuit, the host device communicating according to a first communication protocol; when the integrated circuit is adapted in a system having a first device coupled to a first interface of the integrated circuit via a first interconnect conforming to the first communication protocol, selectively providing the first packet to the first interface, and otherwise selectively providing the first packet to first logic of the integrated circuit; when the first packet is provided to the first logic, encapsulating the first packet, in the first logic, into a second packet conforming to a second communication protocol; and transferring the second packet to a second interface of the integrated circuit, the second interface transferring the second packet, via a second interconnect coupled to the integrated circuit, to a second device conforming to the first communication protocol, the second interconnect conforming to the second communication protocol.

In an example, the method further includes disabling the first interface when the integrated circuit is implemented in a system that does not have a first device coupled to the first interface.

In an example, encapsulating the first packet into the second packet includes: setting a tunnel indicator of a header of the second packet; merging a cycle type of a header of the first packet into a cycle type field of the header of the second packet; and placing data of the first packet into a data portion of the second packet.

In an example, the method further includes: initializing the second interconnect to a default configuration after reset; receiving boot code from the second device via the second interconnect; and, in response to execution of the boot code, reinitializing the second interconnect to a second configuration.

In another example, a
In another example, a computer-readable medium including instructions is to perform the method of any of the above examples. In another example, a computer-readable medium including data is to be used by at least one machine to fabricate at least one integrated circuit to perform the method of any of the above examples. In another example, an apparatus includes means for performing the method of any one of the above examples. In another example, a system includes: a plurality of nodes, each node including a processor, the plurality of nodes communicating with each other via a second communication protocol; and a device shared by at least some of the plurality of nodes, wherein the device communicates according to a first communication protocol. When the device is locally coupled to a first node of the plurality of nodes, the first node is adapted to route a first packet of the first communication protocol received from the device to a second node of the plurality of nodes via a second packet of the second communication protocol; and when the device is not locally coupled to the first node, the first node is adapted to receive a third packet of the second communication protocol and to decapsulate from it a second packet of the first communication protocol sent from the device to the first node. In an example, the system further includes a fabric that couples the plurality of nodes via a first set of interconnects of the second communication protocol, wherein the first node and the second node are coupled to the fabric. In an example, the system further includes a central controller coupled to the first node and the second node, wherein the device is locally coupled to the central controller. In an example, the central controller is adapted to provide the plurality of nodes with shared access to the device, the central controller including conversion logic to receive communications from the plurality of nodes according to the second communication protocol, decapsulate communications of the first communication protocol from them, and provide the decapsulated communications to the device. Embodiments can be used in many different types of systems. For example, in one embodiment, a communication device may be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to communication devices; other embodiments may be directed to other types of apparatus for processing instructions, or to one or more machine-readable media including instructions that, in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein. Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions that can be used to program a system to perform the instructions. Embodiments may also be implemented as data stored on a non-transitory storage medium, which, if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations.
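As a hedged illustration of the packet tunneling described in the examples above, the following C sketch shows one way a first-protocol packet might be wrapped in, and recovered from, a second-protocol packet. The structure layouts, field names (tunnel_indicator, cycle_type), and sizes are assumptions invented for this sketch; they are not the actual eSPI or PCIe wire formats.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative first-protocol (eSPI-like) packet; fields are assumed. */
typedef struct {
    uint8_t  cycle_type;       /* kind of transaction carried */
    uint16_t length;           /* payload bytes */
    uint8_t  data[64];
} proto1_packet;

/* Illustrative second-protocol (PCIe-like) packet; fields are assumed. */
typedef struct {
    uint8_t  tunnel_indicator; /* set when a proto1 packet is inside */
    uint8_t  cycle_type;       /* merged from the inner packet's header */
    uint16_t length;
    uint8_t  payload[64];
} proto2_packet;

/* Encapsulate: set the tunnel indicator, merge the cycle type into the
 * outer header, and place the inner data into the outer data portion. */
static void encapsulate(const proto1_packet *in, proto2_packet *out)
{
    out->tunnel_indicator = 1;
    out->cycle_type = in->cycle_type;
    out->length = in->length;
    memcpy(out->payload, in->data, in->length);
}

/* Decapsulate: recover the inner packet when the tunnel indicator is set.
 * Returns 0 on success, -1 if the packet carries no tunneled payload. */
static int decapsulate(const proto2_packet *in, proto1_packet *out)
{
    if (!in->tunnel_indicator)
        return -1;
    out->cycle_type = in->cycle_type;
    out->length = in->length;
    memcpy(out->data, in->payload, in->length);
    return 0;
}
```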
The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, and electrically erasable programmable read-only memories (EEPROMs); magnetic or optical cards; or any other type of media suitable for storing electronic instructions. While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present invention. |
A memory hub for a memory module has a DMA engine for performing DMA operations in system memory. The memory hub includes a link interface for receiving memory requests for access to at least one of the memory devices of the system memory (240), and further includes a memory device interface for coupling to the memory devices, the memory device interface coupling memory requests to the memory devices for access to at least one of the memory devices. A switch for selectively coupling the link interface and the memory device interface is further included in the memory hub. Additionally, a direct memory access (DMA) engine is coupled through the switch to the memory device interface to generate memory requests for access to at least one of the memory devices to perform DMA operations. |
1. A storage module, comprising: a plurality of storage devices; and a storage hub, including: a link interface configured to receive a memory request for access to at least one of the storage devices; a storage device interface connected to the storage devices, the storage device interface coupling a memory request to the storage devices to access at least one of the storage devices; a switch for selectively connecting the link interface and the storage device interface; and a direct memory access (DMA) engine connected to the storage device interface through the switch, the DMA engine generating a memory request for accessing at least one of the storage devices for a DMA operation. 2. The storage module according to claim 1, wherein the storage hub is an embedded system having the link interface, the storage device interface, the switch, and the DMA engine residing in a single device. 3. The storage module of claim 1, wherein the storage device interface comprises: a memory controller connected to the switch through a memory controller bus and further connected to the storage devices through a storage device bus; a write buffer connected to the memory controller for storing memory requests directed to at least one of the storage devices to which the memory controller is connected; and a cache memory connected to the memory controller and configured to store data provided to or retrieved from the storage devices. 4. The storage module of claim 1, wherein the switch comprises a crossbar switch. 5. The storage module of claim 1, wherein the plurality of storage devices are a group of storage devices that are accessed simultaneously during a memory operation. 6. The storage module of claim 1, wherein the plurality of storage devices include synchronous dynamic random access memory. 7. The storage module of claim 1, wherein the DMA engine comprises: an address register for storing a start memory address of a DMA operation; a target address unit for storing a target address of a storage unit to which data is to be moved during the DMA operation; a count register for storing a count value indicating the number of memory units to be accessed in the DMA operation; and a next register for storing a value representing completion of the DMA operation or a value representing a memory address corresponding to a linked list, the linked list including a start memory address, a count value, and a next memory address to be loaded into the address register, the count register, and the next register. 8. A storage hub for a storage module having a plurality of storage devices, comprising: a link interface configured to receive a memory request to access at least one of the storage devices; a storage device interface for connecting to the storage devices, the storage device interface coupling a memory request to the storage devices to access at least one of the storage devices; a switch for selectively connecting the link interface and the storage device interface; and a direct memory access (DMA) engine connected to the storage device interface through the switch, the DMA engine generating a memory request for accessing at least one of the storage devices for a DMA operation. 9. The storage hub of claim 8, wherein the link interface, the storage device interface, the switch, and the DMA engine are an embedded system residing in a single device. 10. The storage hub of claim 8, wherein the storage device interface comprises: a memory controller connected to the switch through a memory controller bus and further connected to the storage devices through a storage device bus; a write buffer connected to the memory controller for storing memory requests directed to at least one of the storage devices to which the memory controller is connected; and a cache memory connected to the memory controller and configured to store data provided to or retrieved from the storage devices. 11. The storage hub of claim 8, wherein the switch comprises a crossbar switch. 12. The storage hub of claim 8, wherein the DMA engine comprises: an address register for storing a start memory address of a DMA operation; a target address unit for storing a target address of a storage unit to which data is to be moved during the DMA operation; a count register for storing a count value indicating the number of memory units to be accessed in the DMA operation; and a next register for storing a value representing completion of the DMA operation or a value representing a memory address corresponding to a linked list, the linked list including a start memory address, a count value, and a next memory address to be loaded into the address register, the count register, and the next register. 13. A storage system, comprising: a memory bus on which memory requests are provided; and at least one storage module connected to the memory bus, the storage module having a plurality of storage devices and a storage hub, the storage hub comprising: a link interface configured to receive a memory request for accessing at least one of the storage devices of the storage module in which the link interface is located; a storage device interface connected to the storage devices, the storage device interface sending a memory request to the storage devices to access at least one of the storage devices; a switch for selectively connecting the link interface and the storage device interface; and a direct memory access (DMA) engine connected to the storage device interface and the link interface through the switch, the DMA engine generating a memory request for accessing at least one of the storage devices to perform a DMA operation. 14. The storage system of claim 13, wherein the storage hub is an embedded system having the link interface, the storage device interface, the switch, and the DMA engine residing in a single device. 15. The storage system of claim 13, wherein the memory bus comprises a high-speed memory bus. 16. The storage system of claim 13, wherein the memory bus includes a high-speed optical memory bus, and wherein the link interface includes an optical memory bus interface circuit for converting between optical signals and electrical signals. 17. The storage system of claim 13, wherein the storage system includes a plurality of storage modules, a first storage module of the plurality of storage modules being connected to the memory bus, and the remaining storage modules of the plurality of storage modules being connected in series with the first storage module. 18. The storage system of claim 13, wherein a plurality of storage modules are included in the storage system, and each of the plurality of storage modules is directly connected to the memory bus through a respective link interface. 19. The storage system of claim 13, wherein the storage device interface of the storage hub comprises: a memory controller connected to the switch through a memory controller bus and further connected to the storage devices through a storage device bus; a write buffer connected to the memory controller for storing memory requests directed to at least one of the storage devices to which the memory controller is connected; and a cache memory connected to the memory controller and configured to store data provided to or retrieved from the storage devices. 20. The storage system of claim 13, wherein the switch of the storage hub comprises a crossbar switch. 21. The storage system of claim 13, wherein the plurality of storage devices of the storage module represent a group of storage devices that are accessed simultaneously during a memory operation. 22. The storage system of claim 13, wherein the plurality of storage devices of the storage module include a synchronous dynamic random access storage device. 23. The storage system of claim 13, wherein the DMA engine of the storage hub comprises: an address register for storing a start memory address of a memory unit at which a DMA operation starts in the storage system; a target address unit for storing a target address of a storage unit in the storage system to which data is to be moved during the DMA operation; a count register for storing a count value indicating the number of memory units to be accessed in the DMA operation; and a next register for storing a value representing completion of the DMA operation or a value representing a memory address corresponding to a linked list, the linked list including a start memory address, a count value, and a next memory address to be loaded into the address register, the count register, and the next register. 24. A computer system, comprising: a central processing unit ("CPU"); a system controller connected to the CPU, the system controller having an input port and an output port; an input device connected to the CPU through the system controller; an output device connected to the CPU through the system controller; a storage device connected to the CPU through the system controller; at least one storage module, the storage module comprising: a plurality of storage devices; and a storage hub, including: a link interface configured to receive a memory request for accessing at least one of the storage devices of the storage module in which the link interface is located; a storage device interface connected to the storage devices, the storage device interface coupling a memory request to the storage devices to access at least one of the storage devices; a switch for selectively connecting the link interface and the storage device interface; and a direct memory access (DMA) engine connected to the storage device interface and the link interface through the switch, the DMA engine generating a memory request for accessing at least one of the storage devices of the plurality of storage modules for DMA operations; and a communication link connected between the system controller and at least one of the plurality of storage modules and configured to couple memory requests and data between the system controller and the storage module. 25. The computer system of claim 24, wherein the communication link comprises a high-speed memory bus. 26. The computer system of claim 24, wherein the storage hub is an embedded system having the link interface, the storage device interface, the switch, and the DMA engine residing in a single device. 27. The computer system of claim 24, wherein the communication link includes a high-speed optical memory bus, and wherein the link interface of the storage hub includes an optical memory bus interface circuit for converting between optical signals and electrical signals. 28. The computer system of claim 24, wherein the computer system includes a plurality of storage modules, a first storage module of the plurality of storage modules being connected to the communication link, and the remaining storage modules of the plurality of storage modules being connected in series with the first storage module. 29. The computer system of claim 24, wherein the computer system includes a plurality of storage modules, and each of the plurality of storage modules is directly connected to the memory bus through a respective link interface. 30. The computer system of claim 24, wherein the storage device interface of the storage hub comprises: a memory controller connected to the switch through a memory controller bus and further connected to the storage devices through a storage device bus; a write buffer connected to the memory controller for storing memory requests directed to at least one of the storage devices to which the memory controller is connected; and a cache memory connected to the memory controller and configured to store data provided to or retrieved from the storage devices. 31. The computer system of claim 24, wherein the switch of the storage hub comprises a crossbar switch. 32. The computer system of claim 24, wherein the plurality of storage devices of the storage module represent a group of storage devices that are accessed simultaneously during a memory operation. 33. The computer system of claim 24, wherein the plurality of storage devices of the storage module include a synchronous dynamic random access storage device. 34. The computer system of claim 24, wherein the DMA engine of the storage hub comprises: an address register for storing a start memory address of a memory unit at which a DMA operation starts in the storage system; a target address unit for storing a target address of a storage unit in the storage system to which data is to be moved during the DMA operation; a count register for storing a count value indicating the number of memory units to be accessed in the DMA operation; and a next register for storing a value representing completion of the DMA operation or a value representing a memory address corresponding to a linked list, the linked list including a start memory address, a count value, and a next memory address to be loaded into the address register, the count register, and the next register. 35. A method of performing a memory operation in a computer system having a processor, a system controller connected to the processor, and a system memory having at least one storage module connected to the system controller through a memory bus, the method comprising: writing direct memory access (DMA) information to a unit in the system memory, the DMA information representing instructions for performing a memory operation in the system memory without processor intervention; obtaining control of the memory bus from the processor and the system controller; accessing the unit in the system memory to which the DMA information was written; and executing the memory operation represented by the instructions. 36. The method of claim 35, further comprising isolating the system memory during execution of the memory operation. 37. The method of claim 35, wherein writing the DMA information comprises: writing a start memory address of a storage unit in the system memory at which the memory operation starts; writing a target address of a storage unit in the system memory to which data will be moved during the memory operation; writing a count value indicating the number of memory units to be accessed in the memory operation; and writing a next memory address value representing completion of the memory operation or a memory address corresponding to a linked list, the linked list containing a start memory address, a count value, and a next memory address value. 38. The method of claim 35, wherein the system memory includes a plurality of storage modules, and wherein performing the memory operation includes accessing a storage unit in a first storage module of the plurality of storage modules to read data therefrom, and accessing a storage unit in a second storage module of the plurality of storage modules to write the data. 39. A method for transferring data in a system memory included in a computer system, the computer system having a processor, a system controller connected to the processor, and a memory bus connecting the system controller to the system memory, the method comprising: writing a DMA instruction to a storage unit in the system memory, the DMA instruction representing an instruction for performing a memory operation to transfer data and including memory addresses corresponding to first and second storage units in the system memory; gaining control of the memory bus; and, without intervention of the processor, accessing the storage unit in the system memory to which the DMA instruction was written, reading data from the first storage unit in the system memory, and accessing the second storage unit in the system memory to write the data. 40. The method of claim 39, wherein gaining control of the memory bus includes isolating the system memory from the processor and the system controller while transferring data within the system memory. 41. The method of claim 39, wherein writing the DMA instruction comprises: writing a start memory address of a storage unit in the system memory at which the data transfer starts; writing a target address of a storage unit in the system memory to which the data will be transferred; writing a count value indicating the number of memory units to be accessed during the transfer of the data; and writing a next memory address value representing completion of the data transfer or a memory address corresponding to a linked list, the linked list including a start memory address, a count value, and a next memory address value. 42. The method of claim 39, wherein the system memory includes a plurality of storage modules, and wherein reading data from the first storage unit in the system memory includes accessing a storage unit in a first module of the plurality of storage modules to read data therefrom, and writing the data to the second storage unit in the system memory includes accessing a storage unit in a second module of the plurality of storage modules to write the data. |
Device and method for direct memory access in a hub-based storage system

Technical field

The present invention relates to computer systems, and more particularly to a computer system including a system memory having a storage hub architecture.

Background

Computer systems use memory devices, such as dynamic random access memory ("DRAM") devices, to store data that are accessed by a processor. These memory devices are typically used as system memory in a computer system. In a typical computer system, the processor communicates with the system memory through a processor bus and a memory controller. The processor issues a memory request, which includes a memory command, such as a read command, and an address designating the location from which data or instructions are to be read. The memory controller uses the command and address to generate appropriate command signals as well as row and column addresses, which are applied to the system memory. In response to the commands and addresses, data are transferred between the system memory and the processor. The memory controller is often part of a system controller, which also includes bus bridge circuitry for coupling the processor bus to an expansion bus, such as a PCI bus.

Although the operating speed of memory devices has continuously increased, this increase in operating speed has not kept pace with increases in the operating speed of processors. The memory controllers coupling processors to memory devices have increased in speed even more slowly. The relatively slow speed of memory controllers and memory devices limits the data bandwidth between the processor and the memory devices.

In addition to the limited bandwidth between processors and memory devices, the performance of computer systems is also limited by latency problems that increase the time required to read data from system memory devices. More specifically, when a memory device read command is coupled to a system memory device, such as a synchronous DRAM ("SDRAM") device, the read data are output from the SDRAM device only after a delay of several clock periods. Therefore, although SDRAM devices can synchronously output burst data at a high data rate, the delay in initially providing the data can significantly slow the operating speed of a computer system using such SDRAM devices.

One approach to alleviating the memory latency problem is to use multiple memory devices coupled to the processor through a storage hub. In a storage hub architecture, a system controller or memory controller is coupled to several memory modules through a high-speed data link. Typically, the memory modules are coupled in a point-to-point or daisy-chain architecture such that the memory modules are connected one to another in series: the memory controller is coupled to a first memory module through a first high-speed data link, the first memory module is coupled to a second memory module through a second high-speed data link, the second memory module is coupled to a third memory module through a third high-speed data link, and so on in a daisy-chain fashion.

Each memory module includes a storage hub that is coupled to the corresponding high-speed data links and to a number of memory devices on the module, with the storage hub efficiently routing memory requests and responses between the controller and the memory devices over the high-speed data links.
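To make the routing just described concrete, here is a minimal C sketch of a daisy chain. It is a software toy under assumed names and an assumed address-range scheme, not the hub hardware itself: each hub either services a request that falls within its own address range or passes the request to the next module downstream.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified model of a daisy-chained memory module: each hub owns a
 * contiguous address range (an assumption of this sketch) and forwards
 * requests it does not own to its downstream neighbor. */
struct memory_module {
    uint64_t base;              /* first address owned by this module */
    uint64_t size;              /* bytes of memory behind this hub    */
    struct memory_module *next; /* downstream neighbor, NULL at end   */
};

/* Walk the chain from the controller toward the last module, as a
 * request on the high-speed link would, returning the owning hub. */
static struct memory_module *route_request(struct memory_module *first,
                                           uint64_t addr)
{
    for (struct memory_module *m = first; m != NULL; m = m->next) {
        if (addr >= m->base && addr < m->base + m->size)
            return m;           /* request serviced by this hub */
    }
    return NULL;                /* address not mapped on this chain */
}
```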
A computer system employing this architecture can have a higher bandwidth because the processor can access one memory device while another memory device is responding to a prior memory access. For example, the processor can output write data to one of the memory devices in the system while another memory device in the system is preparing to provide read data to the processor. In addition, this architecture makes it easy to expand the system memory without concern for signal-quality degradation as more memory modules are added, such as occurs in a traditional multi-drop bus architecture.

Although computer systems using storage hubs can provide excellent performance, they often fail to operate at optimal speed for a variety of reasons. For example, even though storage hubs can provide computer systems with a greater memory bandwidth, they still suffer from latency problems of the type described above. More specifically, although a processor can communicate with one memory device while another memory device is preparing to transfer data, it is sometimes necessary to receive data from one memory device before data received from another memory device can be used. In the event data must be received from one memory device before data received from another can be used, the intervention of the processor continues to slow the operating speed of such computer systems. Another reason such computer systems fail to operate at optimal speed is that conventional storage hubs are essentially single-channel systems, since all control, address, and data signals must pass through common storage hub circuitry. As a result, when the storage hub circuitry is busy communicating with one memory device, it is not free to communicate with another memory device.

One technique that has been used in computer systems to overcome both the problems caused by processor intervention in moving data into and out of memory and the single-channel bottleneck is the use of direct memory access (DMA) operations. DMA operations are implemented with a DMA controller included in the computer system, which enables data to be moved into and out of memory without the intervention of the system processor. Such DMA operations and DMA controllers are well known in the art and are often used in conventional computer systems. The DMA controller relieves the processor of managing the required data transfers to and from the system memory. For example, when a DMA-supported entity transfers data to the system memory, the DMA controller gains control of the bus and coordinates the transfer of data from the DMA-supported entity to the system memory without intervention by the processor. In this manner, the delay problems caused by processor intervention in data transfers over the system bus can be avoided. However, in many instances, even after data have been transferred to the system memory through a DMA operation, the processor must nevertheless move blocks of data from one location in the system memory to another. For example, the operating system may direct a DMA operation to transfer data from a mass storage device into the system memory, only to have the processor then move the data to another location in the memory before the data can be used.
As a result, the value of the DMA operation is somewhat diminished: although the DMA operation is used to move the data into and out of the system memory, the processor is ultimately used to move the data within the memory. Therefore, there is a need for a computer architecture that provides the advantages of a storage hub architecture while also minimizing the latency problems common in such systems.

Summary of the invention

The present invention is directed to a storage hub for a storage module, the storage hub having a DMA engine for performing DMA operations in a system memory. The storage hub includes a link interface for receiving memory requests for access to at least one of the storage devices of the system memory, and a storage device interface for coupling to the storage devices, the storage device interface coupling memory requests to the storage devices for access to at least one of the storage devices. The storage hub further includes a switch for selectively coupling the link interface and the storage device interface. Additionally, a direct memory access (DMA) engine is coupled through the switch to the storage device interface to generate memory requests for access to at least one of the storage devices, in order to perform DMA operations.

In one aspect of the present invention, a method is provided for performing a memory operation in a computer system having a processor, a system controller coupled to the processor, and a system memory having at least one storage module. The method includes writing, to a location in the system memory, DMA information representing instructions for performing a memory operation in the system memory without processor intervention, obtaining control of the memory bus from the processor and the system controller, accessing the location in the system memory to which the DMA information was written, and performing the memory operation represented by the instructions.

Brief description of the drawings

FIG. 1 is a block diagram of a computer system according to an example of the present invention, in which each of a plurality of storage modules includes a storage hub; FIG. 2 is a block diagram of a storage hub used in the computer system of FIG. 1; FIG. 3 is a block diagram of a portion of a DMA engine of the storage hub of FIG. 2 according to an embodiment of the present invention; FIG. 4 is a block diagram of a linked command list structure used by the DMA engine of FIG. 3 according to an embodiment of the present invention; and FIG. 5 is a flowchart of the operation of the DMA engine of FIG. 3 according to an embodiment of the present invention.

Detailed description

Embodiments of the present invention are directed to a system memory having a storage hub architecture with direct memory access (DMA) capability, which can transfer data within the system memory without intervention by the system processor. Certain details are set forth below to provide a sufficient understanding of the present invention. However, those skilled in the art will appreciate that the invention may be practiced without these particular details. In other instances, well-known circuits, control signals, and timing protocols have not been shown in detail in order to avoid unnecessarily obscuring the invention.

FIG. 1 illustrates a computer system 100 according to an example of the present invention. The computer system 100 includes a processor 104 for performing various computing functions, such as executing specific software to perform specific calculations or tasks. The processor 104 includes a processor bus 106 that generally includes an address bus, a control bus, and a data bus.
The processor bus 106 is typically connected to a cache memory 108, which, as noted above, is usually a static random access memory ("SRAM"). Finally, the processor bus 106 is connected to a system controller 110, which is also sometimes referred to as a "North Bridge" or "memory controller."

The system controller 110 serves as a communication path to the processor 104 for a variety of other components. More specifically, the system controller 110 includes a graphics port that is typically coupled to a graphics controller 112, which in turn is coupled to a video terminal 114. The system controller 110 is also coupled to one or more input devices 118, such as a keyboard or mouse, to allow an operator to interface with the computer system 100. Typically, the computer system 100 also includes one or more output devices 120, such as a printer, coupled to the processor 104 through the system controller 110. One or more data storage devices 124 are also typically coupled to the processor 104 through the system controller 110 to allow the processor 104 to store data on, or retrieve data from, internal or external storage media (not shown). Examples of typical storage devices 124 include hard disks, floppy disks, cassette tapes, and compact disk read-only memories (CD-ROMs).

The system controller 110 includes a storage hub controller 128 coupled to several storage modules 130a, 130b, ..., 130n, which serve as the system memory of the computer system 100. The storage modules 130 are preferably coupled to the storage hub controller 128 through a high-speed link 134, which may be an optical or electrical communication channel or some other type of communication channel. Where the high-speed link 134 is implemented as an optical communication channel, the channel may take the form of, for example, one or more optical fibers. In such a case, the storage hub controller 128 and the storage modules will include optical input/output ports, or separate input and output ports, coupled to the optical communication channel.

The illustrated storage modules 130 are coupled to the storage hub controller 128 in a point-to-point arrangement in which the high-speed link 134 is formed by coupling the storage hubs 140 of the storage modules 130 together. That is, the high-speed link 134 is a bidirectional bus passing through the storage hubs 140 in series. Thus, information on the high-speed link must pass through the storage hubs 140 of "upstream" storage modules 130 to reach a "downstream" destination. For example, referring specifically to FIG. 1, information sent from the storage hub controller 128 to the storage hub 140 of the storage module 130c will pass through the storage hubs 140 of the storage modules 130a and 130b. However, it should be understood that other topologies are also possible, such as a coupling arrangement in which each of the storage modules 130 is connected to the storage hub controller 128 through its own high-speed link. A switching topology may also be used, in which the storage hub controller 128 is selectively coupled to each of the storage modules 130 through a switch (not shown). Other topologies that may be used will be apparent to those skilled in the art.

As also shown in FIG. 1, the storage hubs are coupled to four sets of storage devices 148 through respective bus systems 150.
Each set includes four storage devices 148, for a total of 16 storage devices 148 on each storage module 130. As is known in the art, each bus system 150 generally includes a control bus, an address bus, and a data bus. However, one of ordinary skill in the art will understand that other bus systems, such as a bus system using a shared command/address bus, may be used without departing from the scope of the present invention. It should also be understood that the arrangement of the storage devices 148, and the number of storage devices 148, may be modified without departing from the scope of the present invention. In the example shown in FIG. 1, the storage devices 148 are synchronous dynamic random access memory ("SDRAM") devices. However, memory devices other than SDRAM devices may, of course, also be used.

FIG. 2 illustrates a storage hub 200 according to an embodiment of the present invention, which may be used in place of the storage hub 140 of FIG. 1. The storage hub 200 shown in FIG. 2 is coupled to four storage devices 240a-d, which in this example are conventional SDRAM devices. In an alternative embodiment, the storage hub 200 is coupled to four different banks of storage devices rather than merely the four individual storage devices 240a-d; each bank would typically contain multiple storage devices. For the purposes of this example, however, the description will refer to the storage hub 200 coupled to the four storage devices 240a-d. It will be appreciated that the modifications to the storage hub 200 needed to accommodate multiple banks of storage devices are within the knowledge of one of ordinary skill in the art.

Further included in the storage hub 200 are link interfaces 210a-d and 212a-d for coupling the storage module on which the storage hub 200 is located to a first high-speed data link 220 and a second high-speed data link 222, respectively. As previously discussed with respect to FIG. 1, the high-speed data links 220, 222 may be implemented using optical or electrical communication channels or some other type of communication channel. The link interfaces 210a-d, 212a-d are conventional and include circuitry for transferring data, command, and address information to and from the high-speed data links 220, 222, such as transmitter and receiver logic circuits that are well known in the art. It should be understood that the link interfaces 210a-d, 212a-d can be modified for use with specific types of communication channels, and that such modifications to the link interfaces 210a-d, 212a-d may be made without departing from the scope of the present invention. For example, where the high-speed data links 220, 222 are implemented using optical communication channels, the link interfaces 210a-d, 212a-d will include optical input/output ports and will convert optical signals coupled through the optical communication channels into electrical signals.

The link interfaces 210a-d, 212a-d are coupled to the switch 260 through a plurality of buses and signal lines, represented collectively by a bus 214. The bus 214 is conventional and includes a write data bus and a read data bus, although a single bidirectional data bus may alternatively be used to couple data through the link interfaces 210a-d, 212a-d in both directions.
Those of ordinary skill in the art will understand that the bus 214 is provided by way of example, and that the bus 214 may include fewer or more signal lines, such as a request line and a snoop line, which can be used to maintain cache coherency.

The link interfaces 210a-d, 212a-d include circuitry that allows the storage hub 140 to be connected in the system memory in a variety of configurations. For example, a multi-drop arrangement can be implemented by coupling each storage module to the storage hub controller 128 through either the link interfaces 210a-d or the link interfaces 212a-d. Alternatively, the storage modules can be coupled in series to implement a point-to-point or daisy-chain configuration, as shown in FIG. 1. For example, the link interfaces 210a-d can be used to couple a first storage module and the link interfaces 212a-d can be used to couple a second storage module. The storage module would be coupled to the processor or system controller through one set of link interfaces and further coupled to another storage module through the other set of link interfaces. In one embodiment of the present invention, the storage hub 200 of a storage module is coupled to the processor in a point-to-point arrangement in which no other devices are coupled to the connection between the processor 104 and the storage hub 200. This type of interconnection provides better signal coupling between the processor 104 and the storage hub 200 for several reasons, including relatively low capacitance, relatively few line discontinuities that reflect signals, and shorter signal paths.

The switch 260 is further coupled to four storage interfaces 270a-d, which are in turn coupled to the system memory devices 240a-d, respectively. By providing a separate and independent storage interface 270a-d for each system memory device 240a-d, the storage hub 200 avoids the bus and memory bank conflicts that typically occur in single-channel memory architectures. The switch 260 is coupled to each of the storage interfaces through a plurality of buses and signal lines, represented collectively by a bus 274. The bus 274 includes a write data bus, a read data bus, and a request line. However, it should be understood that a single bidirectional data bus may alternatively be used in place of the separate write and read data buses. Moreover, the bus 274 may include a greater or lesser number of signal lines than those previously described.

In an embodiment of the present invention, each storage interface 270a-d is specially adapted to the system memory device 240a-d to which it is coupled. More specifically, each storage interface 270a-d is specially adapted to provide and receive the specific signals received and generated, respectively, by the system memory device 240a-d to which it is coupled. Moreover, the storage interfaces 270a-d are capable of operating with system memory devices 240a-d that operate at different clock frequencies. As a result, the storage interfaces 270a-d isolate the processor 104 and the storage hub 200 from variations in the storage devices 240a-d coupled to the storage hub 200, and they provide a more controlled environment to which the storage devices 240a-d may interface.

The switch 260 coupling the link interfaces 210a-d, 212a-d and the storage interfaces 270a-d can be any of a variety of conventional switches, or a switch as explained further below.
For example, the switch 260 may be a crossbar switch that can simultaneously couple the link interfaces 210a-d, 212a-d and the storage interfaces 270a-d to each other in a variety of arrangements. The switch 260 may also be a set of multiplexers that do not provide the same level of connectivity as a crossbar switch but that can nevertheless couple some or all of the link interfaces 210a-d, 212a-d to each of the storage interfaces 270a-d. The switch 260 may also include arbitration logic (not shown) to determine which memory accesses should receive priority over other memory accesses. Bus arbitration performing this function is well known to one skilled in the art.

With further reference to FIG. 2, each of the storage interfaces 270a-d includes a respective memory controller 280, a respective write buffer 282, and a respective cache memory unit 284. The memory controller 280 performs the same functions as a conventional memory controller by providing control, address, and data signals to the system memory device 240a-d to which it is coupled, and by receiving data signals from that system memory device 240a-d. The write buffer 282 and the cache memory unit 284 include the common components of a buffer and a cache memory, such as a tag memory, a data memory, a comparator, and the like, as is well known in the art. The memory devices used in the write buffer 282 and the cache memory unit 284 may be DRAM devices, static random access memory ("SRAM") devices, other types of memory devices, or a combination of these. Furthermore, any or all of these memory devices, as well as any other components of the cache unit 284, may be either embedded or stand-alone devices.

The write buffer 282 in each of the storage interfaces 270a-d is used to store write requests while read requests are being serviced. In such a system, the processor 104 can issue a write request to a system memory device 240a-d even if the storage device to which the write request is directed is busy servicing a prior write or read request. Using this approach, memory requests can be serviced out of order, since earlier write requests can be held in the write buffer 282 while subsequent read requests are being serviced. The ability to buffer write requests so that read requests can go ahead of them can greatly reduce memory read latency, since read requests can be given first priority regardless of their chronological order. For example, a series of write requests interspersed with read requests can be stored in the write buffer 282 so that the read requests can be serviced in a pipelined manner, followed by servicing the stored write requests in a pipelined manner. As a result, the lengthy delays incurred by alternately coupling write requests and read requests to the storage devices 240a-d can be avoided.

The use of the cache memory unit 284 in each of the storage interfaces 270a-d allows the processor 104 to receive data responsive to a read command directed to a respective system memory device 240a-d without waiting for that memory device 240a-d to provide the data, in the event the data were recently read from or written to that memory device 240a-d. The cache memory unit 284 thus reduces the read latency of the system memory devices 240a-d and maximizes the memory bandwidth of the computer system.
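The out-of-order servicing enabled by the write buffer can be sketched in C as follows. This is a toy model under assumed names, queue sizes, and a stubbed device path, not the hub's actual logic: writes are parked so that later reads are not serialized behind them.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define QLEN 16

struct request { bool is_write; uint64_t addr; uint32_t data; };

static struct request write_buf[QLEN];
static unsigned writes_pending;

/* Stub for the memory-controller path to the storage devices. */
static void issue_to_device(struct request r)
{
    printf("%s 0x%llx\n", r.is_write ? "WRITE" : "READ",
           (unsigned long long)r.addr);
}

/* Reads are issued immediately; writes are deferred into the buffer so
 * reads take first priority regardless of their chronological order. */
static void accept(struct request r)
{
    if (r.is_write && writes_pending < QLEN)
        write_buf[writes_pending++] = r;
    else
        issue_to_device(r);
}

/* Once the read stream goes idle, drain the deferred writes in order. */
static void drain_writes(void)
{
    for (unsigned i = 0; i < writes_pending; i++)
        issue_to_device(write_buf[i]);
    writes_pending = 0;
}
```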
The processor 104 may likewise store write data in the cache memory unit 284, after which the memory controller 280 in the same storage interface 270a-d can transfer the write data from the cache memory unit 284 to the system memory device 240a-d to which it is coupled while performing other functions.

The storage hub 200 also includes a DMA engine 286 coupled to the switch 260 through a bus 288. The DMA engine 286 enables the storage hub 200 to move blocks of data from one location in the system memory to another location in the system memory without intervention by the processor 104. The bus 288 carries the data transfers within the system memory and includes a plurality of conventional bus lines and signal lines, such as address, control, and data buses, and the like. As will be described in detail below, the DMA engine 286 is able to read a linked list in the system memory to execute DMA memory operations without processor intervention, thereby freeing the processor 104 and the bandwidth-limited system bus from the execution of memory operations. The DMA engine 286 is preferably an embedded circuit in the storage hub 200; however, a separate DMA device coupled to the storage hub 200 is also within the scope of the present invention. In addition, the DMA engine 286 may include circuitry to accommodate DMA operations on multiple channels. Such multi-channel DMA engines are well known in the art and can be implemented using conventional techniques.

In an embodiment of the present invention, the processor 104 writes into the system memory a list of instructions to be executed by the DMA engine 286. The instructions include the information used by the DMA engine 286 for DMA operations, such as the starting address of the block to be moved, the ending address or count, the destination address, the address of the next command block, and so forth. The DMA engine 286 will execute a series of consecutive commands and then jump to the next command list if so directed. The DMA engine 286 is programmed through data structures that reside in one or more memory spaces. The data structures consist of some number of command blocks that provide the information needed to perform a data transfer operation in the system memory. The command blocks can be chained together through a series of address pointers to form a linked list. The address of the first command block in the linked list is programmed through the I/O space. The DMA engine 286 is instructed through an I/O space command register to fetch and execute the first command block. After performing the requested data operation, the address pointer of the first command block is used to point the DMA engine 286 to the next command block, and the address pointer in each successive command block is used to fetch and execute the next one; these address pointers form the linked list. Each command block in the linked list is executed until a NULL pointer is encountered. A NULL pointer is defined, for example, as an address consisting of all 1s. Once a NULL pointer is detected, execution of command blocks is halted and a status bit is set to indicate that the command stream has ended. An I/O register in the storage hub 200 may include the completion status. In addition, a start flag can be used to indicate that the DMA engine 286 has begun performing DMA operations, and other status bits may indicate whether the command stream ended normally without error or terminated abnormally due to an error.
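The command-block mechanism just described might look roughly like the following C sketch. The field layout, the all-1s NULL sentinel, and the status flag are illustrative assumptions for this sketch, not the actual register map of any hub.

```c
#include <stdint.h>
#include <string.h>

#define NULL_PTR UINT64_MAX        /* NULL pointer: an address of all 1s    */
#define MEM_WORDS 4096

/* One command block of the linked list that programs the DMA engine;
 * the layout (start, destination, count, next) is assumed for this sketch. */
struct dma_command {
    uint64_t src;                  /* start address of the block to move    */
    uint64_t dst;                  /* destination address                   */
    uint64_t count;                /* number of memory words to transfer    */
    uint64_t next;                 /* index of next command, or NULL_PTR    */
};

static uint32_t memory[MEM_WORDS]; /* stand-in for system memory            */
static int dma_done;               /* stand-in for the completion status bit */

/* Fetch and execute command blocks until the NULL pointer is reached,
 * then set the completion status, with no "processor" in the loop. */
static void dma_run(const struct dma_command *list, uint64_t first)
{
    for (uint64_t pc = first; pc != NULL_PTR; pc = list[pc].next)
        memmove(&memory[list[pc].dst], &memory[list[pc].src],
                list[pc].count * sizeof(uint32_t));
    dma_done = 1;                  /* command stream has ended */
}
```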
Status information may optionally generate an interrupt to the host.

In an alternative embodiment of the invention, the DMA engine 286 can also be used for diagnostics in the system. A known good data pattern can be loaded into memory in the storage hub 200, or into a known good system memory, and the data pattern can then be used to test the system memory. For a detailed description of this type of application, see the commonly assigned, co-pending US patent application No. _____, entitled "SYSTEM AND METHOD FOR ON-BOARD DIAGNOSTICS OF MEMORY MODULES", filed on [Submission Date], which is incorporated herein by reference.

FIG. 3 is a block diagram showing a portion of a DMA engine 300, and FIG. 4 is a block diagram showing a linked command list 400, according to embodiments of the present invention. The DMA engine 300 can be used as the DMA engine 286 (FIG. 2) of the storage hub 200. It should be understood that FIG. 3 shows merely a representative portion of the DMA engine 300, in sufficient detail to enable one skilled in the art to practice the present invention, and it should also be appreciated that other DMA engines may be used without departing from the scope of the invention. The DMA engine 300 includes five registers for controlling DMA operations: an address register 310, a destination address register 311, a control register 312, a next register 314, and a count register 316.

In operation, when a block data transfer is initiated, the starting address of the block is loaded into the address register 310. In addition, the destination address of the location to which the data are to be moved is loaded into the destination address register 311, and the length of the data block is loaded into the count register 316. The control register 312 contains information pertinent to the transfer, such as a bit indicating whether the address register 310 is to be incremented or decremented after each data item is transferred. In the present example, each time a data item is transferred by the DMA engine 300, the count register 316 is decremented and the address register 310 is incremented. The destination address register 311 is likewise incremented (or decremented, depending on the control setting). When the value of the count register 316 reaches zero, the block transfer is complete. At that time, the value in the next register 314 is examined; if it points to a valid location in the system memory, the values contained at that location are loaded into the registers 310, 312, 314, and 316, and the transfer of the next block of data begins automatically. However, if a NULL value, as described previously, is present in the next register 314, the DMA operation is complete.

The linked command list 400 shown in FIG. 4 contains a number of link items 402, 404, and 406, each of which contains the information needed to reload the registers 310, 312, 314, and 316. The link items 402, 404, and 406 reside in the system memory, as previously described, and are chained together by pointers corresponding to the next register 314. Three link items 402, 404, and 406 are shown in FIG. 4. These link items, plus an initial transfer defined by writing values directly into the registers 310, 312, 314, and 316 of the DMA engine 300, define a single DMA transfer having four separate sections. The value NEXT contained in the next register 314 points to the first link item 402. The first link item 402 points to the following link item 404 in the linked command list, which in turn points to the final link item 406.
The final link item 406 contains a NULL value as its pointer, indicating that it is the last link item in the DMA command list. The NULL value is a reserved pointer that does not point to a valid memory location, and the DMA engine 300 interprets it as pointing to nothing. It will be appreciated that the link items 402, 404, and 406 are provided by way of example and may be modified, such as by including more or fewer information fields than shown in FIG. 4, without departing from the scope of the present invention.

FIG. 5 is a flowchart 500 illustrating the control flow used by the DMA engine 300 (FIG. 3) for a series of consecutive block data transfers. In step 502, the DMA registers 310, 312, 314, and 316 are loaded with the values appropriate for the first data transfer. At this time, all of the information needed by the link items used for the transfer must be loaded into the linked command list 400 (FIG. 4), either before or at the time the registers are loaded directly. The loading of the registers is controlled by the processor 104 (FIG. 1), and the loading of the linked command list 400 in the system memory is likewise performed by the processor 104.

In step 504, a data item is transferred, and in step 506 the value in the count register 316 is decremented to reflect that a data item has been transferred. Step 506 also includes incrementing or decrementing the value of the address register 310, depending on the direction set in the control register 312. In step 508, the count value is checked to determine whether the count is exhausted. In one embodiment of the present invention, this determination is made by checking a completion bit (not shown) from the count register 316. If the count value indicates that the data transfer is not complete, control returns to step 504. However, if the count value in the count register 316 equals zero, control passes to step 510, where the value in the next register 314 is checked to see whether it equals the NULL value described previously. If a NULL value is not present, then in step 512 the next link item is loaded from the linked command list 400 into the registers 310, 312, 314, and 316 of the DMA engine 300, and control returns to step 504. Once the last link item has been used, an indication is given to the processor 104 at step 514 that the transfer is complete.

Those of ordinary skill in the art will appreciate that the DMA engine 300 provides the ability to "scatter-gather" in the system memory. When a large block of data is to be read into discontinuous blocks of memory, the processor 104 allocates the memory and builds a linked command list 400 for the DMA engine 300; a DMA transfer is then initiated, and the DMA engine 300 handles the entire transfer until it is complete. Similar techniques can be used to gather scattered blocks of data in the system memory and write them to contiguous blocks of memory: the processor 104 determines which blocks in the system memory are to be written, and in what order, and builds a linked command list 400 for the DMA engine 300; a DMA transfer is then initiated and handled entirely by the DMA engine 300 until it is complete. Because the linked command list 400 is stored in the system memory, a separate linked list can be maintained, for example, for each channel supported by the DMA engine 300.
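The flowchart's control flow can also be rendered as a loop, as in the sketch below. The register names follow the figures; everything else (the types, the sentinel, the transfer_one() stub) is invented for illustration.

```c
#include <stdint.h>

/* Mirror of the FIG. 5 control flow; one "link item" reloads the four
 * registers named in the figures. Types and the transfer_one() stub
 * are assumptions of this sketch. */
struct link_item {
    uint64_t address, control, next, count;
};

#define NULL_VALUE UINT64_MAX      /* sentinel ending the command list */

static void transfer_one(uint64_t addr) { (void)addr; /* device access */ }

static void dma_flow(const struct link_item *list, struct link_item regs)
{
    for (;;) {
        while (regs.count != 0) {  /* steps 504-508: move one data item, */
            transfer_one(regs.address);
            regs.address++;        /* adjust address per control register */
            regs.count--;          /* decrement until the count is zero   */
        }
        if (regs.next == NULL_VALUE)
            break;                 /* steps 510, 514: transfer complete   */
        regs = list[regs.next];    /* step 512: load the next link item   */
    }
}
```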
In addition, since the linked command list 400 is stored in the system memory, the only limit on the number of separate transfers that may be linked into one larger transfer for a given channel is the number of free memory locations in the system memory. It should be understood from the foregoing that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the invention. Accordingly, the invention is limited only by the appended claims. |
In various embodiments, the size, shape, and arrangement of keys on a virtual keyboard may be determined based on touchscreen contacts made by the user. Further, the actual contact patch made by the user may be analyzed to interpret which point of contact was intended, and other factors such as spelling and context may also be considered. These factors may be determined based on a calibration session and/or on continuing inputs during operation of the keyboard, and applied to future operational interpretations of the touchscreen contacts. |
What is claimed is: 1. An apparatus, comprising: a handheld electronic device having a touchscreen with a virtual keyboard, wherein the virtual keyboard has first and second arc-shaped tiered rows of keys. 2. The apparatus of claim 1, wherein the virtual keyboard has two portions, the first portion having at least the first and second rows of keys arranged for operation by one of a user's thumbs, and the second portion having at least a third arc-shaped row of keys arranged for operation by the user's other thumb, when the user is gripping the device with both right and left hands. 3. The apparatus of claim 1, wherein at least one item selected from the following list is programmable by user action: a) a distance between the first and second rows; b) a distance between adjacent keys in a row; and c) which character is assigned to each key. 4. The apparatus of claim 1, wherein a display of a key on the touchscreen is to be shifted to a position visible to a user when the key is activated by a touch. 5. The apparatus of claim 1, wherein the device is to perform a keyboard calibration procedure comprising: a) prompting a user to draw a calibration arc on the touchscreen with the user's thumb; and b) generating the first row of keys for the virtual keyboard along the calibration arc drawn by the user's thumb. 6. The apparatus of claim 5, wherein the calibration procedure further comprises generating the second row of keys in relation to the first row of keys. 7. A method, comprising: entering a calibration mode; detecting a first calibration arc drawn on a touchscreen; and generating a first row of soft keys for a virtual keyboard along the first calibration arc. 8. The method of claim 7, further comprising prompting a user, prior to said detecting, to draw the first calibration arc on the touchscreen. 9. The method of claim 7, further comprising generating a second row of soft keys having a tiered relation with the first row of soft keys. 10. The method of claim 7, further comprising: detecting a second calibration arc drawn on the touchscreen; and generating a second row of soft keys for the virtual keyboard along the second calibration arc. 11. The method of claim 10, wherein the first and second calibration arcs are tiered with respect to each other. 12. The method of claim 10, wherein: the first and second calibration arcs are drawn by the user using different thumbs; and the first and second calibration arcs are not tiered with respect to each other. 13. An article comprising a computer-readable storage medium that contains instructions, which when executed by one or more processors result in performing operations comprising: entering a calibration mode; detecting a first calibration arc drawn on a touchscreen; and generating a first row of soft keys for a virtual keyboard along the first calibration arc. 14. The article of claim 13, wherein the operations further comprise prompting a user, prior to said detecting, to draw the first calibration arc on the touchscreen. 15. The article of claim 13, wherein the operations further comprise generating a second row of soft keys having a tiered relation with the first row of soft keys. 16. The article of claim 13, wherein the operations further comprise: detecting a second calibration arc drawn on the touchscreen; and generating a second row of soft keys for the virtual keyboard along the second calibration arc. 17. The article of claim 16, wherein the first and second calibration arcs are tiered with respect to each other. 18.
The article of claim 16, wherein: the first and second calibration arcs are drawn by the user using different thumbs; and the first and second calibration arcs are not tiered with respect to each other. 19. An apparatus, comprising: a handheld device including a touchscreen for displaying a virtual keyboard, wherein the device is to: detect a touch on the touchscreen by a user; determine a location for a centroid of a contact patch for the touch; and determine a location of an active point with respect to the centroid. 20. The apparatus of claim 19, wherein said determining the location of the centroid comprises: receiving readings from multiple sensor points on the touchscreen located within the contact patch; and calculating the centroid from the multiple sensor points. 21. The apparatus of claim 19, wherein said determining the location of the active point is based at least in part on one or more factors selected from a list consisting of: the centroid of the contact patch; a size of the contact patch; a shape of the contact patch; a difference in pressure sensed at the multiple points; a location on the keyboard of the contact patch; a positional relationship between the centroid and a keyboard key that is at least partly overlapped by the contact patch; and a previous history of the positional relationship between the contact patch and the keyboard key. 22. The apparatus of claim 19, wherein said detecting a touch is to be performed during a calibration procedure for the keyboard. 23. A method, comprising: sensing a touch on a touchscreen of a handheld electronic device; determining a centroid for a contact patch of the touch; and determining an active point for the contact patch, wherein the active point is a different location than the centroid. 24. The method of claim 23, wherein said determining the active point is based at least in part on one or more factors selected from a list consisting of: the centroid of the contact patch; a size of the contact patch; a shape of the contact patch; a difference in pressure sensed at multiple points within the contact patch; a location on the keyboard of the contact patch; a positional relationship between the contact patch and a keyboard key that is at least partly overlapped by the contact patch; and a previous history of the positional relationship between the contact patch and the keyboard key. 25. The method of claim 23, wherein said sensing is performed during an initial calibration procedure for the keyboard. 26. The method of claim 23, wherein said adjusting is performed during operational usage of the keyboard. 27. An article comprising a computer-readable storage medium that contains instructions, which when executed by one or more processors result in performing operations comprising: sensing a contact patch on a touchscreen of a handheld electronic device; determining a centroid for a contact patch for the touch; and determining an active point for the contact patch. 28.
The article of claim 27, wherein the operation of determining the active point is based at least in part on one or more factors selected from a list consisting of: a centroid of the contact patch; a size of the contact patch; a shape of the contact patch; a difference in pressure sensed at the multiple points; a location on the keyboard of the contact patch; a positional relationship between the contact patch and a keyboard key that is at least partly overlapped by the contact patch; and a previous history of the positional relationship between the contact patch and the keyboard key. 29. The article of claim 27, wherein the operation of adjusting is performed during an initial calibration procedure for the keyboard. 30. The article of claim 27, wherein the operation of adjusting is performed during operational usage of the keyboard. 31. An apparatus, comprising: a handheld electronic device including a touchscreen for displaying a virtual keyboard, wherein the device is to: detect a contact patch when the touchscreen is touched by a user; determine an active point for the contact patch; determine a hot spot for each of multiple keys located near the active point; select a particular key from among the multiple keys as a key intended by the user; and include a particular character represented by the particular key as a character in an ongoing stream of text input by the user. 32. The apparatus of claim 31, wherein said selection is based on at least one criteria selected from a list consisting of: location of each of the hotspots with respect to the active point; and a probability based at least in part on a past history of the active point and at least some of the multiple keys. 33. The apparatus of claim 31, wherein the device is to relocate the hot spot for the particular key for subsequent operations, based at least in part on determinations leading to the selection of the particular key. 34. The apparatus of claim 31, wherein the device is further to: detect additional contact patches; select additional keys based on the additional contact patches; include a corresponding character from each of the selected keys into the ongoing stream of text until at least a word has been formed; determine the word is an invalid word; and change the particular character to another character represented by another of the multiple keys to form a valid word. 35. The apparatus of claim 34, wherein the device is further to move a hot spot for the particular key based at least in part on determinations leading to the changing of the particular character. 36. A method, comprising: determining an active point of a contact patch when a touchscreen of a handheld electronic device is touched by a user; determining the active point is near a hot spot for each of multiple keys on a virtual keyboard on the touchscreen; selecting a particular key from among the multiple keys as a key intended by the user; and including a particular character represented by the particular key as a character in an ongoing stream of text input by the user. 37. The method of claim 36, wherein said selecting is based on at least one criteria selected from a list consisting of: location of each of the hotspots with respect to the contact patch; and a probability based at least in part on a past history of the multiple keys and corresponding contact patches. 38. The method of claim 36, further comprising moving a hot spot for the particular key based at least in part on determinations leading to said selecting the particular key. 39.
The method of claim 36, further comprising: determining additional active points for additional contact patches generated by additional touches; selecting additional keys based on the additional touches; including a corresponding character from each of the selected additional keys into the ongoing stream of text until at least a word has been formed; determining the word is an invalid word; and changing the particular character to another character represented by another of the multiple keys to form a valid word. 40. The method of claim 39, further comprising moving a hot spot for the particular key based at least in part on determinations leading to said changing of the particular character. 41. An article comprising a computer-readable storage medium that contains instructions, which when executed by one or more processors result in performing operations comprising: determining an active point of a contact patch when a touchscreen of a handheld electronic device is touched by a user; determining the active point is near a hot spot for each of multiple keys on a virtual keyboard on the touchscreen; selecting a particular key from among the multiple keys as a key intended by the user; and including a particular character represented by the particular key as a character in an ongoing stream of text input by the user. 42. The article of claim 41, wherein said selecting is based on at least one criteria selected from a list consisting of: a location of each of the hotspots with respect to the contact patch; and a probability based at least in part on a past history of the multiple keys and corresponding contact patches. 43. The article of claim 41, wherein the operations further comprise moving a hot spot for the particular key based at least in part on determinations leading to said selecting the particular key. 44. The article of claim 41, wherein the operations further comprise: determining additional active points for additional contact patches generated by additional touches; selecting additional keys based on the additional touches; including a corresponding character from each of the selected additional keys into the ongoing stream of text until at least a word has been formed; determining the word is an invalid word; and changing the particular character to another character represented by another of the multiple keys to form a valid word. 45. The article of claim 44, wherein the operations further comprise moving a hot spot for the particular key based at least in part on determinations leading to said changing of the particular character. |
ADAPTIVE VIRTUAL KEYBOARD FOR HANDHELD DEVICE BACKGROUND As multi-purpose wireless devices become too small for standard mechanical/electronic keyboards, virtual keyboards are increasingly being used as a primary input device by displaying an arrangement of keys on a touchscreen. The user enters a keystroke by simply touching the screen with a fingertip at the location where the desired key is displayed. Because of the small size and hand-held nature of these devices, many users typically use only their thumbs to enter the data. However, most of these virtual keyboards arrange the keys in either a rectangular matrix of keys, or in the standard QWERTY format. The linear nature of the rows in such arrangements makes them ill suited for use by the thumbs. Specifically, to move his thumb along the length of a row of keys or along the height of a column of keys, a user must articulate the several joints of his thumb in a relatively unnatural manner. Indeed, becoming accustomed to such arrangements can require extensive proprioceptive development from the user. While the designs of some physical keyboards on multi-purpose wireless devices do provide improved ergonomics (compared to a rectangular matrix of keys), the degree to which the ergonomics can be tailored to a particular individual remains limited. Further, such designs do not account for the fact that different users have different sizes of hands, fingers and thumbs, so a keyboard that is properly sized for one user may be more difficult for another user to use. BRIEF DESCRIPTION OF THE DRAWINGS Some embodiments of the invention may be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings: Fig. 1 shows a multi-function handheld user device, according to an embodiment of the invention. Fig. 2 shows a virtual keyboard configured for two-handed operation, according to an embodiment of the invention. Fig. 3 shows a flow diagram of a method of calibrating the size of the keyboard to the individual user, according to an embodiment of the invention. Fig. 4 shows a flow diagram of a method for an initial contact patch calibration sequence, according to an embodiment of the invention. Fig. 5 shows a flow diagram of a method of adaptively interpreting keystrokes, according to an embodiment of the invention. DETAILED DESCRIPTION In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) of the invention so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments. In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" is used to indicate that two or more elements are in direct physical or electrical contact with each other.
"Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not be in direct physical or electrical contact. As used in the claims, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common element, merely indicate that different instances of like elements are being referred to, and are not intended to imply that the elements so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. Various embodiments of the invention may be implemented in one or any combination of hardware, firmware, and software. The invention may also be implemented as instructions contained in or on a computer-readable medium, which maybe read and executed by one or more processors to enable performance of the operations described herein. A computer-readable medium may include any mechanism for storing information in a form readable by one or more computers. For example, a computer- readable medium may include a tangible storage medium, such as but not limited to read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory device, etc. Various embodiments of the invention relate to a configuration of virtual keys on the touch-screen of a virtual keyboard. Rather than being arranged in straight horizontal rows, the keys may be arranged in arcs that are conveniently reached by the user's thumb(s) when the device is held in the user's hand(s). In some embodiments, the placement of the keys may be customized to fit the individual user's thumb and/or personal preferences. In some embodiments, adaptive sensing may be used to compensate for the contact surface of the user's thumb being off-center from a key and/or being larger than the key. Fig. 1 shows a multi-function handheld user device, according to an embodiment of the invention. The illustrated device 110 is shown with a touchscreen 120 for displaying information to the user and receiving tactile inputs from the user when the user touches the screen at one or more particular locations. Three hard buttons are also shown above the display. Other physical buttons, sensors, features, etc. may also be included but are not shown to avoid excessive clutter in the drawing. Within the context of this document, 'hard' buttons are so called because they are physical buttons, permanently located in specific areas. But the device may also contain 'soft' buttons, each consisting of an image on the touch-sensitive display screen, denoted herein as a touchscreen. When the user touches a soft button, the device may sense that touch and perform whatever function is associated with that soft button. The term 'key' is used in this document to denote a soft button that represents an individual key on a virtual keyboard shown on the touchscreen. Although the illustrated device 110 is depicted as having a particular shape, proportion, and appearance, with buttons located in particular locations, this is for example only and the embodiments of the invention may not be limited to this particular physical configuration. For example, in some embodiments various features may be located elsewhere on the same side or on different sides of the device. 
In some embodiments the overall shape of the device 110 may be different than shown. Device 110 may also include functionality for wireless communication, for various visual, audio, and physical inputs, and for various visual, audio, and physical outputs that are not specifically described herein. In some embodiments, the device may use this functionality in different ways depending on which mode it is in. Virtual Keyboard with Tiered Arcs Fig. 1 also shows a virtual keyboard on the touchscreen display. In a virtual keyboard, each key on the keyboard is implemented as a soft button on the touchscreen. When the user touches a particular key with his/her thumb (or finger, or stylus, or other object), the device 110 senses that touch, determines where on the screen the touch occurred, determines which key is associated with that location, and interprets this touch as a keystroke of the selected key. In some embodiments, a hysteresis effect may be used in which the user must remove his finger from the key for a minimum amount of time and touch the key again before a second touch of that key will be registered. In this example, the keys on the keyboard are arranged in three rows that each follow an arc shape. These rows are positioned for ease of reach by the user's thumb. Because of the configuration of the human thumb, the arcs may not be perfectly circular, but rather each arc might have a variable rate of curvature. For this reason, the terms 'circular' and 'concentric' are not used to describe them here, although in some instances the arcs might be circular and/or concentric. These arcs are described herein as 'tiered' arcs because each arc has a pivot point (the pivot point of the user's thumb) that is in approximately the same place, and each arc has a similar shape, with each arc being approximately the same distance from the next adjacent arc throughout the length of those arcs, when measured radially from the pivot point. Ease of reach by the user's thumb, rather than a rigid geographical shape, may be the guiding principle when determining the curvature and location of each arc. The example of Fig. 1 shows three rows of keys, but other embodiments may have one, two, four, or more rows. The keys are shown the same size in all rows, but in some embodiments, some keys may be larger or smaller than others. For example, the inner row may have smaller keys than the outer row. Not only does this allow more keys to be placed on the inner row, which has less room for keys, but it also recognizes that the user is likely to touch keys on the inner row with the end of his thumb, which presents a smaller touch area than is felt by keys on the outer row, which are touched with the thumb in an extended position. The illustrated example also shows that the three rows are spaced the same distance from each other, but other embodiments may differ. Again, the mechanics and flexibility of the human thumb may determine this spacing. Each key is shown with a somewhat rectangular shape, but the soft keys may be displayed with any convenient shape. In some embodiments, different keys may have different shapes to provide additional information to the user (for example, a square shape for upper case, and a round shape for lower case). Different colors may also be used to denote additional information about that key. Each key is shown labeled with the character it represents.
These labels are all shown oriented with respect to the bottom of the device (for easy reading by the user) but other embodiments may orient the label with respect to the radial center of the arcs, or some other definable reference point. In some embodiments, the displayed character will be shown as upper- or lower-case to indicate whether the upper- or lower-case version of the character is represented by that key. In some embodiments, whenever a key touch is registered by the device, the symbol represented by the key will be shown in an enlarged version to provide positive feedback to the user, and the position of this enlarged key may be shifted so that it won't be obscured by the user's thumb. In the example of Fig. 1, the user touches the 'M' key (which is hidden from view by the user's thumb), and an enlarged version of the 'M' key is shown just beyond the user's thumb, temporarily overlaying whatever other keys are located there. Different color, style, shape, etc. may also be used to distinguish this touch indicator from the remainder of the keys. The example of Fig. 1 shows particular characters assigned to particular key positions, but this is for example only. Other embodiments may assign characters to key positions in any desirable arrangement, such as QWERTY, Dvorak, etc. In some embodiments, the key assignments may be programmable. Because the configuration shown in Fig. 1 is designed for one-handed operation, and the keyboard is therefore limited to the space reachable by a single thumb, there may not be enough space to simultaneously represent all the characters that the user wants to type. To compensate for this, all or a portion of the key positions may be reassigned to represent other characters, and new labels representing the new characters may be produced on the touchscreen for those keys. This change may be initiated in any convenient manner, such as but not limited to: 1) touching one of the keyboard keys assigned to this function, 2) touching a particular soft key outside the keyboard area, 3) pressing a hard button, 4) dragging the thumb along a substantial part of an arc, 5) etc. In some embodiments, the full available keyboard may be thought of as an approximate wheel with four quadrants, with each quadrant having a separate set of characters, and with only one quadrant visible on the touch screen at a time (or two quadrants visible for the two-handed operation described later). The user may then call up whichever quadrant he wants displayed. For example, one quadrant might contain keys with letters, another quadrant contain keys with numbers and punctuation marks, another quadrant contain keys representing pictures, icons, letterheads, etc., that the user likes to insert into documents, and the fourth quadrant contain commonly used phrases, sentences, paragraphs, etc. Of course, more or fewer than four quadrants may also be used, since this is a virtual concept and is not constrained to an actual physical circle. The keyboard configuration shown in Fig. 1 is designed for right-handed operation by making the rows of keys concentric about the lower right-hand corner. By making the rows of keys concentric about the lower left-hand corner, the device may be suitable for left-handed operation. The illustrated keyboard is also shown configured for vertical operation, i.e., the long side of the display is vertical. Some embodiments may operate with a horizontal operation, i.e., the long side of the display is horizontal.
In some embodiments, the right/left handed configuration and/or the vertical/horizontal operation is selectable by the user. In some embodiments, these configurations may be automatically selected by the device (e.g., sensing gravity to select the vertical/horizontal operation, and sensing which part of the display is touched by the user's thumb to select right- or left-handed operation). Fig. 2 shows a virtual keyboard configured for two-handed operation, according to an embodiment of the invention. The primary difference between this two-handed operation and the one-handed operation of Fig. 1 is that there are two portions to the virtual keyboard, one configured for operation with the right thumb and the other configured for operation with the left thumb. More keys, and therefore more characters, can be simultaneously displayed with this configuration, as compared with the one-handed configuration of Fig. 1. The two keyboard portions may have the same or a different number of rows, the same or a different number of keys in each row, different spacing, etc. In some embodiments, the assignment of characters to the individual keys may be switched between the left and right areas for the convenience of users that have a left- or right-handed preference. All the same features, techniques, choices, etc. that are available for one-handed operation may also be applied to this two-handed operation, and in some embodiments may be applied separately for each portion of the keyboard. In some embodiments the user may manually select either one- or two-handed operation. In some embodiments, the device may automatically select one- or two-handed operation, based on some automatically sensed criteria, such as device orientation or sensing touches on both sides of the touchscreen. Fig. 3 shows a flow diagram of a method of calibrating the size of the keyboard to the individual user, according to an embodiment of the invention. In flow diagram 300, at 310 the calibration process may be initiated by the user, or started automatically based on some pre-determined criteria, such as but not limited to: a) powerup of the device, b) creation of a new user account, c) changing the font size of the characters on the virtual keyboard, d) etc. The user may then be prompted at 320 to draw an arc with his thumb on the touchscreen surface, with his hand in its normal data-entry position. This arc may be referred to as a 'calibration arc' because its purpose is to calibrate the location of a keyboard row so the keys on that row will be at a convenient position for the user's thumb. In some embodiments, this arc will be visible on the touchscreen after being drawn by the user, but other embodiments may not display the calibration arc. In either case, the location of this arc on the display screen may be recorded at 330. This location may be used to determine where the corresponding row of keys will be placed on the screen. In some embodiments, the user may be prompted to enter more than one arc. For example, to calibrate the device for two-handed keyboard operation, the user may be prompted to draw separate arcs, one with each thumb. If more than one row is to be calibrated, the user may be prompted to draw a middle arc, an outer arc, and/or an inner arc. In some embodiments, the user may be prompted to retrace the same arc more than once, so the device can determine an average position for that arc. Any or all of these options may be accommodated with the operations at 320-330-340.
In some embodiments, the arc for only one row of keys is input by the user's thumb, and the other arcs for the other rows for that thumb are automatically located with respect to that arc. In such a case, the locations of the other arcs are determined at 350. In one embodiment, the user may have been prompted to draw an arc with the thumb in a mid-position, neither fully extended nor fully retracted, and the other rows placed in larger and smaller arcs with respect to that one. In another embodiment the user may draw an arc with the thumb fully extended to mark the location of the outer row of keys, with the other arcs created inside that one. Conversely, the user may draw an arc with the thumb fully retracted to mark the location of the inner row of keys, with the other arcs created outside that one. Once the arc locations have been determined, at 360 the device may assign the position of each key along each arc, with each arc representing a row of keys. In some embodiments, a predetermined spacing between adjacent keys on the same row may be assumed. In other embodiments, the spacing may vary. For example, if the calibration process was able to determine the width of the user's thumbprint in various locations, this information may be used to decide the space between keys. A wide thumbprint may lead to a wider spacing between keys, to reduce the chance of touching multiple keys in a manner that could lead to error. At 370, the various characters (letters, numerals, punctuation marks, etc.) may be assigned to the various key positions on the different rows. In some embodiments, this arrangement may be predetermined. In other embodiments, this arrangement may be customized, based on various criteria. For example, the most frequently used characters may be placed on the middle row, to reduce thumb travel. The next most frequently used characters may be placed on the outer row, since extending the thumb is generally considered easier than retracting the thumb. Commonly occurring sequences of characters (e.g., digraphs) can be ordered sequentially along an arc to facilitate a more natural 'inward sweep' motion of the thumb. For two-handed operation, commonly occurring sequences can be alternated between the two thumbs to facilitate alternating thumb action. Other considerations may also be used to assign characters to key positions. In some embodiments, the user may be able to assign characters to specific key positions. At 380, the device may generate the completed keyboard on the display based on these character position assignments, and exit the calibration sequence at 390. By virtue of these and other features, various embodiments of the invention may provide a virtual keyboard that is well suited to the natural motion of the user's thumb(s) and is customized to characteristics and preferences of the user. The invention may thereby improve the ease, speed, and accuracy with which the user can enter text on the device. Due to the close spacing of the keys, and the relatively large contact area of the user's thumb, it is likely that the user will frequently contact more than one key simultaneously. Various approaches may be taken to reduce the negative effects of this by interpreting which key the user intended to touch. Some approaches involve interpreting the area contacted by the user's thumb, while others are based on context and repetitive errors. Both approaches are described below.
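Before turning to those approaches, a rough C sketch of the key-placement operation at 360 may help make the arc geometry concrete. It assumes a circular arc around a thumb pivot point and uniform angular spacing between keys, both of which the description above notes may not hold in practice (variable curvature, variable spacing); all names and numbers here are illustrative.

```c
#include <math.h>
#include <stdio.h>

/* A minimal sketch of step 360: spacing keys evenly along one row's arc
 * around a thumb pivot point. Screen coordinates and angles are invented. */
struct key_pos { double x, y; };

static void place_keys_on_arc(double pivot_x, double pivot_y, double radius,
                              double start_angle, double end_angle,
                              int num_keys, struct key_pos *out)
{
    for (int i = 0; i < num_keys; i++) {
        /* Interpolate the angle so keys are evenly spaced along the row. */
        double t = (num_keys > 1) ? (double)i / (num_keys - 1) : 0.0;
        double a = start_angle + t * (end_angle - start_angle);
        out[i].x = pivot_x + radius * cos(a);
        out[i].y = pivot_y + radius * sin(a);
    }
}

int main(void)
{
    const double pi = 3.14159265358979;
    struct key_pos row[10];
    /* e.g., an outer row pivoting near the lower-right corner of a
     * 320x480 screen (all values invented for the example). */
    place_keys_on_arc(320.0, 480.0, 220.0, 1.05 * pi, 1.45 * pi, 10, row);
    for (int i = 0; i < 10; i++)
        printf("key %d at (%.0f, %.0f)\n", i, row[i].x, row[i].y);
    return 0;
}
```

Generating the other rows then amounts to repeating the call with smaller or larger radii, mirroring the tiered-arc relationship described for step 350.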
Contact Patch Adjustments The 'contact patch' is the area in which the user contacts the touchscreen when attempting to touch a key or other icon. If a stylus is used, the contact patch may be very small and well-defined in shape. If the user's fingertip is the instrument of contact, the contact patch may be somewhat larger, and the size may vary depending on the pressure which the user applies with that finger. The size and shape of the contact patch may both vary depending on the angle with which the finger contacts the touchscreen. If the thumb is used as the instrument of contact, the same considerations apply, but the size is likely to be even larger than with the fingertip, due to the thumb's generally greater cross section, and the shape and size may vary considerably depending on the contact angle of the thumb, which may generally be shallower than the contact angle of a finger. Since the contact patch may be even larger than the key on the touchscreen that is being touched, the sensing device may have to interpret the sensor information and determine a point at which the touch was intended. Within this document, this point is called the 'active point', which may or may not be the centroid (the geometric center) of the contact patch. These calculations may be complicated by the fact that the size and shape of the contact patch varies with: 1) which row the key is on (which affects the angle of the thumb), and 2) whether the user is entering data with a one-handed or two-handed keyboard operation (the position of the hand, and therefore the angle of the thumb, may generally be different in a two-handed operation than it is in a one-handed operation). Further, the actual contact patch and/or the actual centroid of that contact patch may be different than the contact patch and centroid as perceived by the user. In some embodiments the device may assume that a particular row of keys will experience an elliptical contact patch with the ellipse oriented in a particular direction, while a different row of keys may experience an elliptical contact patch with the ellipse oriented in a different direction, or even a circular contact patch. These assumptions may be taken into consideration when calculating the active point. Assumptions made about a contact patch for a middle row may be interpolated between assumptions made about contact patches for the inner and outer rows. These are only examples; the actual assumptions may vary from this based on actual experience with the user and/or previous studies based on the general population. As an example of user perception, for some rows the user may perceive the active point to be near the end of the thumb tip, and the device may move the active point away from the centroid to accommodate this perception. Since the angle of the thumb, and therefore the shape of the contact patch, is generally different during two-handed operation than during one-handed operation, the device may sense which mode of operation is being currently used, and adjust the active point calculations accordingly. For example, the device may assume a two-handed operation is being used if a two-handed keyboard is displayed on the touchscreen. Or the device may infer the same by sensing contact in both parts of the screen.
In another embodiment, the device may assume one-handed operation if the device is being held in a vertical orientation (the short dimension is horizontal), and two-handed operation if the device is being held in a horizontal orientation (the long dimension is horizontal), based on accelerometers or other sensors. However, the user may be able to manually override these assumptions if desired. In some embodiments, the type of instrument being used to contact the screen may be inferred by the device based on the size and shape of the contact patch. If the contact patch is relatively large, the device may assume the thumb is being used and adjust the active point accordingly. If the contact patch is smaller, the device may assume that a finger is being used, and adjust the active point based on that assumption. If the contact patch is very small, the device may assume a stylus is being used, and not make any adjustments. Regardless of the type of adjustments being made, in some embodiments these adjustments may be pre-defined, either based on a standard set of parameters, or based on the results of one or more calibration sessions. In other embodiments, the adjustments may be continually or frequently modified, based on the recent history of operation. In some embodiments, both initial calibration and ongoing adjustments may be incorporated. Fig. 4 shows a flow diagram of a method for a contact patch calibration sequence, according to an embodiment of the invention. The method of flow diagram 400 may be used to pre-calibrate the device to a particular user's characteristics. Operations 430-460 may also be used for ongoing adjustments during normal operation of the virtual keyboard. After entering the contact patch calibration sequence at 410, at 420 the device may prompt the user to press a selected key. In some embodiments, the keyboard (e.g., the keyboard generated by the keyboard calibration sequence of Fig. 3) may be fully displayed, but in other embodiments the key may be shown in isolation in the same location it will occupy in that keyboard. When the user touches the selected key, the device determines relevant information on the contact patch at 430. Based on readings from the individual contact sensors in the touchscreen, the device may determine the size and shape of the contact patch, and in some embodiments may record the contact readings for the different areas of the contact patch at 440. At 450 the device may determine the centroid for the contact patch, i.e., the geometric center of the contact patch. In some procedures, the centroid may be considered the initial active point for the contact patch, but may then be relocated based on other factors, such as those previously described. In some embodiments, the touchscreen may provide the device with only a calculated centroid position for the contact patch, rather than providing multiple contact sensor readings and having the device calculate the centroid. At 460, the device may then return to 420 to calibrate the contact patch for another key on the keyboard. In some embodiments this process may be repeated for every key on the keyboard. In other embodiments, only some keys or touchscreen locations may be used in this calibration procedure, and calibration data for the rest may be interpolated from the results of those keys. In some embodiments, the same key may be calibrated more than once, to obtain an average of the readings for that key.
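A compact C sketch of the two calculations just described: the centroid at 450 as a pressure-weighted average of the individual sensor readings within the contact patch, and an active point derived from it by a row-dependent offset. The sensor structure, the offsets, and the two-handed scale factor are all invented for illustration; a real controller and real calibration data would differ.

```c
#include <stdio.h>

/* Assumed form of one contact-sensor reading within the contact patch. */
struct sensor_reading { double x, y, pressure; };

struct point { double x, y; };

enum row { ROW_INNER = 0, ROW_MIDDLE = 1, ROW_OUTER = 2 };

/* Step 450 as a pressure-weighted average: sensors that press harder pull
 * the centroid toward themselves. Assumes at least one nonzero reading. */
static struct point patch_centroid(const struct sensor_reading *s, int n)
{
    double w = 0.0, sx = 0.0, sy = 0.0;
    for (int i = 0; i < n; i++) {
        w  += s[i].pressure;
        sx += s[i].pressure * s[i].x;
        sy += s[i].pressure * s[i].y;
    }
    struct point c = { sx / w, sy / w };
    return c;
}

/* Shifts the centroid toward a perceived active point. The per-row offsets
 * (in pixels) and the two-handed scale factor are placeholder values; a
 * real device would derive them from calibration or population studies. */
static struct point active_point(struct point c, enum row r, int two_handed)
{
    static const double shift[3] = { 6.0, 4.0, 2.0 }; /* inner, middle, outer */
    double s = shift[r] * (two_handed ? 0.5 : 1.0);   /* thumb angle differs  */
    struct point p = { c.x, c.y - s };                /* shift toward the tip */
    return p;
}

int main(void)
{
    struct sensor_reading patch[3] = {
        { 100.0, 200.0, 0.9 }, { 104.0, 202.0, 0.6 }, { 98.0, 205.0, 0.3 }
    };
    struct point c = patch_centroid(patch, 3);
    struct point a = active_point(c, ROW_INNER, 0);   /* one-handed use */
    printf("centroid (%.1f, %.1f), active point (%.1f, %.1f)\n",
           c.x, c.y, a.x, a.y);
    return 0;
}
```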
After all the selected keys or areas of the touchscreen have been calibrated, either directly or through interpolation, the calibration sequence may be exited at 470. Keyboard Adaptation When more than one key is simultaneously touched by the user, the device may use various techniques to determine which key the user intended to touch. Some techniques are contextual in nature, and may not be performed immediately. For example, the device may compare the multiple contacted characters to the rest of the word to determine which character forms a real word, based on a spelling dictionary. If more than one of the possible characters forms a real word, the context of the sentence may be examined to select which word was most likely intended, and therefore which character was intended. In some cases, there may be a previous history of striking this same combination of adjacent keys, when a particular one of those keys was usually intended, and this history may be taken into account. If the touchscreen provides local pressure measurements, the key receiving the highest pressure from the thumb may be assumed to be the intended key. Alternately, the different pressure measurements for the different contacted keys may be multiplied by a probability factor, based on predictive text analysis, to determine which key was most likely intended. The 'hot spot', i.e., the spot that the device considers the center of a key, may be shifted if the user consistently misses the hot spot in the same direction. For example, if the user consistently types below the hot spot of the 's' key, the device may move the hot spot of the 's' key downward. Determining that the user consistently misses in the same direction may be done in various ways. For example, localized pressure sensing at various points in the contact patch may be used to determine that the highest pressure point in the contact patch consistently misses the center of the key in the same direction. In another example, it may be determined that the centroid or active point consistently misses the key or its hotspot in the same direction. In another example, words with that particular character may be frequently misspelled by substituting the same adjacent character. This may be verified, for example, when the user manually corrects such misspellings and/or accepts automatic spelling corrections. In some embodiments, the displayed position of the key may remain unchanged, even though the hotspot for that key is relocated. In other embodiments, the displayed position of the key may be moved so that the new hotspot is centered within the displayed key. In some embodiments, the device may attempt to differentiate between simply typing the wrong character (e.g., none of the contacted characters spell a word, or perhaps dyslexic letter inversion frequently occurs), and errors due to missing the hotspot, and adjust the hotspot only in response to the latter. Fig. 5 shows a flow diagram for a method of adaptively interpreting keystrokes, according to an embodiment of the invention. In flow diagram 500, at 510 the device may receive input from the touchscreen indicating a key has been touched, and determine the centroid of the contact patch. In some embodiments this input may comprise readings from multiple contact sensors that define the contact patch, which may then be converted to the location of the centroid.
In other embodiments, the input from the touchscreen may simply represent the location of the centroid, as determined by the touchscreen logic. In either case, the active point may be determined at 520 based on the location of the centroid and on previously determined differences between the centroid and active point for that part of the touchscreen. If the intended key is obvious based on the location of the active point, at 535 the device may assume that key was intended. However, if the active point is sufficiently close to multiple keys to cause uncertainty as to which key was intended, as determined at 530, the device may examine the hotspots of those multiple keys at 540. At 550, the device may determine the probability that each of these keys represents the correct key, based on various criteria. A significant criterion may be the previous history with this same grouping of keys. Based on these and other considerations, at 560 the device may select which key was most likely intended by the user, and enter that character into the typing sequence. At this point it may not be feasible to look at spelling and context considerations for that particular keystroke because the rest of the word or sentence has not been completed. The device may then return to 530 to process more keystrokes. At each pass through 565, the device may determine if it is finished with spelling/context considerations for that particular keystroke. If it is, it may possibly change the previously chosen key at 570 based on those factors. Once the chosen character and associated key have been finalized, the lessons learned from this process may be incorporated for future processing. For example, at 580 the location of the hotspot for that key may be adjusted. This information may be recorded for future use at 590. The foregoing description is intended to be illustrative and not limiting. Variations will occur to those of skill in the art. Those variations are intended to be included in the various embodiments of the invention, which are limited only by the scope of the following claims. |
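As a rough sketch of the selection step at 550-560 above, each candidate key can be scored by its hot spot's distance from the active point, weighted by a history-based probability, with the best score winning. The scoring formula (1/(1+d) weighting), structures, and numbers are invented for illustration, not the patent's actual rule.

```c
#include <math.h>
#include <stdio.h>

struct point { double x, y; };

struct key_candidate {
    char ch;            /* character the key represents                    */
    struct point hot;   /* current hot spot for the key                    */
    double prob;        /* history-based prior, e.g., from past groupings  */
};

/* Scores each candidate by closeness of its hot spot to the active point,
 * weighted by a history-based probability, and returns the best index. */
static int select_key(struct point active, const struct key_candidate *k, int n)
{
    int best = 0;
    double best_score = -1.0;
    for (int i = 0; i < n; i++) {
        double dx = k[i].hot.x - active.x, dy = k[i].hot.y - active.y;
        double score = k[i].prob / (1.0 + sqrt(dx * dx + dy * dy));
        if (score > best_score) { best_score = score; best = i; }
    }
    return best;
}

int main(void)
{
    struct key_candidate near[2] = {
        { 's', { 40.0, 120.0 }, 0.7 },   /* historically the intended key */
        { 'a', { 52.0, 118.0 }, 0.3 },
    };
    struct point active = { 47.0, 121.0 };
    printf("selected '%c'\n", near[select_key(active, near, 2)].ch);
    return 0;
}
```

The later spelling/context pass at 565-570 would then be free to overturn this choice, and the hot-spot relocation at 580 corresponds to nudging the stored hot coordinates toward where the user actually touches.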
A method and an apparatus for performing an I/O device access using targeted security. A software object (350) is executed. A security level for the software object (350) is established. A multi-table input/output (I/O) space access is performed using at least one of the security levels. The function of the software object (350) is executed. |
CLAIMS 1. A method, comprising: executing a software object (350); establishing a security level for said software object (350); performing a multi-table input/output (I/O) space access using at least one of said security levels; and executing said function of said object (350). 2. The method described in claim 1, wherein establishing a security level for said software object (350) further comprises assigning a security level relating to an I/O space access of at least a portion of a memory (347). 3. The method described in claim 1, wherein performing a multi-table I/O space access using at least one of said security levels further comprises: establishing a secondary I/O table (430); receiving an I/O space access request based upon execution of said software object (350); performing a multi-level table access based upon said I/O space access request using said secondary I/O table (430) and at least one virtual memory table; and accessing at least a portion of an I/O device (360) based upon said multi-level table access. 4. The method described in claim 3, wherein establishing a secondary I/O table (430) further comprises: dividing an I/O space (340) into a plurality of segments; determining at least one of said segments to omit from said secondary I/O table (430) and at least one un-omitted segment; assigning a default security level to said omitted segment; assigning a security level to said un-omitted segment; and correlating at least one assigned segment with an I/O space (340) location. 5. The method described in claim 3, wherein performing a multi-level table access based upon said I/O space access request further comprises: determining at least one security level that corresponds to a segment in said secondary I/O table (430); verifying a match between an execution security level and a security level associated with a segment being accessed in response to an execution of said object; determining an I/O space address based upon said secondary table (430) in response to a match between said execution security level and said security level associated with said segment being accessed; and locating an I/O device (360) corresponding to said I/O space address. 6. The method described in claim 5, wherein determining at least one security level that corresponds to a segment in said secondary I/O table (430) comprises: determining a physical I/O device address from said secondary I/O table (430); determining a segment being executed based upon said physical I/O device address; and defining a current security level based upon said determining of said segment being executed. 7. An apparatus for performing an I/O device access using targeted security, CHARACTERIZED IN THAT, the apparatus, comprising: a processor (310) coupled to a bus (315); means for coupling at least one software object (350) to said processor (310); an input/output (I/O) device; and an I/O access interface (320) coupled to said bus (315) and a memory unit (347), said I/O access interface (320) to provide said processor (310) a multi-level table I/O space access of at least a portion of said memory unit (347) based upon at least one security level, in response to said processor (310) executing said software object (350). 8.
The apparatus of claim 7, wherein said I/O space access interface comprises an I/O space access table (410) coupled with a secondary I/O table (430), said I/O access interface (320) to provide a virtual memory addressing scheme to access at least one portion of said I/O device (360) based upon a security level. 9. A computer readable program storage device encoded with instructions that, when executed by a computer, performs a method, comprising: executing a software object (350); establishing a security level for said software object (350); establishing a secondary input/output (I/O) table (430); receiving an I/O space access request based upon execution of said software object (350); determining at least one security level that corresponds to a segment in said secondary I/O table (430); verifying a match between an execution security level and a security level associated with a segment being accessed in response to an execution of said software object (350); determining an I/O space address based upon said secondary I/O table (430) in response to a match between said execution security level and said security level associated with said segment being accessed; locating a physical I/O device (360) location (physical memory location) corresponding to said I/O space address; and accessing a portion of an I/O device (360) based upon locating said physical memory location. 10. The computer readable program storage device encoded with instructions that, when executed by a computer, performs the method described in claim 9, wherein determining at least one security level that corresponds to a segment in said secondary I/O table (430) comprises: determining a physical I/O device (360) address from said I/O space table; determining a segment being executed based upon said physical I/O device (360) address; and defining a current security level based upon said determining of said segment being executed. |
METHOD AND APPARATUS FOR MULTI-TABLE ACCESSING OF INPUT/OUTPUT DEVICES USING TARGET SECURITY TECHNICAL FIELD This invention relates generally to computer system operations, and, more particularly, to a method and apparatus for performing a physical address-based security scheme to provide secure input/output (I/O) access. BACKGROUND ART Computers or computing systems are important elements in many of today's industrial and home applications. Many systems, such as manufacturing systems, power systems, product distribution systems, document systems, etc., are powered by computer systems that utilize processors. These processors perform a variety of tests and execute a plurality of software programs that interact with each other. Many times input/output devices permit manipulation of operations of processors and software programs. A standard level of security is desirable during operation of the processor such that certain software structures (e.g., software objects, subroutines, standalone programs, etc.) can be controlled and given priority over other software structures. Many times, access to certain software structures and certain processor functions is restricted in order to prevent unauthorized or inadvertent access or operation by processors. Current computer architectures include a scheme for utilizing virtual memory that uses several system-defined tables that are resident in the physical memory within a computer system. The entries within these system tables are generally pre-defined and include reserved sections that restrict access to certain software structures. Computing systems have evolved from single-task devices to multitask devices. A computing system employs an operating system to execute the many tasks and manage their resource utilization. Typically, when a user invokes a process (e.g., opens an application program such as a word processor), the operating system dedicates certain computing resources (e.g., portions of memory) for use by the task. Many computing resources, however, cannot or are not dedicated in this manner. Printer drivers, for example, are frequently used by multiple tasks. Operating systems therefore also usually define access rights and protocols for tasks relative to such shared resources. Thus, by virtue of the operating system's efforts, computing systems can simultaneously execute multiple tasks in an efficient manner. One important aspect in such a computing environment is "security." Computing systems that multitask employ security and protection services to protect their operating system from user processes, and to protect the processes from each other. Without protection, a rogue program could unintentionally destroy the program code or data in the memory space belonging to the operating system or to another process. Note that, at least in this context, security does not imply thwarting intentional malicious acts, although it contemplates protecting against these as well. Many processors, such as x86 processors, provide a plurality of security levels, such as privilege levels. Turning now to Figure 1, one example of the representation of a plurality of security levels is illustrated. The inverse pyramid styled structure in Figure 1 illustrates four levels of security (privilege): level 0, level 1, level 2, and level 3 through level n. The operating system is afforded a base privilege level such as level 0.
The privilege afforded by the security level 0 allows a particular software structure to obtain access provided by subsequent security levels such as levels 1-3. If a software structure is allowed only a privilege of security level 2, that particular software structure only has access and control over operations that are afforded by privilege levels 2 and 3. In many cases, popular operating systems, such as Microsoft Windows, do not utilize the full capabilities of the plurality of privilege levels. Some software operating systems only use two privilege levels, such as level 0 and level 3. A user application program may execute at security level 3 while the operating system services and all drivers operate at security level 0. This can open the computer system to a variety of security risks. This is particularly true since most drivers have access to all of the computer resources because they are operating at the most privileged level, security level 0. Therefore, an unauthorized access to a driver that controls an I/O device in the computer system, such as a modem device, can cause unauthorized operation of the I/O device, resulting in system destruction or misuse. Furthermore, unauthorized access to system I/O devices can cause loss of valuable data and software programs. The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above. DISCLOSURE OF INVENTION In one aspect of the present invention, a method is provided for performing an I/O device access using targeted security. A software object is executed. A security level for the software object is established. A multi-table input/output (I/O) space access is performed using at least one of the security levels. The function of the object is executed. In another aspect of the present invention, an apparatus is provided for performing an I/O device access using targeted security. The apparatus of the present invention comprises: a processor coupled to a bus; means for coupling at least one software object to the processor; an input/output (I/O) device; and an I/O access interface coupled to the bus and a memory unit, the I/O access interface to provide the processor a multi-level table I/O space access of at least a portion of the memory unit based upon at least one security level, in response to the processor executing the software object.
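The ring-style rule implied by the Figure 1 discussion above can be sketched in a few lines of C: a requester may reach a resource only when its level is at least as privileged as (numerically no greater than) the level assigned to the resource. The names and the main() demonstration are illustrative; the patent's own mechanism is the multi-table scheme described in the following sections.

```c
#include <stdbool.h>
#include <stdio.h>

/* Level 0 is the most privileged; a structure at level 2 can reach only
 * what levels 2 and 3 afford, as in the Figure 1 discussion. */
enum { LEVEL_0 = 0, LEVEL_1 = 1, LEVEL_2 = 2, LEVEL_3 = 3 };

static bool access_allowed(int execution_level, int resource_level)
{
    return execution_level <= resource_level;
}

int main(void)
{
    /* A level-3 user program cannot reach a level-0 driver resource... */
    printf("%d\n", access_allowed(LEVEL_3, LEVEL_0)); /* 0 (denied)  */
    /* ...but the level-0 operating system can reach everything.      */
    printf("%d\n", access_allowed(LEVEL_0, LEVEL_3)); /* 1 (allowed) */
    return 0;
}
```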
BRIEF DESCRIPTION OF THE DRAWINGS The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which: Figure 1 illustrates a stylistic representation of a plurality of privilege levels for secured access in a computer system; Figure 2 is a block diagram of a computer system that may be utilized in accordance with one embodiment of the present invention; Figure 3 is a more detailed block diagram representation of a processing unit shown in Figure 2, in accordance with one embodiment of the present invention; Figure 4 is a more detailed block diagram representation of an I/O access interface shown in Figure 3, in accordance with one embodiment of the present invention; Figures 5A and 5B illustrate a block diagram representation of an I/O-space/I/O-memory access performed by the processor illustrated in Figures 1-4; Figure 6 illustrates a flowchart depiction of a method of performing I/O-space/I/O-memory access using a security scheme in accordance with one embodiment of the present invention; Figure 7 illustrates a flowchart depiction of a method of performing a multi-table I/O-space/I/O-memory access described in Figure 6, in accordance with one embodiment of the present invention; Figure 8 illustrates a flowchart depiction of a method of setting up a secondary I/O table described in Figure 7, in accordance with one embodiment of the present invention; Figure 9 illustrates a flowchart depiction of a method of performing a multi-level table access described in Figure 7, in accordance with one embodiment of the present invention; Figure 10 illustrates a flowchart depiction of a method of determining a security level in the secondary I/O table, as described in Figure 9, in accordance with one embodiment of the present invention; and Figure 11 illustrates a flowchart depiction of a method of facilitating appropriate I/O-space/I/O-memory access in response to the multi-level table access, as described in Figure 7, in accordance with one embodiment of the present invention. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. MODE(S) FOR CARRYING OUT THE INVENTION Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure. Embodiments of the present invention provide for I/O space access using security access systems.
Embodiments of the present invention provide for a multiple I/O space and/or I/O-memory access table system to provide security during an I/O space access (e.g., accessing an I/O device) initiated by one or more processors in a computer system. Embodiments of the present invention also provide an I/O space access system that utilizes an I/O space access table and a secondary I/O access table, which results in increased security during I/O space and/or I/O-memory accesses. Turning now to Figure 2, one embodiment of a system 200 in accordance with the present invention is illustrated. The system 200 comprises a processing unit 210; a plurality of input/output devices, such as a keyboard 230, a mouse 240, an input pen 250; and a display unit 220, such as a monitor. The security level system disclosed by the present invention, in one embodiment, resides in the processing unit 210. An input from one of the input/output devices 230, 240, 250 may initiate the execution of one or more software structures, including the operating system, in the processing unit 210. I/O space and/or memory associated with an I/O space residing in the system 200 is then accessed to execute the various software structures residing in the processing unit 210. Embodiments of the present invention restrict I/O space accesses that are initiated by one or more software structures, based upon predetermined security entries programmed into the system 200. Turning now to Figure 3, a simplified block diagram of one embodiment of the processing unit 210 in accordance with the present invention is illustrated. The processing unit 210, in one embodiment, comprises a processor 310, an I/O access interface 320, an I/O space 340, and programmable objects 350, such as software objects or structures. The processor 310 may be a microprocessor, which may comprise a plurality of processors (not shown). In one embodiment, the I/O space 340 provides a "gateway" to an I/O device 360, such as a modem, disk drive, hard-disk drive, CD-ROM drive, DVD-drive, PCMCIA card, and a variety of other input/output peripheral devices. In an alternative embodiment, the I/O space 340 is integrated within the I/O device 360. In one embodiment, the I/O space 340 comprises a memory unit 347 that contains data relating to addressing and communicating with the I/O space 340. The memory unit 347 comprises a physical memory section that comprises physical memory such as magnetic tape memory, flash memory, random access memory, memory residing on semiconductor chips, and the like. The memory residing on semiconductor chips may take on any of a variety of forms, such as a synchronous dynamic random access memory (SDRAM), double-rate dynamic random access memory (DDRAM), or the like. The processor 310 communicates with the I/O space 340 through the system I/O access interface 320. In one embodiment, the I/O access interface 320 is of a conventional construction, providing I/O space addresses and logic signals to the I/O space 340 to characterize the desired input/output data transactions. Embodiments of the present invention provide for the I/O access interface 320 to perform a multi-table, security-based access system. The processor 310, in one embodiment, is coupled to a host bus 315. The processor 310 communicates with the I/O access interface 320 and the objects 350 via the host bus 315. The I/O access interface 320 is coupled to the host bus 315 and the I/O space 340.
The processor 310 is also coupled to a primary bus 325 that is used to communicate with peripheral devices. In one embodiment, the primary bus 325 is a peripheral component interconnect (PCI) bus (see PCI Specification, Rev. 2.1). A video controller (not shown) that drives the display unit 220 and other devices (e.g., PCI devices) are coupled to the primary bus 325. The computer system 200 may include other buses such as a secondary PCI bus (not shown) or other peripheral devices (not shown) known to those skilled in the art. The processor 310 performs a plurality of computer processing operations based upon instructions from the objects 350. The objects 350 may comprise software structures that prompt the processor 310 to execute a plurality of functions. In addition, a plurality of subsections of the objects 350, such as operating systems and user-interface software systems, such as Microsoft Word®, and the like, may simultaneously reside and execute operations within the processor 310. Embodiments of the present invention provide for a security level access and privilege for the processor 310. In response to execution of software codes provided by the objects 350, the processor 310 performs one or more I/O device accesses, including memory accesses, in order to execute the task prompted by the initiation of one or more objects 350. The I/O access performed by the processor 310 includes accessing I/O devices 360 to control the respective functions of the I/O devices 360, such as the operation of a modem. The I/O access performed by the processor 310 also includes accessing memory locations of I/O devices 360 for storage of execution codes and memory access to acquire data from stored memory locations. Many times, certain I/O devices 360, or portions of I/O devices 360, are restricted for access by one or more selected objects 350. Likewise, certain data stored in particular memory locations of I/O devices 360 are restricted for access by one or more selected objects 350. Embodiments of the present invention provide for multi-table security access to restrict access to particular I/O devices 360, or memory locations of I/O devices 360, in the system 200. The processor 310 performs I/O space access via the I/O access interface 320. The I/O access interface 320 provides access to the I/O space 340, which may comprise a gateway to a plurality of I/O devices 360. A multi-table virtual memory access protocol is provided by at least one embodiment of the present invention.
Embodiments of the present invention provide for performing I/O access using a multi-table I/O and memory access system. The multi-table I/O and memory access system utilized by embodiments of the present invention use a multilevel table addressing scheme (ie., using the I/O access table 410 in conjunction with the secondary I/O table 430) to access I/O space addresses via the I/O space interface 345. The I/O memory addresses are used by the processor 310 to locate the desired physical I/O location. The system 200 utilizes the I/O access table 410 in combination with at least one other table, such as the secondary I/O table 430, to define a virtual I/O space address that may relate to the virtual memory table. The I/O access table 410 and the secondary I/O access tables 430 are used to translate virtual I/O space addresses that lead to a physical I/O address. The physical I/O address points to a physical location of an I/O device 360 or to a memory location in the I/O device 360. The multi-level I/O access table system provided by embodiments of the present invention allows the secondary I/O table 430 to define entire sections of the I/O access table 410. In some instances, the secondary I/O table 430 may define a portion of a virtual I/O address that may not be present in the I/O access table 410. The secondary I/O table 430 can be used as a fine-tuning device that further defines a physical I/O location based upon a virtual I/O address generated by the I/O access table 410. This will result in more accurate and faster virtual I/O address definitions. In one embodiment, the secondary table 430, which may comprise a plurality of sub-set tables within the secondary table 430, is stored in the memory unit 347, or the main memory (not shown) of the system 200. The secondary 1/0 tables 430 are stored at high security levels to prevent unsecured or unverified software structures or objects 350 to gain access to the secondary I/O table 430. In one embodiment, the processor 310 requests access to a location in a physical I/O device location based upon instructions sent by an object 350. In response to the memory access request made by the processor 310, the I/O access interface 320 prompts the I/O access table 410 to produce a virtual I/O address, which is further defined by the secondary I/O table 430. The virtual I/O address then points to a location in the 1/0 space interface 345. The processor 310 then requests an access to the virtual I/O location, which is then used to locate a corresponding location in the I/O device 360. One embodiment of performing the memory access performed by the processor 310, is illustrated in Figure 5A, Figure 5B, and by the following description. Turning now to Figure 5A, one illustrative embodiment of an I/O access system 500 for storing and retrieving security level attributes in a data processor or computer system 200 is shown. In one embodiment, the I/O access system 500 is integrated into the <Desc/Clms Page number 6> processing unit 210 in the system 200. The I/O access system 500 is useful in a data processor (not shown) that uses a multi-table security scheme for accessing I/O space 340. For example, the I/O access system 500 may be used by the processor 310 when addressing I/O space 340 using a paging scheme, such as paging schemes implemented in x86 type microprocessors. In one embodiment, a single memory page in an x86 system comprises 4 Kbytes of memory. 
Moreover, the I/O access system 500 finds particular application in the processor 310, which assigns appropriate security level attributes at the page level. The I/O access system 500 receives an I/O space address 553 that is composed of a page portion 510 and an offset portion 520, as opposed to a virtual, linear, or intermediate address that would be received by a paging unit in an x86 type microprocessor. In one embodiment, the page portion 510 data addresses an appropriate memory page, while the offset portion 520 data addresses a particular offset I/O location within the selected page portion 510. The I/O access system 500 receives the physical address, such as would be produced by a paging unit (not shown) in an x86 type microprocessor. A multi-level lookup table 530, which is generally referred to as the extended security attributes table (ESAT), receives the page portion 510 of the physical I/O address. The multi-level lookup table 530 stores security attributes associated with each page 510 of memory. In other words, each page 510 has certain security level attributes associated with that page 510. In one embodiment, the security attributes associated with the page 510 are stored in the multi-level lookup table 530. For example, the security attributes associated with each page 510 may include look down, security context ID, lightweight call gate, read enable, write enable, execute, external master write enable, external master read enable, encrypt memory, security instructions enabled, etc. Many of these attributes are known to those skilled in the art having benefit of the present disclosure. In one embodiment, the multi-level lookup table 530 is located in the system memory (not shown) of system 200. In an alternative embodiment, the multi-level lookup table 530 is integrated into the processor 310, which includes a microprocessor that employs the system 200. Accordingly, the speed at which the multi-level lookup table 530 is capable of operating is, at least in part, dependent upon the speed of the system memory. The speed of the system memory, as compared to the speed of the processor 310, is generally relatively slow. Thus, the process of retrieving the security attributes using the multi-level lookup table 530 may slow the overall operation of the system 200. To reduce the period of time required to locate and retrieve the security attributes, a cache 540 is implemented in parallel with the multi-level lookup table 530. The cache 540 may be located on the same semiconductor die as the processor 310 (i.e., the cache 540 and the processor 310 being integrated on one semiconductor chip) or external to the processor die. Generally, the speed of the cache 540 may be substantially faster than the speed of the multi-level lookup table 530. The cache 540 contains smaller subsets of the pages 510 and their security attributes contained within the multi-level lookup table 530. Thus, for the pages 510 stored in the cache 540, the operation of retrieving the security attributes may be substantially enhanced. Turning now to Figure 5B, one embodiment of the multi-level lookup table 530 used for storing and retrieving the security attributes associated with a page 510 in memory is illustrated. The multi-level lookup table 530 comprises a first table 550, which is generally referred to as an ESAT directory, and a second table 552, which is generally referred to as the ESAT.
Generally, the first table 550 contains a directory of starting addresses for a plurality of ESATs 552 in which the security attributes for each of the pages 510 are stored. In the embodiment illustrated herein, a single ESAT directory 550 may be used to map the entire range of I/O addresses and/or memory within the I/O devices 360. A first portion of the I/O space address 553, which includes the highest-order bits and is generally referred to as the directory (DIR) 554, is used as a pointer into the first table 550. The I/O space address 553 may also comprise a portion that contains table data 570, which can identify the table 550, 552 being addressed. The I/O space address 553 further comprises the offset 520 within a table 550, 552 that leads to a particular entry 560, 580. The first table 550 is located in the system memory at a base address 555. The DIR portion 554 of the I/O space address 553 is added to the base address 555 to identify an entry 560, which points to a base address of an appropriate address in one of the second tables 552. In one embodiment, a plurality of the second tables 552 may be present in the multi-level lookup table 530. Generally, each one of the entries 560 in the first table 550 points to a starting address of one of the addresses in the second tables 552. In other words, each entry 580 may point to its own separate ESAT 552. In one embodiment, the first table 550 and each of the second tables 552 occupy one page 510 in physical memory. Thus, a conventional memory management unit in an x86 type microprocessor with paging enabled is capable of swapping the tables 550, 552 in and out of the system memory, as needed. That is, because of the multi-level arrangement of the tables 550, 552, it is not necessary for all of the tables 552 to be simultaneously present in the I/O space 340. If one of the tables 552 that is not currently located in the memory unit 347 is requested by an entry 560 in the first table 550, the conventional memory management unit (not shown) of the x86 microprocessor may read the page 510 from main memory, such as a hard disk drive, and store the requested page 510 in the system memory where it may be accessed. This one-page sizing of the tables 550, 552 reduces the amount of system memory needed to store the multi-level lookup table 530, and reduces the amount of memory swapping needed to access I/O space 340 using the tables 550, 552. In one embodiment, each page is 4 Kbytes in size, and the system memory totals 16 Mbytes. Thus, approximately 4000 ESAT tables 552 may reside within the system memory. In one embodiment, the 4000 ESAT tables 552 each may contain 4000 sets of security attributes. Furthermore, the ESAT directory 550 contains the starting address for each of the 4000 ESAT tables 552. The entry 560 of the first table 550 points to the base address of the appropriate second table 552. A desired entry 580 in the appropriate second table 552 is identified by adding a second portion 570 (the table portion) of the I/O space address 553 to the base address contained in the entry 560. In one embodiment, the entry 580 contains predetermined security attributes associated with the identified page 510 in the I/O space 340. The multi-table scheme illustrated in Figures 5A and 5B is an illustrative embodiment; those skilled in the art having benefit of the present disclosure may implement a variety of multi-table schemes in accordance with the present invention.
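The directory walk just described can be summarized in a short sketch. The following is a minimal illustration, not the patent's implementation: the 10/10-bit field split, the Python data shapes, and the function names are assumptions chosen only to mirror the DIR 554 / table-portion decomposition and the cache 540 fast path.

```python
# Hypothetical sketch of the two-level ESAT walk of Figures 5A/5B.
# Assumed layout: 4-Kbyte pages; the page number splits into a 10-bit DIR
# field (554) and a 10-bit table field; the patent fixes none of these widths.
PAGE_SHIFT = 12
DIR_BITS, TABLE_BITS = 10, 10

def walk_esat(io_address, esat_directory):
    """Return the security attributes for the page containing io_address.
    esat_directory models table 550: a list indexed by DIR whose entries
    are ESATs (tables 552), each a list of per-page attribute entries."""
    page = io_address >> PAGE_SHIFT
    dir_index = (page >> TABLE_BITS) & ((1 << DIR_BITS) - 1)  # DIR 554
    table_index = page & ((1 << TABLE_BITS) - 1)              # table portion
    esat = esat_directory[dir_index]      # entry 560: base of one ESAT 552
    return esat[table_index]              # entry 580: the page's attributes

# Stand-in for cache 540: memoize walked pages so repeat lookups are fast.
_cache = {}

def attributes_for(io_address, esat_directory):
    page = io_address >> PAGE_SHIFT
    if page not in _cache:                        # miss: slow table walk
        _cache[page] = walk_esat(io_address, esat_directory)
    return _cache[page]                           # hit: fast path

directory = [[{"read": True, "write": False}] * (1 << TABLE_BITS)]
assert attributes_for(0x1234, directory)["write"] is False
```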
Turning now to Figure 6, a flowchart depiction of the methods in accordance with one embodiment of the present invention is illustrated. An object 350 is initiated by the system 200 (block 610). The object 350, such as a particular software program (e.g., Microsoft Word®), can be initiated by the activation of an input/output device such as a mouse 240. When the object 350 is initiated by the system 200, the processor 310 executes the code provided by the object 350 (block 620). The system 200 then establishes a security level based upon a pre-determined security level for the object 350 (block 630). The system 200 then invokes a multi-table I/O space access (block 640). The multi-table I/O space access performed by the system 200 is described in greater detail below. Based upon the security level that is established and the multi-level I/O space access performed by the system 200, the function(s) of the object 350 are then executed (block 650). The functions of the object 350 may include reading a stored document, execution of a communications link initiated by a modem, such as a wireless modem, and the like. Turning now to Figure 7, a flowchart depiction of one embodiment of performing the multi-table I/O space access, described in block 640 of Figure 6, is illustrated. The system 200 performs a secondary table set-up function (block 710). Setting up the secondary I/O table 430 comprises placing and/or updating security level data in the secondary table 430. The secondary I/O table 430 can be used to define a plurality of sections within the I/O access table 410. The secondary I/O table 430 may contain data relating to entire sections of table entries (e.g., 560, 580 in Figure 5B) that may be missing from the I/O access table 410. In one embodiment, the system 200 divides I/O space 340 into pages 510, such that the processor 310 has access to I/O space 340 based upon the pages 510. In one embodiment, the pages 510 are defined to be memory sections of 4 Kbytes, which is compatible with x86 processors. The I/O access table 410 and the secondary I/O table 430 contain indexes into the tables 410, 430. These indexes can be used to calculate a physical I/O space address 553 that can be used to access an I/O device 360 and/or locate a particular portion of an I/O device 360, such as the physical memory of an I/O device 360. Accessing of I/O space 340 using the tables 410, 430, performed by the processor 310, is provided in greater detail below. Once the system 200 sets up the secondary I/O table 430, the system 200 checks for I/O space access requests from the processor 310 (block 720). Memory access requests from the processor 310 are generally prompted by an object 350. Some objects 350 require extensive I/O space 340 and/or memory accesses to perform their respective tasks, such as initiating communications through a modem, retrieving data pertaining to a particular document, and the like. The system 200 makes a determination whether an I/O space access request was received (block 730). When the system determines that an I/O space access request has not been received, the system 200 continues to check for I/O space access requests, as indicated by the path from block 730 back to block 720 in Figure 7. When the system 200 makes a determination that an I/O space access has been requested, the system 200 performs a multi-level table access, in accordance with one embodiment of the present invention (block 740).
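At this level the flow of Figure 6 is simple enough to express as a short, hedged sketch. Everything below is illustrative: the dictionary shape of an object and the io_gate callable are assumptions standing in for blocks 630-650, not structures named by the patent.

```python
# Minimal sketch of the Figure 6 flow: establish the object's security
# level (block 630), perform the multi-table I/O space access (block 640),
# then execute the object's function if the access is permitted (block 650).
def run_object(obj, io_gate):
    level = obj["security_level"]
    if io_gate(level):
        return obj["code"]()
    return "access denied"

word_processor = {"security_level": 3, "code": lambda: "document opened"}
print(run_object(word_processor, io_gate=lambda level: level <= 3))
# -> document opened
```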
A more detailed description of the multi-level table access performed by the system 200 is provided below. Once the system 200 performs the multi-table access described in block 740, the system 200 then allows appropriate I/O space access in response to the multi-level table access (block 750). In other words, the system 200 allows the object 350, which prompted the processor 310 to request a memory access, to actually gain access to the I/O device 360 and/or physical memory within an I/O device 360. Turning now to Figure 8, one embodiment of the method of setting up the secondary table 430, as indicated in block 710 of Figure 7, is illustrated. The system 200 divides the I/O space 340 and/or memory in an I/O device 360 into a plurality of segments (block 810). In one embodiment, these segments may be referred to as pages 510. In one embodiment, the segments/pages 510 are divided into memory sections equivalent to four kilobytes. In one embodiment, the division of the I/O space 340 into 4-Kbyte segments can be performed by hardware processes known to those skilled in the art having benefit of the present disclosure. In an alternative embodiment, the division of the I/O space 340 into segments can be performed using software techniques known to those skilled in the art having benefit of the present disclosure. The system 200 determines which segments to omit from the secondary table 430 and performs an omitting function (block 820). The segments that are omitted from the secondary table 430 are pages 510 that can be assigned a default security level. The omitted segments comprise pages 510 that can be allocated a broad-level or a low-level security level. Therefore, the system 200 assigns a default security level for omitted segments (block 830). The lowest security level is assigned to the omitted segments; therefore, the omitted segments can be accessed by virtually any software object 350 that prompts the processor 310 to access I/O space 340 and/or memory. The system 200 then assigns a security level that corresponds to each un-omitted segment/page 510 in the I/O space 340 (block 840). The system 200 assigns a security level to the pages 510 based upon expected accesses by particular objects 350 via the processor 310. The system 200 protects certain hardware devices and other memory locations in the processor unit 210 while assigning appropriate security levels to the un-omitted segments/pages 510. Once the security levels are assigned, the system 200 correlates particular segments/pages 510 with an initial or virtual I/O space address 553 (block 850). The virtual I/O space addresses 553 may point to particular I/O devices 360 and/or memory in an I/O device 360 based upon particular security levels. The system 200 then utilizes the correlation of virtual I/O space addresses 553 to segments in the I/O space 340 to create a multi-level secondary I/O table 430 (block 850). In one embodiment, particular spaces in the secondary I/O table 430 are omitted in order to save memory resources. As described above, the omitted memory locations are assigned a default security level, which is generally the lowest security level. Turning now to Figure 9, one embodiment of performing the multi-level table access process indicated in block 740 of Figure 7 is illustrated. After receiving a request for I/O space access, the system 200 determines a security level in the secondary I/O table 430 in response to the requested I/O space access (block 910).
The system 200 determines the security level in the secondary table 430 based upon the I/O space access in response to an indication to the processor 310 regarding the type of object 350 that initiates the execution of software in the processor 310. Certain software objects 350 require higher-level security access that allows access to certain sensitive I/O devices and/or data in memory. For example, a software object 350 that requires a communication transfer of data would require a high security-level clearance in order to access sensitive data from the processor unit 310. In contrast, a software object 350 that performs the function of a data processor, such as Microsoft Word®, would require a low level of security clearance to perform its task. The system 200 then examines the execution security level of the software object 350 initiating the I/O space access request, and the security level of the page 510 that is the target of the I/O space access (block 920). The processor 310 compares the security level of the currently executing software object 350 against the security level of the page 510 that is the target of the I/O space 340 and/or memory access, in order to determine a match (i.e., whether to allow the requested I/O space 340 and/or memory access). This prevents certain software objects 350 that are unauthorized to access certain I/O devices 360 and/or sensitive data in the physical memory of I/O devices 360 from accessing and controlling certain I/O devices 360 and/or memory locations. The system 200 then correlates the appropriate security level to the particular access request initiated by the software object 350 (block 930). The system 200 then correlates a secondary I/O table 430 address to the I/O space interface 345 location that corresponds to a location in the I/O space 340 and/or memory (block 940). The system 200 locates the I/O space 340 and correlates the appropriate security level to the physical I/O space 340 (block 950). In one embodiment, the I/O space access interface 320 performs the locating of the I/O space interface 345 location, and the correlating of the I/O space interface 345 location to a location in the I/O space 340. Turning now to Figure 10, one embodiment of determining the security level in the secondary table 430 in response to a memory access request of the processor 310, as indicated in block 910 of Figure 9, is illustrated. The system 200 determines the I/O space address 553 that is responsive to the I/O space access request from the I/O access table 410 (block 1010). The system 200 then locates the segment/page 510 that is being executed by the processor 310 responsive to the software object 350, based upon the physical I/O space address 553 (e.g., using the address as an index into the secondary I/O table 430) (block 1020). The system 200, when executing code based upon the software object 350, determines the security level of the page 510 from which the processor 310 is executing, which can define the current security level. Therefore, the system 200 effectively uses the segment/page 510 to define the security level (block 1030). The system 200 then sends the defined security level to the processor 310 to perform a proper I/O space access (block 1040). The completion of the steps illustrated in Figure 10 substantially completes the step of determining the security level in the secondary I/O table 430, as indicated in block 910 of Figure 9.
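Figures 8 and 10 together suggest a simple data shape: a sparse per-page table in which omitted pages fall back to the default (lowest) security level, and in which the current security level is the level of the page the processor is executing from. The sketch below is a hedged illustration under those assumptions; the names, the dictionary representation, and the choice of 3 as the lowest level are inventions for the example, not details from the patent.

```python
# Sketch of Figure 8's sparse secondary table and Figure 10's level lookup.
# Assumption: levels 0..3 with 0 most privileged; omitted pages (blocks
# 820-830) are simply absent and fall back to the default lowest level.
PAGE_SIZE = 4 * 1024
DEFAULT_SECURITY_LEVEL = 3

def build_secondary_table(assigned_levels):
    # assigned_levels: {page_number: level} for the un-omitted pages only.
    return dict(assigned_levels)

def security_level_for(table, io_address):
    page = io_address // PAGE_SIZE                 # locate the page (1020)
    return table.get(page, DEFAULT_SECURITY_LEVEL) # define the level (1030)

table = build_secondary_table({0: 0, 5: 1})   # protect pages 0 and 5 only
assert security_level_for(table, 0x0000) == 0            # assigned level
assert security_level_for(table, 0x2345) == 3            # omitted -> default
```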
Turning now to Figure 11, a flowchart depiction of one embodiment of the steps for performing the appropriate I/O space access described in block 750 of Figure 7 is illustrated. The system 200 checks the security level that corresponds to a particular memory access request (block 1110). The security level can be correlated with a particular I/O space access request based upon the particular software object 350 being executed by the processor 310. The system 200 then determines whether the security level is sufficient to allow access to an I/O resource and/or I/O memory (e.g., an I/O device 360 and/or a portion of memory of an I/O device 360) (block 1120). The system 200 checks to see if the security level clearance is appropriate to allow the I/O space access requested by the processor 310 and gain access to particular I/O devices 360 and/or memory locations within the I/O devices 360. When the system 200 determines that the security level is not high enough to allow I/O-resource/I/O-memory access based upon a particular I/O space access request made by the processor 310, the system 200 denies the requested I/O-resource/I/O-memory access (block 1140). When the system 200 determines that the security level is indeed sufficient to allow the requested I/O-resource/I/O-memory access, the system 200 allows the processor 310 or the software object 350 to gain access to a particular I/O device 360 and/or a memory location within an I/O device 360 in the physical memory 345 (block 1130). The completion of the steps indicated in Figure 11 substantially completes the process of allowing the appropriate memory access as indicated in block 750 of Figure 7. The principles taught by the present invention can be implemented into other types of automated frameworks. The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
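For completeness, the allow/deny decision of Figure 11 can be sketched in the same hedged style, reusing the page-level model from the previous example. Again, the function name, the example page numbers, and the numeric levels are illustrative assumptions, not details fixed by the patent.

```python
# Illustrative gate for blocks 1110-1140: compare the level of the page the
# processor executes from with the level of the target I/O page; access is
# granted only if the requestor is at least as privileged (0 = highest).
DEFAULT_LEVEL = 3

def handle_io_access(executing_page, target_page, page_levels):
    current = page_levels.get(executing_page, DEFAULT_LEVEL)  # block 1110
    required = page_levels.get(target_page, DEFAULT_LEVEL)
    if current <= required:                                   # block 1120
        return "granted"                                      # block 1130
    return "denied"                                           # block 1140

levels = {7: 0, 42: 0}   # e.g., a driver's code page and a modem's I/O page
assert handle_io_access(7, 42, levels) == "granted"
assert handle_io_access(9, 42, levels) == "denied"   # unprivileged object
```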
Embodiments of techniques and systems for performance of predicted actions are described. In embodiments, a predicted action performance engine ("PAE") may receive one or more probabilities of potential actions that may be performed on a computing device. The PAE may also receive a system context for the computing device describing available resources on the computing device, workload, etc. Based on these probabilities and the system context, the PAE may determine one or more predicted actions and/or resource utilizations which are likely to occur and which may be performed ahead of time. The PAE may then facilitate performance of these actions and/or resource utilizations. Other embodiments may be described and claimed. |
1. One or more non-transitory computer-readable media comprising instructions stored thereon that cause a first computing device, in response to execution of the instructions by the first computing device, to: receive one or more indications of a current system context for a second computing device; receive one or more probabilities of potential actions or resource utilizations of the second computing device, wherein to receive the one or more probabilities of potential actions or resource utilizations of the second computing device, the first computing device, in response to execution of the instructions, is to receive a flow structure comprising an ordered identification of potential actions or resource utilizations, wherein the flow structure is ordered by probability and ordered by distance in time from a current event; and select, based at least in part on the current system context and the one or more probabilities of potential actions or resource utilizations of the second computing device, one or more actions or resource utilizations to be performed to assist performance of one or more actions or resource utilizations that are predicted to occur, wherein the one or more probabilities includes a probability indicating that an action or resource utilization of the one or more actions or resource utilizations is predicted to occur more than once per occurrence of the current system context. 2. The one or more computer-readable media of claim 1, wherein to receive the one or more indications of the current system context, the first computing device, in response to execution of the instructions, is to receive one or more of: an execution state of a process, environmental information for the second computing device, or an indication of availability of a resource. 3. The one or more computer-readable media of claim 1, wherein to select the one or more predicted actions or resource utilizations, the first computing device, in response to execution of the instructions, is to select one or more actions or resource utilizations that can be performed with available resources on the second computing device without slowing down performance of the second computing device. 4. The one or more computer-readable media of claim 1, wherein the first and second computing devices are the same computing device, and the instructions are to further cause the computing device, in response to execution of the instructions by the computing device, to facilitate performance of the one or more selected actions or resource utilizations. 5. The one or more computer-readable media of claim 4, wherein to facilitate performance of the one or more selected actions, the first computing device, in response to execution of the instructions, is to one or more of: load executable code for the one or more actions that are predicted to occur, cache data from the resource, or perform a data access over a network. 6. The one or more computer-readable media of claim 1, wherein the first computing device, in response to execution of the instructions, is to add the selected action to the context.
7. An apparatus for predicting activities of the apparatus, the apparatus comprising: one or more computer processors; a probabilities engine to be operated by the one or more computer processors to: generate a flow structure to indicate a frequency of how often a transition between at least two steady states is observed during an observation period; and determine one or more probabilities of potential actions or resource utilizations by first one or more processes executing on the computing device based on the generated flow structure, wherein the one or more probabilities of potential actions or resource utilizations are based on the frequency of how often the transition between the at least two steady states is observed as indicated by the flow structure; and a predicted action engine to be operated by the one or more computer processors to: receive one or more indications of a current system context for a computing device; and select, based at least in part on the current context and the one or more probabilities of potential actions or resource utilizations by first one or more processes executing on the computing device, one or more predicted actions or resource utilizations to be performed by second one or more processes to assist performance of the one or more actions or resource utilizations that are predicted for the first one or more tasks, wherein the one or more probabilities includes a probability indicating that an action or resource utilization of the one or more actions or resource utilizations is predicted to occur more than once per occurrence of the current system context. 8. The apparatus of claim 7, further comprising at least a selected one of the first or second one or more processes. 9. The apparatus of claim 8, wherein the first and second one or more processes are the same one or more processes. 10. The apparatus of claim 7, wherein the probabilities engine comprises an analysis engine to be operated by the one or more computer processors to: determine the one or more probabilities of potential actions or resource utilizations by first one or more processes executing on the computing device; and provide the determined one or more probabilities to the predicted action engine. 11. The apparatus of claim 7, wherein the predicted action engine is to receive one or more indications of a current system context via receipt of one or more of: an execution state of a process, environmental information for the computing device, or an indication of availability of a resource. 12. The apparatus of claim 7, wherein the predicted action engine is to select one or more predicted actions or resource utilizations through selection of one or more actions or resource utilizations that can be performed with available resources without slowing down performance of the computing device. 13. The apparatus of claim 7, wherein: the apparatus and the computing device are the same device; and the predicted action engine is to be operated by the one or more computer processors to facilitate performance of the one or more selected actions or resource utilizations. 14. The apparatus of claim 13, wherein the predicted action engine is to facilitate performance of the one or more selected actions through one or more of: a load of executable code for the one or more actions that are predicted to occur, a cache of data from the resource, or performance of a data access over a network.
15. The apparatus of claim 7, wherein the predicted action engine is further to be operated by the one or more computer processors to receive the one or more probabilities of potential actions or resource utilizations by the first one or more processes. 16. The apparatus of claim 15, wherein the predicted action engine is to receive the one or more probabilities of potential actions or resource utilizations by the first one or more processes through receipt of the flow structure, wherein the flow structure further comprises an ordered identification of potential actions or resource utilizations. 17. The apparatus of claim 16, wherein the predicted action engine is to receive the structure comprising the ordered identification of potential actions or resource utilizations through receipt of a structure ordered by probability and/or distance in time from a current event. 18. The apparatus of claim 7, wherein the predicted action engine is further to be operated to add the selected action to the context. 19. A computer-implemented method for improving responsiveness of a first computing device, the method comprising: receiving, by a second computing device, one or more indications of a current system context for the first computing device; receiving, by the second computing device, one or more probabilities of potential actions or resource utilizations of the first computing device, wherein receiving the one or more probabilities of potential actions or resource utilizations of the first computing device comprises receiving a flow structure comprising an ordered identification of potential actions or resource utilizations, wherein the flow structure is ordered by probability and by distance in time from a current event; and selecting, by the second computing device, based at least in part on the current system context and one or more probabilities of potential actions or resource utilizations of the first computing device, one or more actions or resource utilizations to be performed to assist performance of one or more actions or resource utilizations that are predicted to occur, wherein the one or more probabilities includes a probability indicating that an action or resource utilization of the one or more actions or resource utilizations is predicted to occur more than once per occurrence of the current system context. 20. The method of claim 19, wherein receiving one or more indications of a current system context comprises receiving one or more of: an execution state of a process, environmental information for the first computing device, or an indication of availability of a resource. 21. The method of claim 19, wherein selecting one or more predicted actions or resource utilizations comprises selecting one or more actions or resource utilizations that can be performed with available resources without slowing down performance of the first computing device. 22. The method of claim 19, wherein the first and second computing devices are the same computing device, and the method further comprises facilitating performance, by the computing device, of the one or more selected actions or resource utilizations. 23. The method of claim 22, wherein facilitating performance of the one or more selected actions comprises loading executable code for the one or more actions that are predicted to occur. 24. The method of claim 22, wherein facilitating performance of the one or more selected resource utilizations comprises one or more of caching data from the resource or performing a data access over a network.
25. The one or more non-transitory computer-readable media of claim 1, wherein the one or more indications of a current system context includes an indication that a specific action has taken place, and wherein the probability indicating that the action or resource utilization is predicted to occur more than once per occurrence of the current system context includes a probability indicating that the action or resource utilization is predicted to occur more than once per occurrence of the specific action. 26. The apparatus of claim 7, wherein the one or more indications of a current system context includes an indication that a specific action has taken place, and wherein the probability indicating that the action or resource utilization is predicted to occur more than once per occurrence of the current system context includes a probability indicating that the action or resource utilization is predicted to occur more than once per occurrence of the specific action. 27. The method of claim 19, wherein the one or more indications of a current system context includes an indication that a specific action has taken place, and wherein the probability indicating that the action or resource utilization is predicted to occur more than once per occurrence of the current system context includes a probability indicating that the action or resource utilization is predicted to occur more than once per occurrence of the specific action. |
PERFORMANCE OF PREDICTED ACTIONS Cross Reference to Related Application The present application claims priority to U.S. Patent Application No. 13/539,177, filed June 29, 2012, the entire content of which is hereby incorporated by reference in its entirety for all purposes. Background Many users experience slower-than-expected performance when using computing devices. In particular, many new computers and devices are often perceived as only marginally faster than their predecessors because response time of the system to user input may remain similar to older systems. Similarly, common applications may be perceived to take about the same amount of time to start or to complete. For example, clicking on a button in a user interface or starting a new command often tends to result in a largely constant response time from system to system. This performance may appear to be almost independent from the real performance and capabilities of the underlying system. While use of solid state drives and smarter caching mechanisms may help in some circumstances, they have not solved this issue. Brief Description of the Drawings Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. Figure 1 is a block diagram illustrating an example predicted action performance system, in accordance with various embodiments. Figure 2 is a block diagram illustrating an example probabilities engine, in accordance with various embodiments. Figure 3 illustrates an example action prediction and performance process, in accordance with various embodiments. Figure 4 illustrates an example probability generation process, in accordance with various embodiments. Figure 5 illustrates an example flow structure generation process, in accordance with various embodiments. Figure 6 illustrates an example observation collection process, in accordance with various embodiments. Figure 7 illustrates an example flow structure, in accordance with various embodiments. Figure 8 illustrates an example process for generating probabilities from a flow structure, in accordance with various embodiments. Figure 9 illustrates an example expected value structure, in accordance with various embodiments. Figure 10 illustrates an example predicted action performance process, in accordance with various embodiments. Figure 11 illustrates an example computing environment suitable for practicing the disclosure, in accordance with various embodiments. Detailed Description In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents. Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter.
However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments. For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, the term "module" may refer to, be part of, or include an Application Specific Integrated Circuit ("ASIC"), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. Referring now to Figure 1, a block diagram is shown illustrating embodiments of an example predicted action performance system. In various embodiments, the predicted action performance system may include a predicted action engine 100 ("PAE 100") and a probabilities engine 110 ("PE 110"). In various embodiments, the PAE 100 may be configured to receive information about the historical and/or current operation of a computing device. The PAE 100 may be configured to, based in part on this information, select one or more actions to support potential actions and/or resource utilizations that are predicted as likely to occur on the computing device. In various embodiments, actions may include such things as starting of processes, opening a window or dialog box, incoming network events, or user interaction. For example, the PAE 100 may be configured to select to pre-load code for an application that is predicted to be executed soon, or may read data into a cache. As illustrated in the example of Figure 1, in various embodiments, the PAE 100 may be configured to select actions to support potential actions and/or resource utilizations of an executing process, such as process 150. In various embodiments, the process 150 may include a subprocess 160. In various embodiments, the PAE 100 may be configured to predict that a second subprocess 170 is likely to be executed in the near future. Thus, in various embodiments, the PAE 100 may be configured to facilitate pre-fetching of (and/or early execution of) code for the subprocess 170. In other embodiments, the PAE may be configured to cause pre-fetching and/or early execution of executable code that is outside of a currently-executing process. For example, if an email is received with an attachment of a particular document type, the PAE 100 may select to pre-fetch code for an application or process that is configured to read that document type. Similarly, in some embodiments, the PAE 100 may be configured to predict that an external resource 175 (for example, a network card) is likely to be used in the near future (for example, to perform a domain name system search).
Thus, in various embodiments, the PAE 100 may be configured to facilitate the making of an early request of the external resource 175. Recognizing that the foregoing example was merely indicative of potential actions and capabilities of the PAE 100, in other embodiments, different processes or external resources may be involved. In the examples of Figure 1, aspects of the predicted action performance system may be illustrated on the left side of the dashed line, while aspects of the computing device for which the predicted action performance system is predicting action may be illustrated on the right side of the dashed line. Thus, in some embodiments, the predicted action performance system may be configured to operate on a device or apparatus that is separate from the computing device for which actions are being predicted. However, in various embodiments, one or more aspects of the predicted action performance system may be operated on the same computing device that actions are being predicted for. In various embodiments, the PAE 100 may be configured to receive one or more probabilities of potential actions to be performed on a computing device. In various embodiments, the PAE 100 may receive these probabilities from the PE 110. Particular embodiments of the PE 110 are discussed below. In various embodiments, the PAE 100 may also be configured to receive (or otherwise obtain) a current system context 120 for the computing device. In various embodiments, the system context may include a state of the computing device (e.g., power, performance, memory, storage, load, battery state, and/or thermal data), logical environment (e.g., network connectivity, data received over a network), and/or physical location of the computing device (e.g., is the computing device mobile, at home, at an office, on a flight, in a foreign country, etc.). In various embodiments, the context may include other information, both outside and inside the computing device, data, and/or conclusions that may be drawn from that information and data. In various embodiments, the current system context may be received passively by the PAE 100, such as by applications or system processes reporting system context information to the PAE 100. In other embodiments, the PAE 100 may be configured to actively request and/or otherwise obtain the current system context 120 from the computing device. In various embodiments, the PAE 100 may be configured to select actions for performance based on available system resources, such as those identified in the current system context. Referring now to Figure 2, a block diagram is shown illustrating an example PE 110, in accordance with various embodiments. In various embodiments, the PE 110 may include an observation engine 250 ("OE 250") and an analysis engine 260 ("AE 260"). In various embodiments, the OE 250 may be configured to receive actions and resource utilizations 210 of the computing device. As described herein, the OE 250 may generate a flow structure 250 describing steady states and transitions of the computing device based
In various embodiments, the actions/resource utilizations 210 may be received passively by the OE 250, such as by applications or system processes reporting indications of actions and/or resource utilizations that have been performed to the OE 250. In other embodiments, the OE 250 may configured to actively request and/or otherwise obtain the actions and/or resource utilizations 210 from the computing device. In various embodiments, the OE 250 may also be configured to receive application context information from one or more applications 220 executing on the computing device. In various embodiments, the application 220 may include a context component 230 which may be in communication with the OE 250 in order to provide the context information. The application 220 may be so configured in order to provide the OE 250, and therefore the PE 110, with more information than would otherwise be available to the PE 110 without direct assistance from applications executing on the computing device. For example, a coding environment application 220 may provide, such as through its context component 230, tags that describe a type of code is being written in the application. In another example, an email application 220 may provide a tag that an email has been received, a tag of the sender of the email, and a tag describing that a .ppt file is attached. This information may be used by the PE 110 to determine that every time an email with a .ppt file is received from a certain person, PowerPoint is likely to be executed. The PAE 100 may thus facilitate the loading of code for the PowerPointTM application. In various embodiments, the context component 230 may provide information such as, but not limited to, application state, information describing one or more files accessed by the application 220, messages received by the application 220, the identity of one or more recipients or senders of information to the application, etc. In various embodiments the context component 230 may provide application context information to the OE 250 in the form of one or more tags. As described below, these tags may be appended to actions and/or resource utilizations 210 received by the OE 250 in order to provide additional context for these received actions and/or resource utilizations 210; this, in turn, may allow the OE to generate more accurate and/or detailed flow structures 250. Similarly, the OE -5WO 2014/003919 PCT/US2013/042092 250 may, in various embodiments, provide one or more context tags 225 to the AE 260, which may be used to provide context to one or more current actions 205. This provision of the context tag 255 may, in various embodiments, facilitate the AE 260 in producing more accurate probabilities 270. Particular uses of application context information and tags are described herein. Figure 3 illustrates an example action prediction and performance process 300, in accordance with various embodiments. The process may begin at operation 320, where, in various embodiments, the PE 110 may generate one or more probabilities for use by the PAE 100. Particular embodiments of operation 320 are discussed below. Next, at operation 340, the PAE 100 may perform one or more predicted actions based on the probabilities generated by the PE 110 at operation 320. In embodiments, the performance of predicted actions at operation 340 may also be based in part on the current system context 120. Particular embodiments of operation 340 are discussed below. 
In various embodiments, the process may then repeat at operation 320 for additional probabilities and predicted actions. In some embodiments, the process may instead end. Figure 4 illustrates an example probability generation process 400, in accordance with various embodiments. In various embodiments, process 400 may be performed by the PE 110 to implement one or more embodiments of operation 320 of process 300. The process may begin at operation 410, where the OE 250 may generate a flow structure 250. Particular embodiments of operation 410 are discussed below. Next, at operation 420, the AE 260 may generate probabilities based on the generated flow structure 250 and a current action 205. Particular embodiments of operation 420 are discussed below. Next, at operation 430, the probabilities may be output from the AE 260. In various embodiments, the output probabilities may be ordered for ease of use by the PAE 100. Thus, in some embodiments, the probabilities may be ordered by likelihood. In other embodiments, the probabilities output by the AE 260 may be ordered by assumed distance in time from the current action 205. The process may then end. Figure 5 illustrates an example flow structure generation process 500, in accordance with various embodiments. In various embodiments, process 500 may be performed by the OE 250 to implement one or more embodiments of operation 410 of process 400. The process may begin at operation 520, where the OE 250 may collect information about actions and/or resource utilizations from the computing device. In various embodiments, these observations may also be acquired from one or more applications. Particular embodiments of operation 520 are described below with reference to process 600 of Figure 6. Referring now to Figure 6, that figure illustrates an example observation collection process 600, in accordance with various embodiments. In various embodiments, process 600 may be performed by the OE 250 to implement one or more embodiments of operation 520 of process 500. The process may begin at operation 610, where the OE 250 may receive application context information from an application 220. In various embodiments, the application context information may be received from a context component 230 of the application 220. In some embodiments, the application context information may be received in the form of a tag. The following descriptions of operations of process 600 thus may make specific reference to a tag; however, it may be recognized that, in other embodiments, the received application context information may take other forms. At operation 620, the OE 250 may push the recently-received tag onto a stack data structure. In various embodiments, a stack is used in order to allow for easy removal of the context, as well as to allow for nesting of tags as they are applied to received actions and resource utilizations; in other embodiments, other data structures may be used to store the tags. Next, at operation 630, the OE 250 may obtain one or more actions and/or resource utilizations. As discussed above, in various embodiments, these actions and/or resource utilizations may be received passively, while in others, the OE 250 may actively seek out action and/or resource utilization information. Next, at operation 640, the OE 250 may tag the received action/resource utilization with the recently-received tag.
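Operations 610-660 amount to a tag stack wrapped around an observation loop: push a tag, tag what is observed, pop when the context ends. A minimal sketch follows; the class and method names are assumptions for illustration, not the patent's interfaces.

class ObservationEngine:
    def __init__(self):
        self.tag_stack = []      # active application-context tags (operation 620)
        self.observations = []   # (action, tags) pairs (operations 630-640)

    def push_tag(self, tag):
        self.tag_stack.append(tag)

    def pop_tag(self):
        self.tag_stack.pop()     # the relevant context ended (operations 650-660)

    def observe(self, action):
        # attach a snapshot of the active tags to the received action
        self.observations.append((action, tuple(self.tag_stack)))

oe = ObservationEngine()
oe.push_tag("email:received;attach=.ppt")
oe.observe("exec:/usr/bin/powerpoint")
oe.pop_tag()
print(oe.observations)
# -> [('exec:/usr/bin/powerpoint', ('email:received;attach=.ppt',))]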
This tagging may, in various embodiments, facilitate the OE 250 in providing application context information to accompany received actions and/or resource utilizations, providing improved probability generation. In various embodiments, the OE 250 may repeat operations 630 and 640 in order to receive (and tag) additional actions and/or resource utilizations. However, the OE 250 may also receive an indication that an application context associated with the application context information has changed, such as at operation 650. Thus, for example, an application 220 may receive a user interaction where a user may select a menu. The application 220 may, such as using its context component 230, then send a tag indicating this menu selection to the OE 250. Later, if the user ends selection of the menu, the context component 230 of the application 220 may indicate to the OE 250 that the relevant context has ended. Then, at operation 660, the OE 250 may remove the tag from the stack structure. This may effectively end the tagging of future received actions with the received tag. The process may then end. Returning to process 500 of Figure 5, after collecting information about actions and/or resource utilizations, process 500 may continue to operation 530, where the OE 250 may identify one or more steady states of the computing device. In various embodiments, as illustrated below, these steady states may represent states at which the computing device is in a consistent state at a particular time. A steady state may, in various embodiments, include a consistent state of the context of the computing device. In some embodiments, a steady state may include a consistent state of one or more internal variables of the computing device, such as, for example, a current working directory, a current IP address of a network device, a current running state of one or more applications, etc. For example, in one embodiment, an example steady state may be described at a high level as "email program is running in foreground, displaying an editor window, waiting for user input." Next, at operation 540, the OE 250 may identify one or more transitional actions and/or resource utilizations that may be performed by the computing device. For example, at operation 540, the OE 250 may identify that a directory change command causes the computing device to change between directory steady states. In another example, at operation 540, the OE 250 may identify that a command to execute an application may cause the computing device to change to a steady state where the application is executing. In another example, a transitional action may include receipt of a command from a user (such as a "send" command in an email application). Next, at operation 550, the OE 250 may generate frequencies of each of the steady states based on its received information about actions and resource utilizations. Particular examples of these frequencies may be seen below at Figure 7. At operation 560, these frequencies may be provided to the AE 260 for use in determining probabilities to be used by the PAE 100. The process may then end. Figure 7 illustrates an example flow structure with steady states and frequencies, in accordance with various embodiments. In the illustrated example, steady states are illustrated as graph nodes, while the graph transitions show frequencies of how often the OE 250 observed that particular transition between the two steady states during a given period of observation.
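A flow structure like that of Figure 7 can be represented as a weighted graph built from an observed sequence of steady states. The nested-dictionary representation below is an assumption chosen for brevity, not the patent's data structure.

from collections import defaultdict

transitions = defaultdict(lambda: defaultdict(int))

observed = ["/usr/bin/bash::bash", "/usr/bin/make", "/usr/bin/make::make",
            "/usr/bin/bash::bash", "/usr/bin/make"]
for src, dst in zip(observed, observed[1:]):
    transitions[src][dst] += 1   # count each observed steady-state transition

for src, edges in transitions.items():
    for dst, freq in edges.items():
        print(f"{src} -> {dst}: {freq}")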
As the illustrated flow structure 700 shows, steady states may, in various embodiments, include receipt of a command to execute an application (e.g., "/usr/bin/bash", "/usr/bin/make/", "/bin/rm") or may include execution of a process based on that command (e.g., "/usr/bin/bash::bash", "/usr/bin/make::make"). It may be noted that, while the example flow structure of Figure 7 does not show steady states tagged with application context information, in various embodiments, the flow structure may additionally include application context information. Thus, in various embodiments, more than one steady state may exist for a given directory or process, but with different tags. Figure 8 illustrates an example process 800 for generating probabilities from a flow structure, in accordance with various embodiments. In various embodiments, process 800 may be performed by the AE 260 to implement operation 420 of process 400. The process may begin at operation 810, where the AE 260 may receive the flow structure generated by the OE 250. Next, at operation 820, the AE 260 may receive an indication of a current action 205. At operation 830, the AE 260 may receive application context tags 255 from the OE 250; these tags may be used to better identify relevant steady states and transitions in the flow structure. Next, at operation 840, the AE 260 may compute expected values that follow the received action. In various embodiments, the expected values may be computed based on direct frequencies between each steady state and the next, and may not include frequencies that are not related to the transition for which the expected value is being computed. In various embodiments, the AE 260 may utilize a sub-structure of the received flow structure that only includes steady states that may be reached after performance of the current action 205. In various embodiments, the AE 260 may then compute the expected values for how often each subsequent steady state may be reached after the current action 205. Referring now to Figure 9, Figure 9 illustrates an example expected value structure 900, in accordance with various embodiments. As illustrated in the example of Figure 9, in various embodiments, the AE 260 may compute expected values in the form of a number of times the transition may be performed out of 100. For example, if, based on a current action, a given application is expected to be run 50% of the time, the expected value of a transition to that application may be 50 (out of 100). In another example, if an application is expected to be run, on average, twice, the expected value may be 200 out of 100. In some embodiments, the expected value may be capped at a maximum value. Returning to Figure 8, at operations 850 and 860, the AE 260 may compute, from the computed expected values, effective probabilities of steady states (850) and of resource utilizations (860). In various embodiments, the AE 260 may compute the effective probabilities by directly multiplying the expected values in probabilistic form. In other embodiments, the AE 260 may utilize other methods of computing the probabilities, such as using artificial intelligence-based techniques or by including other information. Finally, at operation 870, the AE 260 may order the computed probabilities, such as by likelihood or distance (e.g., distance in the flow structure) from the current action 205. The process may then end.
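Operations 840-870 can be approximated by chaining per-transition expected values multiplicatively along the reachable sub-structure and then ordering the results. The sketch below illustrates that reading; the traversal strategy and the numbers are assumptions, not the patent's algorithm.

transitions = {
    "make": {"gcc": 80, "rm": 20},   # observed transition frequencies
    "gcc": {"ld": 60, "gcc": 40},
}

def effective_probabilities(start):
    results, frontier = {}, [(start, 1.0)]
    while frontier:
        node, p = frontier.pop()
        out = transitions.get(node, {})
        total = sum(out.values())
        for nxt, freq in out.items():
            q = p * freq / total          # chain expected values (operation 840)
            if q > results.get(nxt, 0.0):
                results[nxt] = q
                if nxt != node:           # don't re-expand trivial self-loops
                    frontier.append((nxt, q))
    return sorted(results.items(), key=lambda kv: -kv[1])   # order (operation 870)

print(effective_probabilities("make"))
# -> [('gcc', 0.8), ('ld', 0.48), ('rm', 0.2)]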
Figure 10 illustrates an example predicted action performance process 1000, in accordance with various embodiments. In various embodiments, the PAE 100 may perform process 1000 to implement operation 340 of process 300 of Figure 3. The process may begin at operation 1010, where the PAE 100 may obtain a system context from the computing device. As discussed above, in various embodiments, the system context may include resource availability, such as memory or storage capability, current workload, location of execution, and/or environmental information, such as a temperature of the computing device. Next, at operation 1020, the PAE 100 may obtain one or more probabilities for actions and/or resources, such as from the PE 110. As discussed above, in various embodiments, these probabilities may be ordered for use by the PAE 100. Next, at operation 1030, the PAE 100 may select actions and/or resource utilizations that support potential actions and/or resource allocations and which may be performed given the current system context for the computing device. Thus, in various embodiments, the PAE 100 may determine, for the potential actions and/or resource utilizations for which probabilities were received, which support actions and/or resource utilizations may be performed, given the capabilities indicated by the system context. In various embodiments, the PAE 100, at operation 1030, may determine which of these support actions and/or resource utilizations may be performed without causing a noticeable slowdown to a user of the computing device. Finally, at operation 1040, the PAE 100 may facilitate performance of the selected actions and/or resource utilizations. In various embodiments, the PAE 100 may itself direct performance of the actions and/or resource utilizations. In other embodiments, the PAE 100 may request performance of the actions and/or resource utilizations from other entities. The process may then end. Figure 11 illustrates, for one embodiment, an example computer system 1100 suitable for practicing embodiments of the present disclosure. As illustrated, example computer system 1100 may include system control logic 1108 coupled to at least one of the processor(s) 1104, system memory 1112 coupled to system control logic 1108, non-volatile memory (NVM)/storage 1116 coupled to system control logic 1108, and one or more communications interface(s) 1120 coupled to system control logic 1108. In various embodiments, the one or more processors 1104 may be a processor core. System control logic 1108 for one embodiment may include any suitable interface controllers to provide for any suitable interface to at least one of the processor(s) 1104 and/or to any suitable device or component in communication with system control logic 1108. System control logic 1108 for one embodiment may include one or more memory controller(s) to provide an interface to system memory 1112. System memory 1112 may be used to load and store data and/or instructions, for example, for system 1100. In one embodiment, system memory 1112 may include any suitable volatile memory, such as suitable dynamic random access memory ("DRAM"), for example. System control logic 1108, in one embodiment, may include one or more input/output ("I/O") controller(s) to provide an interface to NVM/storage 1116 and communications interface(s) 1120. NVM/storage 1116 may be used to store data and/or instructions, for example.
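One way to realize the "no noticeable slowdown" test of operation 1030 is to gate each support action on an estimated foreground-impact threshold. The numbers and the 50 ms perceptibility budget below are invented for illustration; the patent does not specify them.

NOTICEABLE_MS = 50   # assumed perceptibility budget, not from the disclosure

support_actions = [
    ("load_powerpoint_code", 0.9, 30),   # (name, probability, est. impact in ms)
    ("cache_network_data", 0.6, 80),
    ("prefetch_icons", 0.5, 10),
]

eligible = [(n, p) for n, p, ms in support_actions if ms <= NOTICEABLE_MS]
eligible.sort(key=lambda np: -np[1])     # preserve the likelihood ordering
print(eligible)   # -> [('load_powerpoint_code', 0.9), ('prefetch_icons', 0.5)]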
NVM/storage 1116 may include any suitable non-volatile memory, such as flash memory, for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disk drive(s) ("HDD(s)"), one or more solid-state drive(s), one or more compact disc ("CD") drive(s), and/or one or more digital versatile disc ("DVD") drive(s), for example. The NVM/storage 1116 may include a storage resource physically part of a device on which the system 1100 is installed, or it may be accessible by, but not necessarily a part of, the device. For example, the NVM/storage 1116 may be accessed over a network via the communications interface(s) 1120. System memory 1112 and NVM/storage 1116 may include, in particular, temporal and persistent copies of predicted action performance logic 1124. The predicted action performance logic 1124 may include instructions that, when executed by at least one of the processor(s) 1104, result in the system 1100 practicing one or more of the predicted action performance operations described above. In some embodiments, the predicted action performance logic 1124 may additionally/alternatively be located in the system control logic 1108. Communications interface(s) 1120 may provide an interface for system 1100 to communicate over one or more network(s) and/or with any other suitable device. Communications interface(s) 1120 may include any suitable hardware and/or firmware, such as a network adapter, one or more antennas, a wireless interface, and so forth. In various embodiments, communication interface(s) 1120 may include an interface for system 1100 to use NFC, optical communications (e.g., barcodes), Bluetooth, or other similar technologies to communicate directly (e.g., without an intermediary) with another device. For one embodiment, at least one of the processor(s) 1104 may be packaged together with system control logic 1108 and/or predicted action performance logic 1124. For one embodiment, at least one of the processor(s) 1104 may be packaged together with system control logic 1108 and/or predicted action performance logic 1124 to form a System in Package ("SiP"). For one embodiment, at least one of the processor(s) 1104 may be integrated on the same die with system control logic 1108 and/or predicted action performance logic 1124. For one embodiment, at least one of the processor(s) 1104 may be integrated on the same die with system control logic 1108 and/or predicted action performance logic 1124 to form a System on Chip ("SoC"). The following paragraphs describe examples of various embodiments. In various embodiments, an apparatus for predicting activities of the apparatus may include one or more computer processors. The apparatus may also include a predicted action engine configured to be operated by the one or more computer processors to receive one or more indications of a current system context for a computing device. The predicted action engine may also be configured to be operated to select, based at least in part on the current context and one or more probabilities of potential actions or resource utilizations by first one or more processes executing on the computing device, one or more predicted actions or resource utilizations to be performed by second one or more processes to support the one or more actions or resource utilizations that are predicted for the first one or more processes.
In various embodiments, the apparatus may further include at least a selected one of the first or second one or more processes. In various embodiments, the first and second one or more processes are the same one or more processes. In various embodiments, the apparatus may further include a probabilities engine further configured to be operated by the one or more computer processors to determine the one or more probabilities of potential actions or resource utilizations by the first one or more processes executing on the computing device and to provide the determined one or more probabilities to the predicted action engine. In various embodiments, the predicted action engine may be configured to receive one or more indications of a current system context via receipt of an execution state of a process. In various embodiments, the predicted action engine may be configured to receive one or more indications of a current system context via receipt of environmental information for the computing device. In various embodiments, the predicted action engine may be configured to receive one or more indications of a current system context via receipt of an indication of availability of a resource. In various embodiments, the predicted action engine may be configured to select one or more predicted actions or resource utilizations through selection of one or more actions or resource utilizations that can be performed with the available resource without slowing down performance of the computing device. In various embodiments, the apparatus and the computing device may be the same device. In various embodiments, the predicted action engine may be configured to be operated by the one or more computer processors to facilitate performance of the one or more selected actions or resource utilizations. In various embodiments, the predicted action engine may be configured to facilitate performance of the one or more selected actions through a load of executable code for the one or more actions that are predicted to occur. In various embodiments, the predicted action engine may be configured to facilitate performance of the one or more selected resource utilizations through caching of data from the resource. In various embodiments, the predicted action engine is configured to facilitate performance of the one or more selected resource utilizations through performance of a data access over a network. In various embodiments, the predicted action engine may be further configured to be operated by the one or more computer processors to receive the one or more probabilities of potential actions or resource utilizations by the first one or more processes. In various embodiments, the predicted action engine may be configured to receive the one or more probabilities of potential actions or resource utilizations by the first one or more processes through receipt of a structure comprising an ordered identification of potential actions or resource utilizations. In various embodiments, the predicted action engine may be configured to receive the structure comprising an ordered identification of potential actions or resource utilizations through receipt of a structure ordered by probability. In various embodiments, the predicted action engine may be configured to receive the structure comprising an ordered identification of potential actions or resource utilizations through receipt of a structure ordered by distance in time from a current event.
In various embodiments, the predicted action engine may be further configured to be operated to add the selected action to the context. Computer-readable media (including non-transitory computer-readable media), methods, systems, and devices for performing the above-described techniques are illustrative examples of embodiments disclosed herein. Additionally, other devices in the above-described interactions may be configured to perform various disclosed techniques. Although certain embodiments have been illustrated and described herein for purposes of description, a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that embodiments described herein be limited only by the claims. Where the disclosure recites "a" or "a first" element or the equivalent thereof, such disclosure includes one or more such elements, neither requiring nor excluding two or more such elements. Further, ordinal indicators (e.g., first, second, or third) for identified elements are used to distinguish between the elements, and do not indicate or imply a required or limited number of such elements, nor do they indicate a particular position or order of such elements unless otherwise specifically stated. |
An apparatus includes a first circuit configured to receive one or more requests from a plurality of cores. Each of the one or more requests is to enter or to exit one of a plurality of power-down modes. The first circuit further selects one or more of the cores to enter or to exit the requested power-down mode or modes based on inrush current information associated with the power-down modes. A second circuit is configured to effect entering or exiting the requested power-down mode or modes in the selected one or more of the cores. |
1. An electronic device, comprising:
a first circuit configured to:
receive one or more requests from a plurality of cores, each of the one or more requests being to enter or to exit one of a plurality of power-down modes,
assign a priority to each of the cores based on a priority of the requested power-down mode, and
select one or more of the cores to enter or to exit the requested power-down mode or modes based on inrush current information associated with entering or exiting the plurality of power-down modes and based on the assigned priorities; and
a second circuit configured to effect entering or exiting the requested power-down mode or modes in the selected one or more of the cores.
2. The electronic device of claim 1, wherein the first circuit is further configured to select one or more additional cores independently of the power-down modes, and wherein the second circuit is further configured to effect entering or exiting the power-down mode or modes requested by the one or more additional cores.
3. The electronic device of claim 2, wherein the first circuit is further configured to select the one or more additional cores independently of the power-down modes, based on priority, and within an inrush current budget.
4. The electronic device of claim 1, wherein the first circuit is further configured to select the one or more of the cores based on an inrush current budget.
5. The electronic device of claim 4, wherein the number of the selected one or more of the cores is based on the inrush current budget.
6. The electronic device of claim 1, wherein the priorities of the power-down modes are programmable.
7. The electronic device of claim 1, wherein the first circuit is further configured to increase the priority of an unselected one of the cores, and to select another one or more of the cores based on the increased priority of the unselected core.
8. The electronic device of claim 1, wherein the inrush current information associated with the power-down modes is programmable.
9. A method of managing power, comprising:
receiving one or more requests from a plurality of cores, each of the one or more requests being to enter or to exit one of a plurality of power-down modes;
assigning a priority to each of the cores based on a priority of the requested power-down mode;
selecting one or more of the cores to enter or to exit the requested power-down mode or modes based on inrush current information associated with entering or exiting the plurality of power-down modes and based on the assigned priorities; and
entering or exiting the requested power-down mode or modes in the selected one or more of the cores.
10. The method of claim 9, further comprising:
selecting one or more additional cores independently of the power-down modes; and
entering or exiting the power-down mode or modes requested by the one or more additional cores.
11. The method of claim 10, wherein the selection of the one or more additional cores is based on priority, independent of the power-down modes, and within an inrush current budget.
12. The method of claim 9, wherein selecting the one or more of the cores is based on an inrush current budget.
13. The method of claim 12, wherein the number of the selected one or more of the cores is based on the inrush current budget.
14. The method of claim 9, wherein the priorities of the power-down modes are programmable.
15. The method of claim 9, further comprising:
increasing the priority of an unselected one of the cores; and
selecting another one or more of the cores based on the increased priority of the unselected core.
16. The method of claim 9, wherein the inrush current information associated with the power-down modes is programmable.
17. An electronic device, comprising:
a first circuit configured to:
receive a plurality of requests from a plurality of cores, each of the plurality of requests being to enter or to exit one of a plurality of power-down modes,
assign a priority to each of the cores based on a priority of the requested power-down mode, and
concurrently select ones of the cores to enter or to exit different power-down modes based on inrush current information associated with entering or exiting the plurality of power-down modes and based on the assigned priorities; and
a second circuit configured to effect entering or exiting the requested power-down modes in the selected ones of the cores.
18. The electronic device of claim 17, wherein the first circuit is further configured to select the ones of the cores to enter or to exit the different power-down modes based on an inrush current budget.
19. A method of managing power, comprising:
receiving a plurality of requests from a plurality of cores, each of the plurality of requests being to enter or to exit one of a plurality of power-down modes;
assigning a priority to each of the cores based on a priority of the requested power-down mode;
concurrently selecting ones of the cores to enter or to exit different power-down modes based on inrush current information associated with entering or exiting the plurality of power-down modes and based on the assigned priorities; and
entering or exiting the requested power-down modes in the selected ones of the cores.
20. The method of claim 19, wherein concurrently selecting the ones of the cores to enter or to exit the different power-down modes is based on an inrush current budget. |
Electronic device and method of managing power

Priority claim

This patent application claims priority to Indian patent application number 4955/CHE/2015, entitled "MANAGING POWER-DOWN MODES" and filed on September 16, 2015, and to US Patent Application No. 15/010,237, entitled "MANAGING POWER-DOWN MODES" and filed on January 29, 2016, both of which are assigned to the assignee of this application and are expressly incorporated herein by reference in their entirety.

Background

Field

The present disclosure relates generally to electronic devices, and more particularly to methods and apparatus for managing multiple cores entering or exiting power-down modes.

A typical electronic device, such as a processor in a wireless device, may include various cores operating in different power domains. A core can range from a collection of transistors or circuits to an execution unit. Increasingly, cores may enter or exit power-down modes at various times to manage power consumption. Each power-down mode is different and can include a power-collapse mode, in which all power is disconnected from the core. Other power-down modes may include gating the clock of the core (for example, disabling clocking in the core). Still other power-down modes may include adjusting the operating voltage and frequency of the core. Although entering and exiting power-down modes can save power, such changes may introduce various drawbacks. One design challenge is managing multiple cores entering or exiting power-down modes while mitigating those drawbacks.

Overview

Various aspects of an apparatus are provided. The apparatus includes a first circuit configured to receive one or more requests from a plurality of cores. Each of the one or more requests is to enter or to exit one of a plurality of power-down modes. The first circuit further selects one or more of the cores to enter or to exit the requested power-down mode or modes based on inrush current information associated with the power-down modes. A second circuit is configured to effect entering or exiting the requested power-down mode or modes in the selected one or more of the cores.

Aspects of a method of managing power are provided. The method includes receiving one or more requests from a plurality of cores. Each of the one or more requests is to enter or to exit one of a plurality of power-down modes. The method further includes selecting one or more of the cores to enter or to exit the requested power-down mode or modes based on inrush current information associated with the power-down modes, and entering or exiting the requested power-down mode or modes in the selected one or more of the cores.

Various aspects of another apparatus are provided. The apparatus includes a first circuit configured to receive a plurality of requests from a plurality of cores. Each of the requests is to enter or to exit one of a plurality of power-down modes. The first circuit further selects ones of the cores to enter or to exit different power-down modes. A second circuit is configured to effect entering or exiting the requested power-down modes in the selected ones of the cores.

Aspects of another method of managing power are provided. The method includes receiving a plurality of requests from a plurality of cores. Each of the requests is to enter or to exit one of a plurality of power-down modes.
The method further includes selecting ones of the cores to enter or to exit the requested power-down modes, and entering or exiting the requested power-down modes in the selected ones of the cores.

It should be understood that other aspects of the apparatus and methods will become readily apparent to those skilled in the art from the following detailed description, in which various aspects of the apparatus and methods are shown and described by way of illustration. As will be realized, these aspects may be implemented in other and different forms and their several details may be modified in various other respects. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.

Brief description of the drawings

Various aspects of the apparatus and methods will now be presented in the detailed description by way of example, and not by way of limitation, with reference to the accompanying drawings, in which:

Figure 1 is a block diagram of an exemplary embodiment of a processor configured to manage multiple cores entering or exiting power-down modes based on inrush current information.

Figure 2 is a block diagram of an exemplary embodiment of the power manager of Figure 1 operating as a token manager.

Figure 3 is a sequence diagram of operations for requesting and granting tokens.

Figure 4 is a flowchart of operations of an exemplary embodiment of a power manager control selecting one or more cores to be granted a request.

Figure 5 is a block diagram of an exemplary embodiment of a power manager control.

Detailed description

The detailed description set forth below in connection with the accompanying drawings is intended as a description of various exemplary embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present invention. Acronyms and other descriptive terminology may be used merely for convenience and clarity and are not intended to limit the scope of the invention.

The various apparatus and methods presented throughout this disclosure may be implemented in various forms of hardware. As an example, any of these apparatus or methods, alone or in combination, may be implemented as an integrated circuit or as part of an integrated circuit. The integrated circuit may be an end product, such as a microprocessor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), programmable logic, or any other suitable integrated circuit. Alternatively, the integrated circuit may be integrated with other chips, discrete circuit elements, and/or other components as part of an intermediate product (such as a motherboard) or an end product. The end product can be any suitable product that includes integrated circuits, including, by way of example, cellular phones, personal digital assistants (PDAs), laptop computers, desktop computers (PCs), computer peripherals, multimedia devices, video devices, audio devices, global positioning system (GPS) devices, wireless sensors, or any other suitable device.

The word "exemplary" is used herein to mean serving as an example, instance, or illustration.
Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Likewise, the term "embodiment" of an apparatus or method does not require that all embodiments of the invention include the described components, structures, features, functionality, processes, advantages, benefits, or modes of operation.

When the term "signal" is used, the term may include a conductor that carries the desired signal. The term "connection" may include a signal line. The terms "connected", "coupled", or any variant thereof mean any connection or coupling, either direct or indirect, between two or more elements, and may encompass the presence of one or more intermediate elements between two elements that are "connected" or "coupled" together. The coupling or connection between the elements may be physical, logical, or a combination thereof. As used herein, two elements may be considered to be "connected" or "coupled" together, as several non-limiting and non-exhaustive examples, by the use of one or more wires, cables, and/or printed electrical connections, as well as by the use of electromagnetic energy, such as electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and the optical (both visible and invisible) region.

Any reference herein to an element using a designation such as "first" or "second" generally does not limit the quantity or order of those elements. Rather, these designations are used as a convenient way of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed, or that the first element must precede the second element.

As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes", and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

Various aspects of an apparatus having circuitry for managing multiple cores entering or exiting power-down modes are provided. An example of such an apparatus is a processor for wireless communication applications. In some examples, the apparatus may include a power management circuit configured to select cores for entering or exiting power-down modes based on inrush current information. In some examples, the power management circuit is a token manager configured to receive, from the cores, requests to enter or exit power-down modes and to issue tokens to the selected cores to grant the requests.

As those skilled in the art will readily appreciate, the aspects and applications of the present disclosure may not be limited to the described exemplary embodiments. For example, the apparatus of the present disclosure is not limited to a processor, and the power management circuit is not limited to a token manager. Accordingly, all references to a specific application are intended only to illustrate exemplary aspects, with the understanding that those aspects may have a wide range of applications.
FIG. 1 is a block diagram of an exemplary embodiment of a processor 100 configured to manage multiple cores entering or exiting power-down modes based on inrush current information. The processor 100 may be, for example, a processor for wireless communication. In some examples, an exemplary apparatus may include the processor 100 or a cellular phone incorporating the processor 100. The processor 100 may be a stand-alone processor or may be integrated into an end product, such as a mobile phone, a desktop computer, a laptop computer, a tablet computer, or the like.

The processor 100 includes cores 110 (110-1, 110-2, 110-3, and 110-4). A core may be, for example, a collection of circuits. In some examples, a core 110 may be a processor or an execution unit that executes instructions. The processor 100 may further include additional functional blocks (not shown for clarity), such as a graphics processor unit, a digital signal processor (DSP), a wireless modem, and a wireless local area network (WLAN) block, that interface with the cores 110.

Each of the cores 110 may include a power-down mode circuit 104 (104-1, 104-2, 104-3, and 104-4, one for each of the cores 110). The power-down mode circuit 104 can effect the corresponding core 110 entering or exiting the various power-down modes. Thus, the power-down mode circuit 104 can cause the corresponding core 110 to power up from a power-down mode, to power down into a power-down mode, or to change among the various power-down modes.

Examples of the power-down modes may include a power-collapse mode, in which all power is disconnected from the core. Accordingly, the power-collapse mode may draw no current because all power is disconnected. Other power-down modes may include a clock gating mode, which disables clocking in the core. Still other power-down modes may include adjusting the operating voltage and frequency of the core. Entering and exiting the various power-down modes can take different amounts of time. For example, entering and exiting the power-collapse mode can take more cycles than the other power-down modes.

The power-down mode circuit 104 (as explained) can effect the corresponding core 110 entering the power-collapse mode or the clock gating mode (for example, powering down the core 110). The power-down mode circuit 104 can similarly effect the core 110 exiting a power-down mode and returning to full-power operation (for example, powering up the core 110). In some examples, the power-down mode circuit 104 may effect the corresponding core 110 transitioning between the power-collapse mode and the clock gating mode.

The processor 100 further includes a power manager 105, an inrush current information storage 120, and a power-down mode priority storage 122. The power manager 105 may be configured to selectively control the power-down mode circuits 104 to select among the cores 110 for entering or exiting power-down modes. In some examples, the power manager 105 may include a processor (such as one of the cores 110) executing software instructions. The power manager 105 may select among the cores 110 based on the inrush current information stored in the inrush current information storage 120 and/or the power-down mode priorities stored in the power-down mode priority storage 122.
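The two storages can be pictured as small programmable tables keyed by power-down mode. The sketch below shows plausible data shapes only; the mode names and milliampere values are invented examples, not values from the disclosure.

INRUSH_MA = {                 # inrush current information storage 120 (sketch)
    "power_collapse": {"enter": 40, "exit": 120},
    "clock_gating":   {"enter": 5,  "exit": 15},
}

MODE_PRIORITY = {             # power-down mode priority storage 122 (sketch);
    "power_collapse": 0,      # longest to enter/exit, so lowest priority
    "clock_gating":   2,
}

def inrush_cost(mode, direction):
    return INRUSH_MA[mode][direction]

print(inrush_cost("power_collapse", "exit"))   # -> 120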
In some examples, the stored inrush current information and/or power-down mode priorities may be programmable (for example, changed by software instructions). The inrush current information storage 120 may be, for example, a register that stores inrush current information, including the inrush currents caused by entering or exiting the various power-down modes. The process of entering and exiting the various power-down modes may cause the inrush current in a core 110 to peak, even to a point that exceeds the capacity of the current supply to the cores 110. By utilizing the inrush current information associated with entering and exiting the various power-down modes, the power manager 105 can determine the number and order of the cores 110 to be selected so that the cores efficiently enter or exit the power-down modes without causing excessive inrush current.

The power-down mode priority storage 122 may be, for example, a register that stores the power-down mode priorities. A priority may be based, for example, on the time required to enter or exit the power-down mode. For example, the power-collapse mode may take the longest time to enter or exit, and therefore the power-collapse mode may have the lowest priority. In some examples, the priority may be based on the power saved by the power-down mode.

The power manager 105 may further receive an inrush current budget and select the cores 110 based on the inrush current budget. The inrush current budget can be based on the current limit of the power supply (e.g., a power management integrated circuit, or PMIC). In some examples, the inrush current budget may be further based on the current operation of the cores, even for cores that are not requesting to enter or exit a power-down mode. For example, where some cores are operating in a high-performance mode (and thus consuming more power), the inrush current budget can be reduced.

FIG. 2 is a block diagram of an exemplary embodiment of the power manager 105 of FIG. 1 operating as a token manager. A token is signaling that indicates approval or permission to enter or exit a power-down mode. The power manager 105 includes a power manager control 205 configured to receive, from the cores 110 (cores 110-1 through 110-4), one or more requests to enter or exit power-down modes. Each of the one or more requests indicates that one of the cores wishes to enter or exit one of the power-down modes.

Multiple cores 110 may send requests to enter or exit power-down modes independently and in parallel. For example, core 110-1 may request to power up from a power-down mode, core 110-2 may request to power down from full-power operation to the clock gating mode, core 110-3 may request a transition from the power-collapse mode to the clock gating mode, and so on. All of these requests can occur at the same time. As described above, executing all requests to enter or exit power-down modes in the multiple cores 110 at once can cause the inrush current to spike. Accordingly, the power manager control 205 may be further configured to select one or more cores 110 to be granted tokens such that the inrush current does not spike beyond an inrush current threshold (e.g., the inrush current budget). Additional features of the selection process are shown in FIGS. 4-5.

The process of requesting and granting tokens is described below. The power manager 105 (e.g., the power manager control 205) communicates with the cores 110 using the signaling REQ 206 (206-1 through 206-4, one for each of the cores 110) and the signaling ACK 207 (207-1 through 207-4, one for each of the cores 110).
To request permission to enter or exit a power-down mode, each of the cores 110 may, independently and in parallel, request a token from the power manager 105 by asserting the signaling REQ 206. In some examples, the signaling REQ 206 may be carried by multiple signal lines to indicate the desired action and the desired resulting power-down mode (e.g., a desire to exit from the current power-down mode to full-power operation, a desire to enter a power-down mode, the identity of the desired power-down mode, etc.).

The power manager 105 (e.g., the power manager control 205) receives the signaling REQ 206. For example, the signaling REQ 206 is provided as an input to logic gates or components within the power manager control 205. To grant a token, the power manager 105 asserts the signaling ACK 207 (207-1 through 207-4, one for each of the cores 110). In response to the assertion, the power-down mode circuit 104 of the selected core 110 effects the requested power-down action (e.g., entering or exiting the power-down mode). Upon completion of the requested power-down action, the power-down mode circuit 104 de-asserts the signaling REQ 206 (to terminate the request) and de-asserts the signaling ACK 207 (to indicate completion of the requested power-down action).

The power manager 105 further includes a token register 230 and a core status register 240 to manage requests and tokens. The token register 230 includes a plurality of bits (231 through 234), each bit corresponding to one of the cores 110. Bits 231-234 indicate whether the request for a token from the corresponding core 110 is active. For example, bit 231 stores the value "1" to indicate that the corresponding core 110-1 is requesting a token (e.g., the signaling REQ 206-1 is asserted). Bits 233 and 234 store the value "0" to indicate that the corresponding cores 110-3 and 110-4 are not requesting tokens (e.g., the signaling REQ 206-3 and 206-4 are de-asserted).

The core status register 240 stores the current power mode of each core. The core status register 240 includes bits 240-1 through 240-4, each corresponding to one of the cores 110. For example, bit 240-1 stores the value C0, indicating that core 110-1 is in the full-power operating state. Bit 240-2 stores the value C1, indicating that core 110-2 is in a power-down mode. Bit 240-3 stores the value C2, indicating that core 110-3 is in the clock gating mode, and so on. Via the token register 230 and the core status register 240, the power manager 105 can keep track of the current status of token requests and the state of each of the cores 110 (e.g., the full-power operating state or one of the power-down modes).

FIG. 3 is a sequence diagram of operations for requesting and granting tokens. At T0, one or more cores 110 request a token to enter or exit a power-down mode by asserting (e.g., pulling high) the signaling REQ 206. At T1, the power manager 105 (e.g., the power manager control 205) issues a token to the selected one or more cores 110 by asserting (e.g., pulling high) the signaling ACK 207. In response, the power-down mode circuit 104 of the selected one or more cores effects the requested power-down mode operation. For example, the power-down mode circuit 104 causes the corresponding core 110 to power up to the full-power operating state, to power down from the full-power operating state to one of the power-down modes, or to transition among the various power-down modes (at point A). At point B, the requested power-down mode operation is completed.
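The token register 230 and core status register 240 bookkeeping can be mirrored with one request bit per core and a per-core state code. A small sketch follows, with the bit layout assumed for illustration.

NUM_CORES = 4
token_register = 0b0011          # cores 110-1 and 110-2 requesting (bits 231, 232)
core_status = ["C0", "C1", "C2", "C0"]   # C0 = full power; C1, C2 = power-down modes

def request_active(core):
    return bool(token_register & (1 << core))

for core in range(NUM_CORES):
    print(f"core {core}: state={core_status[core]}, requesting={request_active(core)}")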
In response, the power-down mode circuit 104 notifies the power manager 105 of the release of the token by de-asserting (e.g., pulling low) both the signaling REQ 206 and the signaling ACK 207.

FIG. 4 is a flowchart of operations of an exemplary embodiment of the power manager control 205 selecting one or more cores 110 to be granted a request. In some examples, the operations may be performed by the power manager control 205. At 402, one or more requests from multiple cores are received. In some examples, the power manager control 205 receives the requests via the signaling REQ 206 provided as an input to logic gates or components within the power manager control 205. In some examples, each of the one or more requests is a request to enter or exit one of the multiple power-down modes. Referring to FIG. 2, for example, the power manager control 205 receives token requests from the cores 110-1 and 110-2 by receiving the asserted state of the signaling REQ 206-1 and the signaling REQ 206-2. The power manager control 205 then records the requests in the token register 230 by storing "1" in bits 231 and 232, indicating that the cores 110-1 and 110-2, respectively, are requesting to enter or exit at least one of the power-down modes.

Each power-down mode is different and can include a power-collapse mode, in which all power is disconnected from the core. Other power-down modes may include gating the clock of the core (for example, disabling clocking in the core). Still other power-down modes may include adjusting the operating voltage and frequency of the core. The request to enter or exit a power-down mode may include, for example, powering up the requesting core by exiting one of the power-down modes to the full-power operating state, or powering down the requesting core by entering one of the power-down modes from the full-power operating state. Such requests can also include transitions among the various power-down modes.

At 403, a priority is assigned to each core. Referring to FIG. 1, for example, the assignment may be based on the power-down mode requested by the core 110 and the power-down mode priorities stored in the power-down mode priority storage 122. A priority may be based, for example, on the time required to enter or exit the power-down mode. For example, the power-collapse mode may take the longest time to enter or exit, and therefore the power-collapse mode may have the lowest priority. Referring to FIG. 2, for example, the power manager control 205 may assign a priority to a core 110 based on the state of the core stored in the core status register 240 and the power-down mode indicated by the signaling REQ 206.

At 404, one or more additional cores are selected. The additional core or cores are selected on a fast-tracked basis and are in addition to the cores selected at 408. In some examples, in order to grant requests to enter or exit power-down modes quickly, the power manager control 205 can fast-track the selection of cores 110 independently of the power-down mode. In some examples, the power manager control 205 may select these cores based on a predetermined priority rather than based on the power-down mode. For example, the predetermined priority may be a fixed order of core 110-1, core 110-2, core 110-3, and core 110-4. In some examples, the fast-tracked selection is kept within the inrush current budget by selecting only one core. The inrush current budget can be based on the current limit of the power supply (e.g., PMIC).
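The REQ/ACK exchange of FIG. 3 can be traced for a single core as below. The dictionary representation and the always-grant policy are simplifications for illustration; a real power manager would apply the budget checks described next.

core = {"REQ": 0, "ACK": 0}

def power_manager_grants(c):
    return True                  # stand-in; real logic checks the inrush budget

core["REQ"] = 1                  # T0: core asserts REQ to request a token
if power_manager_grants(core):
    core["ACK"] = 1              # T1: manager asserts ACK, issuing the token
    print("performing requested power-down action")   # between points A and B
    core["REQ"] = 0              # on completion, both signals are de-asserted
    core["ACK"] = 0
print(core)                      # -> {'REQ': 0, 'ACK': 0}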
In some examples, the inrush current budget may be further based on the current operation of the cores, even for cores that are not requesting to enter or exit a power-down mode. For example, where some cores are operating in a high-performance mode (and thus consuming more power), the inrush current budget can be reduced.

At 408, one or more cores are selected to enter or exit the requested power-down mode or modes. In some examples, the selection may be based on the priorities assigned at operation 403. In some examples, the selection may be further based on the inrush current information associated with the power-down modes. For example, the inrush current of the highest-priority requests is compared with the inrush current budget to allow the maximum selection of cores without exceeding the budget. In this way, the number of selected cores can be determined.

For example, the selection of the cores 110 allows the maximum number of requests from the highest-priority cores 110 (e.g., the cores 110 whose requests to enter or exit a power-down mode have the highest priority) to be granted within the inrush current budget. This selection can be made using the inrush current information of the highest-priority cores 110. The remaining inrush current budget can be utilized by selecting lower-priority cores 110 whose required inrush currents fit within the remaining inrush current budget (e.g., cores with lower-priority requests to enter or exit power-down modes). In this way, one or more cores 110 requesting to enter or exit a first power-down mode and one or more cores 110 requesting to enter or exit a second, different power-down mode can be selected concurrently. In some examples, "concurrently" may refer to selecting and/or granting requests to enter or exit different power-down modes at substantially the same time. In some examples, "concurrently" may refer to selecting and/or granting requests to enter or exit different power-down modes with a substantial, non-zero overlap, as understood by those of ordinary skill in the art.

At 410, the selected one or more cores enter or exit the requested power-down mode or modes. For example, referring to FIGS. 1 and 2, the power-down mode circuits 104 of the selected cores 110 (e.g., selected at operations 404 and 408) may cause the selected cores 110 to enter the requested power-down modes or exit their current power-down modes.

At 412, the priority of the unselected cores is increased. In some examples, when a selected core finishes entering or exiting a power-down mode, the inrush current budget can be increased. In response, the power manager control 205 may return to 408 (via operation 413) to select another one or more cores to be granted their requests. Increasing the priority of the unselected cores (which may have had a lower priority initially) prevents starvation of those cores.

FIG. 5 is a block diagram of an exemplary embodiment of the power manager control 205. The block diagram may represent a hardware implementation of the power manager control 205 and may include various (e.g., hardware and/or software) components. In some examples, the components described below may include instructions executed by one of the cores 110-1 through 110-4. In an exemplary embodiment, the power manager control 205 presented below, and the components included therein, may include circuitry, one or more processors, software executing on one or more processors, or a combination thereof.
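Taken together, operations 403-408 suggest a greedy, priority-ordered grant loop bounded by the inrush current budget. The sketch below illustrates that reading; the request tuples, mode names, and numbers are assumptions, not the patent's implementation.

def select_cores(requests, mode_priority, inrush_ma, budget_ma):
    # requests: list of (core_id, mode, direction) tuples
    ordered = sorted(requests, key=lambda r: mode_priority[r[1]], reverse=True)
    granted, remaining = [], budget_ma
    for core, mode, direction in ordered:
        cost = inrush_ma[mode][direction]
        if cost <= remaining:        # grant only if it fits the remaining budget
            granted.append(core)
            remaining -= cost
    return granted

requests = [(0, "clock_gating", "exit"), (1, "power_collapse", "exit"),
            (2, "clock_gating", "enter")]
print(select_cores(requests,
                   {"power_collapse": 0, "clock_gating": 2},
                   {"power_collapse": {"enter": 40, "exit": 120},
                    "clock_gating": {"enter": 5, "exit": 15}},
                   budget_ma=50))
# -> [0, 2]; the power-collapse exit waits until budget frees up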
These components may include circuits that generate signals for the functions described below or signal lines that carry those signals.

As an example, a component, any portion of a component, or any combination of components may be implemented with one or more processors. Examples of such processors include microprocessors, microcontrollers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in a processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.

The power manager control 205 includes a priority assignment component 502, a core selection component 504, and a request processing component 506. The priority assignment component 502 receives the requested power-down mode (enter or exit) and the identity of the requestor core 110 from the request processing component 506. The priority assignment component 502 further receives the power-down mode priorities from the programmable power-down mode priority storage 122. In some examples, the priority assignment component 502 assigns a priority to a core based on the power-down mode priority, which may, for example, be based on the time required to enter or exit the power-down mode (e.g., operation 403). In some examples, the priority assignment component 502 can increase the priority of the unselected cores so as not to starve them (e.g., operation 412).

The core selection component 504 receives the assigned priorities of the requestor cores from the priority assignment component 502. The core selection component 504 also receives the inrush current information and the inrush current budget. In some examples, the inrush current information may include the inrush current consumed in entering or exiting each power-down mode. The inrush current information can be received from the inrush current information storage 120 and can be programmed by software. The inrush current budget may be a limit based on the power supply (e.g., PMIC). The inrush current budget can be further adjusted based on the current operation of the cores 110. For example, some cores 110 may be engaged in current-consuming operations, so that the inrush current budget is reduced.

In some examples, the core selection component 504 takes in all of the requests and attempts to accommodate the maximum number of requests while staying within the inrush current budget. For example, the core selection component 504 can grant the maximum number of requests from the highest-priority cores 110 (e.g., the cores 110 whose requests to enter or exit a power-down mode have the highest priority) within the inrush current budget. Such a determination can be made using the inrush current information of the highest-priority cores 110.
The core selection component 504 can further utilize the remaining inrush current budget by selecting lower priority cores 110 whose required inrush current fits within the remaining budget (e.g., lower priority requests to enter or exit a power down mode). In this way, the core selection component 504 can concurrently select one or more cores 110 requesting to enter or exit a first power down mode and one or more cores 110 requesting to enter or exit a second, different power down mode. Accordingly, the core selection component 504 is configured to select the one or more cores 110 to be granted their power down mode requests based on the inrush current budget, and the number of cores 110 selected is likewise based on the inrush current budget. For example, see operation 408.

In some examples, the core selection component 504 can select requesting cores 110 independently of the power down mode to speed up the selection process. For example, the core selection component 504 may apply priorities that are independent of the power down mode, such as a fixed order of core 110-1, core 110-2, core 110-3, and then core 110-4. To ensure that this power-down-mode-independent selection stays within the inrush current budget, only a limited number (for example, one) of cores 110 may be selected in this way. For example, see operation 404. The core selection component 504 may, in parallel or subsequently, select requestor cores 110 based on the power down mode (operation 408).

The core selection component 504 receives, from the request processing component 506, a notification when one of the selected cores 110 has completed its request. In response, the core selection component 504 can perform the above-described selection over the previously unselected cores 110 and any new requestor cores 110. The previously unselected cores 110 may have an increased priority compared to the previous selection round, so as not to starve these cores 110.

The request processing component 506 interfaces with the cores 110 via the signaling REQ 206 and the signaling ACK 207 to receive requests and grant requests to enter or exit a power down mode. To grant a request, the request processing component 506 receives the core selection from the core selection component 504. Upon completion of a request, the request processing component 506 sends a notification to the core selection component 504.
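The following is a minimal sketch of the budget-constrained selection performed by the core selection component 504 (operations 403, 408, and 412). It is illustrative only; the Request type, its field names, and the +1 priority aging step are assumptions rather than the disclosed hardware.

```python
from dataclasses import dataclass

@dataclass
class Request:
    core: str        # e.g., "110-1"
    priority: int    # higher value = higher priority (assigned per operation 403)
    inrush: float    # inrush current needed to enter or exit the requested mode

def select_cores(requests, budget):
    """Grant as many requests as possible, highest priority first, within budget."""
    granted, remaining = [], budget
    for req in sorted(requests, key=lambda r: r.priority, reverse=True):
        if req.inrush <= remaining:   # the request fits in the remaining budget
            granted.append(req)
            remaining -= req.inrush   # lower-priority requests use what is left
    granted_ids = {id(r) for r in granted}
    for req in requests:              # age unselected requests so they are not
        if id(req) not in granted_ids:
            req.priority += 1         # starved on later rounds (operation 412)
    return granted
```

When a granted core completes and its inrush budget is returned (operation 413), the same selection can simply be rerun over the outstanding requests.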
The specific order or hierarchy of the blocks in the operating methods described above is provided merely as an example. Based on design preferences, the specific order or hierarchy of the blocks in the operating methods can be rearranged, amended, and/or modified. The accompanying method claims include various limitations on the operating methods, but the recited limitations are not meant to be limited in any way to a specific order or hierarchy unless explicitly stated in the claims.

Various aspects of the present disclosure are provided to enable those of ordinary skill in the art to practice the present invention. Various modifications to the exemplary embodiments given throughout this disclosure will be apparent to those skilled in the art, and the concepts disclosed herein can be extended to other magnetic storage devices. Therefore, the claims are not intended to be limited to the aspects of the present disclosure, but are to be accorded a full scope consistent with the language of the claims.

All structural and functional equivalents to the elements of the various exemplary embodiments described throughout this disclosure that are known, or later come to be known, to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112(f) unless the element is expressly recited using the phrase "means for..." or, in the case of a method claim, the element is recited using the phrase "step for..." |
A four current transistor temperature sensor comprises a p-n junction, preferably the base-emitter junction of a bipolar transistor, which is driven with four different currents in a predetermined sequence. Each of the four currents induces a respective base-emitter voltage, which is measured. The temperature of the transistor is calculated based on the values of the four driving currents and the four measured base-emitter voltages. The four driving currents (I1, I2, I3 and I4) are preferably arranged such that I1=2*I3, I2=2*I4, I1/I2=A and I3/I4=A, where A is a predetermined current ratio. I1 and I2 produce respective base-emitter voltages which are subtracted from each other to produce [Delta]Vbe1, and I3 and I4 produce respective base-emitter voltages which are subtracted from each other to produce [Delta]Vbe2. When so arranged, the difference between [Delta]Vbe1 and [Delta]Vbe2 is entirely due to the effect of series base and emitter resistances rb and re. Therefore, the [Delta]Vbe1-[Delta]Vbe2 value provides a correction factor which enables temperature measurement errors due to rb and re to be eliminated. |
We claim: 1. A transistor temperature sensing system, comprising: a p-n junction, said p-n junction comprising the base-emitter junction of a bipolar transistor, at least one current source arranged to provide four different currents to said junction in a predetermined sequence, said junction and said at least one current source arranged such that each of said four currents induces a respective voltage across said junction, and a voltage measurement means connected to measure each of said induced junction voltages, wherein said four different currents are I1, I2, I3 and I4, respectively, and wherein I1=n*I3, I2=n*I4, I1/I2=A and I3/I4=A, where A is a predetermined current ratio. 2. The temperature sensing system of claim 1, wherein said voltage measurement means produces voltage measurements Vbe1-Vbe2 and Vbe3-Vbe4 when currents I1, I2, I3 and I4 are provided to said transistor in sequence, further comprising a processor arranged to calculate the temperature T of said transistor in accordance with: T=(q*[Delta]Vbe)/[k*ln(I1/I2)] where q is the electron charge, k is Boltzmann's constant, and [Delta]Vbe=[Delta]Vbe1-[(n/(n-1))*([Delta]Vbe1-[Delta]Vbe2)] and [Delta]Vbe=[Delta]Vbe2-[(1/(n-1))*([Delta]Vbe1-[Delta]Vbe2)], in which [Delta]Vbe1=Vbe1-Vbe2 and [Delta]Vbe2=Vbe3-Vbe4. 3. The temperature sensing system of claim 2, wherein said voltage measurement means comprises: a signal conditioning circuit which receives Vbe1 and Vbe2 and produces [Delta]Vbe1, and which receives Vbe3 and Vbe4 and produces [Delta]Vbe2, and an analog-to-digital converter (ADC) having an input which receives [Delta]Vbe1 and [Delta]Vbe2 from said signal conditioning circuit, converts [Delta]Vbe1 and [Delta]Vbe2 to respective digital values, and provides said digital values to said processor. 4. The temperature sensing system of claim 3, wherein said processor comprises at least two registers and a subtractor, said processor arranged to control said at least one current source to provide said four current values to said transistor, to receive said digital values from said ADC, and to calculate [Delta]Vbe1-[Delta]Vbe2, and [Delta]Vbe1-[(n/(n-1))*([Delta]Vbe1-[Delta]Vbe2)] and/or [Delta]Vbe2-[(1/(n-1))*([Delta]Vbe1-[Delta]Vbe2)]. 5. A transistor temperature sensing system, comprising: a p-n junction, at least one current source arranged to provide four different currents to said junction in a predetermined sequence, said junction and said at least one current source arranged such that each of said four currents induces a respective voltage across said junction, and a voltage measurement means connected to measure each of said induced junction voltages, wherein said at least one current source comprises a current mirror having an input transistor and two output transistors, the outputs of said output transistors connected to provide two of said four different currents when said input transistor is driven with a first current and the other two of said four different currents when said input transistor is driven with a second current. 6.
A transistor temperature sensing system, comprising: a bipolar transistor, at least one current source arranged to provide currents I1, I2, I3 and I4 to said transistor in a predetermined sequence, wherein I1=2*I3, I2=2*I4, I1/I2=A and I3/I4=A, where A is a predetermined current ratio, said transistor and said at least one current source arranged such that each of I1, I2, I3 and I4 induces a respective voltage between said transistor's base and emitter, a voltage measurement means connected to produce voltage measurements Vbe1-Vbe2 and Vbe3-Vbe4 when currents I1, I2, I3 and I4 are provided to said transistor in sequence, and a processor arranged to calculate the temperature T of said transistor in accordance with: T=(q*[Delta]Vbe)/[k*ln(I1/I2)] where q is the electron charge, k is Boltzmann's constant, and [Delta]Vbe=[Delta]Vbe1-[2*([Delta]Vbe1-[Delta]Vbe2)] and [Delta]Vbe=[Delta]Vbe2-([Delta]Vbe1-[Delta]Vbe2), in which [Delta]Vbe1=Vbe1-Vbe2 and [Delta]Vbe2=Vbe3-Vbe4. 7. The temperature sensing system of claim 6, wherein said voltage measurement means comprises: a signal conditioning circuit which receives Vbe1 and Vbe2 and produces [Delta]Vbe1, and which receives Vbe3 and Vbe4 and produces [Delta]Vbe2, and an analog-to-digital converter (ADC) having an input which receives [Delta]Vbe1 and [Delta]Vbe2 from said signal conditioning circuit, converts [Delta]Vbe1 and [Delta]Vbe2 to respective digital values, and provides said digital values to said processor. 8. The temperature sensing system of claim 7, wherein said processor comprises first and second registers and a subtractor, said processor arranged to control said at least one current source to provide said four current values to said transistor, to receive said digital values from said ADC, and to calculate [Delta]Vbe1-[Delta]Vbe2, and [Delta]Vbe1-[2*([Delta]Vbe1-[Delta]Vbe2)] and/or [Delta]Vbe2-([Delta]Vbe1-[Delta]Vbe2). 9. The temperature sensing system of claim 8, wherein said at least one current source comprises a current mirror having an input transistor and two output transistors, the outputs of said output transistors connected to provide two of said four different currents when said input transistor is driven with a first current and the other two of said four different currents when said input transistor is driven with a second current, said processor further arranged to control the current provided to said input transistor. 10. A temperature sensing method, comprising: forcing currents I1, I2, I3 and I4 through a p-n junction in sequence, wherein I1=n*I3, I2=n*I4, I1/I2=A and I3/I4=A, where A is a predetermined current ratio, such that I1, I2, I3 and I4 produce respective voltages V1, V2, V3 and V4 across said junction, measuring V1, V2, V3 and V4, and determining the temperature T of said junction in accordance with: T=(q*[Delta]Vbe)/[k*ln(I1/I2)] where q is the electron charge, k is Boltzmann's constant, and [Delta]Vbe=[Delta]Vbe1-[(n/(n-1))*([Delta]Vbe1-[Delta]Vbe2)] and [Delta]Vbe=[Delta]Vbe2-[(1/(n-1))*([Delta]Vbe1-[Delta]Vbe2)], in which [Delta]Vbe1=V1-V2 and [Delta]Vbe2=V3-V4. 11. The method of claim 10, wherein said p-n junction comprises a bipolar transistor and said currents I1, I2, I3 and I4 produce respective voltages V1, V2, V3 and V4 across said transistor's base-emitter junction. 12. The method of claim 10, wherein n=2 and [Delta]Vbe=[Delta]Vbe1-[2*([Delta]Vbe1-[Delta]Vbe2)] and [Delta]Vbe=[Delta]Vbe2-([Delta]Vbe1-[Delta]Vbe2). |
BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to the field of transistor temperature sensors, and particularly to methods of reducing measurement errors due to intrinsic base and emitter resistances in such sensors.

2. Description of the Related Art

Numerous circuit devices, such as transistors, diodes and resistors, have operating characteristics that are temperature dependent. Because of their temperature dependencies, such devices are extensively used as temperature sensors. For example, germanium and silicon diodes can be operated at a constant forward-biased current, and the resulting forward-biased voltage measured to determine the temperature in accordance with the standard forward-biased diode equation: V=kT/q ln(I/Is), where V is the forward-biased voltage, k is Boltzmann's constant, q is the electron charge, T is the absolute temperature in degrees Kelvin, I is the forward-biased current and Is is the diode's saturation current.

In practice, the measurement of temperature with a diode is subject to several inaccuracies. The precise voltage-temperature relationship depends upon the actual details of the junction, notably the doping densities on either side of the junction, the dopant profiles and the junction area, as well as secondary considerations such as bulk and surface defects in the material. These factors are difficult to quantify with certainty, and many of the parameters in the device equations (such as mobility) are themselves temperature-dependent. Other effects, such as conductivity modulation and series resistances, can also complicate the device's behavior.

Another approach employs two separate junctions which are fabricated on the same substrate, but which are operated at different current densities. This eliminates the effects of variations in doping levels and in the value of the bandgap voltage. The dual junction approach has been implemented with a pair of bipolar transistors whose emitter areas are in the ratio A. The difference in collector current densities gives rise to a difference in the base-emitter voltages for the two transistors. The relationship between the base-emitter voltage differential ([Delta]Vbe) and the device temperature is given by the expression: [Delta]Vbe=kT/q ln A.

While this approach offers significant advantages over the single junction temperature measurement, it still has some limitations. There is a certain amount of tolerance in the transistor fabrication, which introduces an ambiguity into the emitter area ratio. Furthermore, the accuracy of the equation is reduced by ohmic resistances associated with the junction, specifically the base resistance rb and the emitter resistance re. The base and emitter resistances may be considered to include both the intrinsic resistances inherent in the device, and the resistances associated with connecting lines. Calibration of such a sensor is required for most applications, and the fact that at least a pair of junctions are required introduces the possibility that differential strain across the substrate could result in poor tracking of junction voltages, with a consequent error in the small [Delta]Vbe voltage.

Another technique is described in U.S. Pat. No. 5,195,827 to Audy et al. Here, a single bipolar transistor is sequentially driven with three different currents, inducing three base-emitter voltages which are measured and used to calculate temperature. This approach also has significant shortcomings, however.
Using three currents requires that the ratios between the currents be kept small, in order to avoid heating up the sensing transistor and thereby introducing error into the temperature measurement. Also, the calculations necessitated by a three-current approach are likely to require non-integer math, which can be difficult and/or impractical to implement.

SUMMARY OF THE INVENTION

A four current transistor temperature sensor and method are presented which overcome the problems noted above. The invention allows the use of large current ratios and simple temperature calculations, while still reducing or eliminating intrinsic base and emitter resistance errors.

A p-n junction, preferably the base-emitter junction of a bipolar transistor, is driven with four different currents in a predetermined sequence. Each of the four currents induces a respective base-emitter voltage, which is measured. The temperature of the transistor is calculated based on the values of the four driving currents and the four measured base-emitter voltages.

In a preferred embodiment, the four driving currents (I1, I2, I3 and I4) are arranged such that I1=n*I3, I2=n*I4, I1/I2=A and I3/I4=A, where A is a predetermined current ratio. In operation, I1 and I2 produce respective base-emitter voltages which are subtracted from each other to produce [Delta]Vbe1, and I3 and I4 produce respective base-emitter voltages which are subtracted from each other to produce [Delta]Vbe2. When so arranged, the difference between [Delta]Vbe1 and [Delta]Vbe2 is entirely due to the effect of series base and emitter resistances rb and re. The [Delta]Vbe1-[Delta]Vbe2 value thus provides a correction factor which enables temperature measurement errors due to rb and re to be eliminated. This arrangement also allows the use of large current ratios, and greatly simplifies the calculations required to determine temperature T.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram illustrating the basic principles of a transistor temperature sensor per the present invention.
FIG. 2 is a schematic diagram of a preferred current source arrangement for the present invention.
FIG. 3 is a schematic diagram showing a preferred current source implementation for the present invention.
FIG. 4 is a block/schematic diagram illustrating a temperature measurement system which employs the present invention.

DETAILED DESCRIPTION OF THE INVENTION

A four current transistor temperature sensor per the present invention is shown in FIG. 1. A p-n junction 10 is employed as a temperature sensor. P-n junction 10 is preferably a bipolar transistor Qs, but other junction devices, such as a junction diode or Schottky diode, could also be used.

As indicated in FIG. 1, bipolar transistor Qs has an associated series base resistance rb and a series emitter resistance re. These resistances may be due to the intrinsic properties of the p-n junction itself, as well as lead and connection resistances. As noted above, these resistances often degrade the accuracy of prior art temperature sensors which employ a p-n junction as a sensing element.

The present sensor is arranged to provide four different currents through p-n junction 10, each of which induces a respective voltage across the junction.
As shown in FIG. 1, a current source 12 is arranged to provide currents I1, I2, I3 and I4 to sensor transistor Qs in a predetermined sequence, which induces voltages Vbe1, Vbe2, Vbe3 and Vbe4, respectively, across the transistor's base-emitter junction.

Temperature can be determined by calculating the difference between the base-emitter voltages induced by two different currents. By measuring Vbe1 with current I1 applied to transistor Qs, and measuring Vbe2 with current I2 applied, the difference [Delta]Vbe1 between Vbe1 and Vbe2 is given by:

[Delta]Vbe1=(kT/q)*ln[(I1/(1+(1/[beta])))/Is]+I1*re+(I1/[beta])*rb-(kT/q)*ln[(I2/(1+(1/[beta])))/Is]-I2*re-(I2/[beta])*rb volts (Eq. 1)

where k is Boltzmann's constant, q is the electron charge, T is the absolute temperature in degrees Kelvin, Is is the transistor's saturation current, and [beta] is the transistor's gain. This simplifies to:

[Delta]Vbe1=kT/q ln(I1/I2)+(I1-I2)*(re+(rb/[beta])) (Eq. 2)

Similarly, Vbe3 and Vbe4 are measured with currents I3 and I4 applied, respectively, with the difference [Delta]Vbe2 between Vbe3 and Vbe4 given by:

[Delta]Vbe2=kT/q ln(I3/I4)+(I3-I4)*(re+(rb/[beta])) (Eq. 3)

If rb, re and [beta] are known, either of equations 2 and 3 could be used to determine the temperature of transistor Qs. With each expression requiring the application of only two currents, the I1/I2 or I3/I4 ratios can be larger than would be permissible with a three-current scheme, and still not unacceptably heat the transistor. The larger current ratios provide a larger [Delta]Vbe, which tends to increase the accuracy of the measurement.

However, transistor parameters rb, re and [beta] can be difficult to ascertain, and may vary from transistor to transistor. These problems are overcome when the invention is arranged in accordance with the preferred embodiment shown in FIG. 2. Here, the ratios between currents I1 and I2, and between I3 and I4, are equal to a common value "A". In addition, currents I1 and I2 are made equal to n*I3 and n*I4, respectively. When so arranged, the difference between [Delta]Vbe1 and [Delta]Vbe2 is entirely due to the effect of parasitic resistances rb and re. This arrangement results in an expression for temperature T as follows:

T=(q*[Delta]Vbe)/[k*ln(I1/I2)] (Eq. 4)

where [Delta]Vbe is given by:

[Delta]Vbe=[Delta]Vbe1-[(n/(n-1))*([Delta]Vbe1-[Delta]Vbe2)], (Eq. 5)

and

[Delta]Vbe=[Delta]Vbe2-[(1/(n-1))*([Delta]Vbe1-[Delta]Vbe2)], (Eq. 6)

in which [Delta]Vbe1=Vbe1-Vbe2 and [Delta]Vbe2=Vbe3-Vbe4. Thus, when I1-I4 have the relationships specified above, errors that would otherwise be present due to the series base and emitter resistances are eliminated.

The value of n is preferably 2. When n=2, the expressions for [Delta]Vbe simplify as follows:

[Delta]Vbe=[Delta]Vbe1-[2*([Delta]Vbe1-[Delta]Vbe2)], (Eq. 7)

and

[Delta]Vbe=[Delta]Vbe2-([Delta]Vbe1-[Delta]Vbe2). (Eq. 8)

Making n equal to two is preferred because, assuming the voltage measurements are converted to digital values, a multiplication by two is easily accomplished by calculating [Delta]Vbe1-[Delta]Vbe2 and performing a left-shift on the result.
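To make the correction concrete, the following is a minimal numerical sketch of equations 4-8 (not part of the patent itself). The physical constants are standard; the synthetic Vbe model, which lumps the series resistances into a single effective R = re + rb/[beta], and all numeric values are illustrative assumptions.

```python
import math

k = 1.380649e-23     # Boltzmann's constant (J/K)
q = 1.602176634e-19  # electron charge (C)

def temperature(vbe1, vbe2, vbe3, vbe4, A, n=2):
    dvbe1 = vbe1 - vbe2                             # [Delta]Vbe1
    dvbe2 = vbe3 - vbe4                             # [Delta]Vbe2
    dvbe = dvbe1 - (n / (n - 1)) * (dvbe1 - dvbe2)  # Eq. 5 (Eq. 7 when n=2)
    return (q * dvbe) / (k * math.log(A))           # Eq. 4

# Synthetic check: the series resistance shifts every Vbe, yet T is recovered
# exactly, because the (I1-I2) error term is twice the (I3-I4) term when n=2.
T_true, Is, R = 300.0, 1e-14, 2.0                   # assumed values
vbe = lambda i: (k * T_true / q) * math.log(i / Is) + i * R
I3, I4 = 10e-6, 1e-6                                # A = I3/I4 = 10
I1, I2 = 2 * I3, 2 * I4                             # n = 2
print(temperature(vbe(I1), vbe(I2), vbe(I3), vbe(I4), A=10))  # ~300.0 K
```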
A preferred embodiment of current source 12 is illustrated in FIG. 3. Here, current source 12 comprises a current mirror having an input transistor Qin and two output transistors Qout1 and Qout2; current ratio A is established by making the emitter area of Qout1 (A*X) A times the emitter area of Qout2 (X). A control signal CONTROL establishes the current in Qin. In operation, CONTROL is set to a first value, causing Qout1 and Qout2 to output currents I1 and I2. A switch connects first I1, then I2, to junction 10, and a voltage measuring means 20 measures the resulting base-emitter voltages Vbe1 and Vbe2. The CONTROL signal is then changed such that the current in Qin is halved, which also halves the currents in Qout1 (I3) and Qout2 (I4). These currents are connected to junction 10 in sequence, and Vbe3 and Vbe4 are measured. With I1-I4 and Vbe1-Vbe4 known, [Delta]Vbe1 and [Delta]Vbe2 are calculated and provided to equation 7 or 8 to determine [Delta]Vbe, which is then used by equation 4 to produce T. Operation is the same if current source 12 is arranged such that n is a value other than two, except that equation 5 or 6 must be used to calculate [Delta]Vbe.

A system for determining and storing the value [Delta]Vbe1-[Delta]Vbe2 needed in equations 5 and 6 is shown in FIG. 4. Voltage measuring means 20 is implemented with a signal conditioning circuit 22 and an analog-to-digital converter (ADC) 24. Circuit 22 samples consecutive Vbe values (e.g., Vbe1 and Vbe2, or Vbe3 and Vbe4) and provides the differences (i.e., [Delta]Vbe1 and [Delta]Vbe2) to ADC 24, which converts the [Delta]Vbe values to digital form. The ADC output is provided to a processor 26 which includes a first register 28, an offset register 30, and a subtractor 32.

When CONTROL is asserted (i.e., goes high), currents I3 and I4 are successively applied to junction 10. This produces voltages Vbe3 and Vbe4, and signal conditioning circuit 22 calculates Vbe3-Vbe4=[Delta]Vbe2; ADC 24 converts this to a digital value, which is stored in register 28 when the conversion is complete (as indicated by the ADC's generation of the "eoc" (end-of-convert) signal). CONTROL is then deasserted, the above sequence repeats for currents I1 and I2, and a digital representation of [Delta]Vbe1 appears at the output of the ADC. The output of register 28 ([Delta]Vbe2) is subtracted from the ADC output ([Delta]Vbe1) by subtractor 32, with the result stored in offset register 30. This value ([Delta]Vbe1-[Delta]Vbe2) can then be used in equation 5, 6, 7 or 8 to produce [Delta]Vbe, which is used in equation 4 to calculate T. Note that, if n=2, the [Delta]Vbe1-[Delta]Vbe2 value stored in offset register 30 can be doubled by left-shifting the data one bit; this can be useful when using equation 7 to calculate [Delta]Vbe.
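As a minimal sketch of this datapath for the n=2 case (the register and component names follow FIG. 4; representing the quantities as signed integer ADC codes is an assumption):

```python
def corrected_dvbe_code(dvbe1_code: int, dvbe2_code: int) -> int:
    """Integer model of FIG. 4: compute the Eq. 7 correction from ADC codes."""
    offset = dvbe1_code - dvbe2_code    # subtractor 32 -> offset register 30
    return dvbe1_code - (offset << 1)   # doubling via a one-bit left shift (n=2)
```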
A controller (not shown) controls the system's operating sequence by, for example, providing the CONTROL signal and controlling the switching between currents within current source 12. The controller function may be handled by processor 26, or may be a separate circuit block.

Signal conditioning circuit 22 might comprise, for example, a switched capacitor integrator which samples two base-emitter voltages (e.g., Vbe1 and Vbe2) and integrates the difference to produce the [Delta]Vbe value (e.g., [Delta]Vbe1) provided to processor 26.

Note that the implementation shown in FIG. 4 and discussed above is merely exemplary. Many other signal conditioning circuit designs might be employed, and other topologies could be used to determine [Delta]Vbe1-[Delta]Vbe2. For example, rather than use a signal conditioning circuit to calculate the difference between base-emitter voltages, each base-emitter voltage might be converted to a digital value, and the digital values subtracted as necessary to determine [Delta]Vbe1 and [Delta]Vbe2. However, use of an analog signal conditioning circuit to calculate [Delta]Vbe1 and [Delta]Vbe2 is preferred, as this approach allows the base-emitter voltages to be amplified to a level sufficient for the ADC to resolve, thereby obtaining a degree of measurement resolution that would otherwise be difficult to achieve.

While particular embodiments of the invention have been shown and described, numerous variations and alternate embodiments will occur to those skilled in the art. Accordingly, it is intended that the invention be limited only in terms of the appended claims. |
Apparatuses, systems and methods associated with management of virtualized network function identities in network function virtualization operator networks and/or software-defined networks are disclosed herein. In embodiments, an apparatus for identity management of virtualized entities may include a memory device with instructions stored thereon and a processor. The processor, in response to execution of the instructions stored on the memory device, may detect an instantiation of a virtualized network function component (VNFC) and obtain identifiers for components of a platform based on the detected instantiation, the platform to implement the VNFC. The processor may further generate a unique identifier based on the identifiers for the components of the platform and assign the unique identifier to the VNFC. Other embodiments may be described and/or claimed. |
Claims

What is claimed is: 1. An apparatus for identity management of virtualized entities, comprising: a memory device with instructions stored thereon; and one or more processors that, in response to execution of the instructions stored on the memory device, are to: detect an instantiation of a virtualized network function component (VNFC); obtain identifiers for components of a platform based on the detected instantiation, the platform to implement the VNFC; generate a unique identifier based on the identifiers for the components of the platform; and assign the unique identifier to the VNFC. 2. The apparatus of claim 1, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further perform a hash operation on the identifiers to generate the unique identifier. 3. The apparatus of any of the claims 1 and 2, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: obtain a globally unique identifier associated with the VNFC; and perform a hash operation on the identifiers and the globally unique identifier to generate the unique identifier. 4. The apparatus of any of the claims 1 and 2, wherein the identifiers for the components of the platform include one or more identifiers selected from the group consisting of a rack identifier, a root of trust identifier, a platform identifier, a basic input/output system identifier, an operating system identifier, and a virtual machine manager identifier. 5. The apparatus of any of the claims 1 and 2, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: obtain a tenant identifier for a tenant that requests use of the VNFC; and associate the unique identifier with the tenant identifier. 6. The apparatus of claim 5, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further store, in a log on the memory device, the tenant identifier with the associated unique identifier. 7. The apparatus of any of the claims 1 and 2, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: obtain a tenant identifier for a tenant that requests use of the VNFC; determine that the tenant is not authorized to utilize the VNFC based on the tenant identifier; and prevent use of the VNFC by the tenant based on the determination that the tenant is not authorized. 8. The apparatus of any of the claims 1 and 2, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: receive an instantiation request of a virtualized network function (VNF) from a cloud operating system (OS) service; determine that the VNF is to utilize the VNFC based on the instantiation request; and transmit, to the cloud OS service, the unique identifier assigned to the VNFC for association with a root of trust (RoT) associated with the VNF. 9. The apparatus of claim 8, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: generate a second unique identifier based on the unique identifier assigned to the VNFC; assign the second unique identifier to the VNF; and register the second unique identifier assigned to the VNF. 10.
The apparatus of claim 8, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: retrieve a secure timestamp based on the instantiation request; associate the secure timestamp with the unique identifier assigned to the VNFC; and transmit, to the cloud OS service, the secure timestamp with the unique identifier for association with the RoT. 11. The apparatus of claim 9, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: log operations, performed by the VNF, associated with the second unique identifier; and store the logged operations on the memory device. 12. The apparatus of any of the claims 1 and 2, wherein the unique identifier is assigned to the VNFC by a root of trust (RoT) of the apparatus, and wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further: log the unique identifiers in the RoT for management by the RoT. 13. One or more computer-readable media having instructions stored thereon, wherein the instructions, in response to execution by a device, cause the device to: process an instantiation request from a network function virtualization (NFV) infrastructure for a virtualized network function component (VNFC); obtain identifiers for components of a platform to implement the VNFC based on the instantiation request; generate a unique identifier for the VNFC based on the identifiers for the components of the platform; and transmit the unique identifier to the NFV infrastructure for association with the VNFC. 14. The one or more computer-readable media of claim 13, wherein the instructions, in response to execution by the device, cause the device to further: perform a hash operation on the identifiers for the components of the platform, wherein the unique identifier is set to a result of the hash operation. 15. The one or more computer-readable media of any of the claims 13 and 14, wherein the instructions, in response to execution by the device, cause the device to further: extract, from the instantiation request, a globally unique identifier associated with the VNFC; and perform a hash operation on the identifiers for the components of the platform and the globally unique identifier associated with the VNFC, wherein the unique identifier is set to a result of the hash operation. 16. The one or more computer-readable media of any of the claims 13 and 14, wherein the instructions, in response to execution by the device, cause the device to further: obtain a tenant identifier for a tenant that requests use of the VNFC; and associate the unique identifier with the tenant identifier. 17. The one or more computer-readable media of claim 16, wherein the instructions, in response to execution by the device, cause the device to further: store, in a log, the tenant identifier with the associated unique identifier. 18. The one or more computer-readable media of any of the claims 13 and 14, wherein the instructions, in response to execution by the device, cause the device to further: obtain a tenant identifier for a tenant that requests use of the VNFC; determine that the tenant is not authorized to utilize the virtualized network function based on the tenant identifier; and prevent use of the VNFC by the tenant based on the determination that the tenant is not authorized. 19.
The one or more computer-readable media of any of the claims 13 and 14, wherein the instructions, in response to execution by the device, cause the device to further: receive an instantiation request of a virtualized network function (VNF) from a cloud operating system (OS) service; determine that the VNF is to utilize the VNFC based on the instantiation request; and transmit, to the cloud OS service, the unique identifier assigned to the VNFC for association with a root of trust (RoT) associated with the VNF. 20. The one or more computer-readable media of claim 19, wherein the instructions, in response to execution by the device, cause the device to further: generate a second unique identifier based on the unique identifier assigned to the VNFC; assign the second unique identifier to the VNF; and register the second unique identifier assigned to the VNF. 21. The one or more computer-readable media of any of the claims 13 and 14, wherein the instructions, in response to execution by the device, cause the device to further: retrieve a secure timestamp based on the instantiation request; associate the secure timestamp with the unique identifier assigned to the VNFC; and transmit, to the NFV infrastructure, the secure timestamp with the unique identifier for association with a root of trust (RoT) of the platform. 22. The one or more computer-readable media of any of the claims 13 and 14, wherein the unique identifier is generated by a root of trust (RoT) of the platform, and wherein the instructions, in response to execution by the device, cause the device to further: log the unique identifiers in the RoT for management by the RoT. 23. An apparatus for virtualized entity identity management, comprising: means for obtaining identifiers for components of a platform in response to instantiation of a virtualized network function component (VNFC), the platform to implement the VNFC; means for generating a unique identifier for the VNFC based on the identifiers for the components of the platform; and means for assigning the unique identifier to the VNFC. 24. The apparatus of claim 23, further comprising: means for performing a hash operation on the identifiers for the components of the platform, wherein the unique identifier is set to a result of the hash operation. 25. The apparatus of any of the claims 23 and 24, further comprising: means for obtaining a globally unique identifier associated with the VNFC; and means for performing a hash operation on the identifiers for the components of the platform and the globally unique identifier associated with the VNFC, wherein the unique identifier is set to a result of the hash operation. 26. The apparatus of any of the claims 23 and 24, further comprising: means for obtaining a tenant identifier for a tenant that requests use of the VNFC; and means for associating the unique identifier with the tenant identifier. 27. The apparatus of any of the claims 23 and 24, further comprising: means for obtaining a tenant identifier for a tenant that requests use of the VNFC; means for determining that the tenant is not authorized to utilize the VNFC based on the tenant identifier; and means for preventing use of the VNFC by the tenant based on the determination that the tenant is not authorized. 28.
The apparatus of any of the claims 23 and 24, further comprising: means for receiving an instantiation request of a virtualized network function (VNF) from a cloud operating system (OS) service; means for determining that the VNF is to utilize the VNFC based on the instantiation request; and means for transmitting, to the cloud OS service, the unique identifier assigned to the VNFC for association with a root of trust (RoT) associated with the VNF. 29. The apparatus of claim 28, further comprising: means for generating a second unique identifier based on the unique identifier assigned to the VNFC; means for assigning the second unique identifier to the VNF; and means for registering the second unique identifier assigned to the VNF. 30. The apparatus of claim 28, further comprising: means for retrieving a secure timestamp based on the instantiation request; means for associating the secure timestamp with the unique identifier assigned to the VNFC; and means for transmitting, to the cloud OS service, the secure timestamp with the unique identifier for association with the RoT. |
IDENTITY MANAGEMENT OF VIRTUALIZED ENTITIES

Cross Reference to Related Applications

The present application claims priority to U.S. Provisional Patent Application Ser. No. 62/295,924, entitled CRYPTOGRAPHIC IDENTITIES MANAGEMENT IN VIRTUAL NFV AND SDN OPERATOR NETWORKS, filed February 16, 2016, which is herein incorporated by reference in its entirety.

Technical Field

The present disclosure relates to the field of electronic circuits. More particularly, the present disclosure relates to the management of virtualized network function identities in network function virtualization operator networks and software-defined networks.

Background

The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.

Legacy security systems based on physical system technology have relied on globally unique identifiers cryptographically bound to immutable identities, such as media access control (MAC) addresses, internet protocol (IP) addresses, or other identities that are embedded into security credentials. The globally unique identifiers have been domain-wide unique, allowing the physical entity to be uniquely identified within the corresponding domain. The globally unique identities are immutable for the lifetime of the system.

Brief Description of the Drawings

Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

Figure 1 illustrates an example network functions virtualization environment, according to various embodiments.
Figure 2 illustrates an example message flow diagram among a virtualized network function component and a security controller, according to various embodiments.
Figure 3 illustrates an example message flow diagram among elements of a network functions virtualization environment, according to various embodiments.
Figure 4 illustrates an example flow diagram of operations within a network functions virtualization environment, according to various embodiments.
Figure 5 illustrates an example computing device that may employ the apparatuses and/or methods described herein.
Figure 6 illustrates an example computer-readable storage medium that may employ the apparatuses and/or methods described herein.

Detailed Description

Apparatuses, systems and methods associated with management of virtualized network function identities in network function virtualization operator networks and/or software-defined networks are disclosed herein. In embodiments, an apparatus for identity management of virtualized entities may include a memory device with instructions stored thereon and a processor. The processor, in response to execution of the instructions stored on the memory device, may detect an instantiation of a virtualized network function component (VNFC) and obtain identifiers for components of a platform based on the detected instantiation, the platform to implement the VNFC. The processor may further generate a unique identifier based on the identifiers for the components of the platform and assign the unique identifier to the VNFC.
Other embodiments may be described and/or claimed.

In the following detailed description, reference is made to the accompanying drawings which form a part hereof, wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.

Aspects of the disclosure are disclosed in the accompanying description. Alternate embodiments of the present disclosure and their equivalents may be devised without departing from the spirit or scope of the present disclosure. It should be noted that like elements disclosed below are indicated by like reference numbers in the drawings.

Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.

For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).

The description may use the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous.

As used herein, the term "circuitry" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

Figure 1 illustrates an example network functions virtualization (NFV) environment 100, according to various embodiments. The NFV environment 100 may include one or more virtualized network functions that may dynamically instantiate and terminate during operation of the NFV environment 100. It is desirable to have globally unique identifiers assigned to these virtualized network functions. Unlike legacy systems, where globally unique identifiers are bound to immutable identities, the dynamic instantiation and termination of the virtualized network functions present challenges, including assigning the globally unique identifiers upon dynamic instantiation of the virtualized network functions and maintaining the assignment of the globally unique identifiers as life cycle events occur with the virtualized network functions.
The embodiments disclosed herein may be able to assign the globally unique identifiers upon dynamic instantiation of the virtualized network functions and maintain the assignments.

The network function virtualization environment 100 may include a tenant management portion 102, a network functions virtualization infrastructure (NFVI) portion 104, and/or an operations management portion 106. The tenant management portion 102 may include one or more tenants, such as tenant 108a and tenant 108b, an operations support system (OSS)/business support system (BSS) 110, or some combination thereof. The tenant 108a and the tenant 108b may be one or more end-user devices (such as computer devices, user devices, cellular phones, handheld computer devices, or some combination thereof), one or more subscribers to an NFV service provider, or some combination thereof. In some embodiments, the tenant 108a and the tenant 108b may be user devices that request and/or receive services from a cellular network provider.

The OSS/BSS 110 may be provided by an NFV service provider (which may also be referred to as an operator herein). The OSS/BSS 110 may be communicatively coupled to the tenant 108a and/or the tenant 108b. The tenant 108a and/or the tenant 108b may transmit a request for an NFV service to the OSS/BSS 110. The OSS/BSS 110 may receive the request and forward the request to the operation management portion 106 for scheduling of the NFV service.

The operation management portion 106 may receive the request for the NFV service and schedule performance of the NFV service with the NFVI portion 104. The NFVI portion 104 may include one or more virtual machines and/or containers that operate on a general purpose hardware platform, such as a computer device and/or a server. The NFVI portion 104 may perform the NFV service and return a result of the NFV service to the operation management portion 106 for provision to the tenant 108a and/or the tenant 108b that requested the service.

The NFVI portion 104 may include an NFVI 114 to perform the requested NFV services and output the results of the NFV services to the operation management portion 106. The NFVI 114 may include one or more virtualized functions, such as virtualized network function (VNF) 164, VNF-N 162, virtualized switch/router 160, virtualized security functions 158, or some combination thereof. The virtualized security functions 158 may include a user data plane 154 developed through use of a data plane development kit. The one or more virtualized functions may include computer code to cause computer hardware to perform one or more functions that, in legacy systems, were previously performed by the individual tenants. The one or more virtualized functions may be assigned computer hardware from one or more devices to perform operations associated with the one or more virtualized functions. Each of the one or more virtualized functions may have a corresponding unique identifier, such as VNF ID 166 that corresponds with the VNF 164, Vswitch ID 152 that corresponds with the virtualized switch/router 160, SecMon ID 156 that corresponds with the virtualized security functions 158, or some combination thereof.

The one or more virtualized functions may each include one or more virtualized function components, such as virtualized network function component (VNFC) 148 and/or VNFC 150. The VNFC 148 and/or the VNFC 150 may dynamically instantiate, migrate, clone, and run in active-active and/or active-passive modes.
The VNFC 148 and/or the VNFC 150 may each perform a portion of the operations to be performed by the VNF 164. Each of the VNFC 148 and/or the VNFC 150 may be assigned corresponding computer code, and hardware and/or software components of the NFVI 114, to perform operations associated with the VNFC 148 and/or the VNFC 150, respectively. The VNFC 148 may utilize different computer code, and different hardware and/or software components of the NFVI 114, than the VNFC 150 utilizes.

The NFVI 114 may further include hardware infrastructure 146 to receive computer code from the one or more virtualized functions and perform the operations associated with the computer code. For example, in response to a request for performance of an operation associated with the virtualized switch/router 160, the virtualized switch/router 160 may provide computer code to the hardware infrastructure 146 to perform the operation associated with the computer code. The hardware infrastructure 146 may be associated with a unique identifier, such as rack ID 116.

The hardware infrastructure 146 may include one or more platforms, such as platform P1 134 and/or platform Pn 118. The platform P1 134 and/or the platform Pn 118 may include computer hardware that may perform operations in response to execution of computer software and/or computer code. The platform P1 134 may include one or more field programmable gate arrays (FPGAs) 132, one or more cores 120, one or more input/output (I/O) interfaces and/or network interface cards (NICs) 122, one or more communications system monitor equipment (CSME) 126, one or more busses and/or interconnects 128, or some combination thereof. One or more components of the platform P1 134 may correspond to the processors described in, for example, Figure 5. The platform P1 134 may be associated with a unique identifier, such as platform ID 130.

The platform P1 134 may further include a root of trust (RoT), which may be associated with a unique identifier, RoT ID 124. The RoT may be an elementary security piece that may originate a chain of trust. The RoT can delegate trust upwards in a chain to other software. The other software may include firmware, unified extensible firmware interface (UEFI) basic input/output system (BIOS), operating system (OS) bootloader, OS kernel, VNF, virtual machine, or some combination thereof. It is to be understood that platform Pn 118 may include one or more of the features and/or computer hardware components described in relation to platform P1 134.

The hardware infrastructure 146 may further include software for performance of basic operations. The software for basic operations may include an OS 140 (such as a hypervisor, OS, cloud OS, or some combination thereof), a UEFI BIOS 138, or some combination thereof. Each of the software components may be associated with a corresponding unique identifier. For example, the OS 140 may be associated with OS ID 142 and the UEFI BIOS 138 may be associated with BIOS ID 136.
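As an illustrative sketch (not taken from the disclosure) of how such a chain of trust can be built upward from the RoT, each stage can measure (hash) the next component before handing control to it, in the style of a measured boot. The stage names and the use of SHA-256 are assumptions for illustration only.

```python
import hashlib

def extend(chain_digest: bytes, component_image: bytes) -> bytes:
    """Fold a measurement (hash) of the next component into the running digest."""
    measurement = hashlib.sha256(component_image).digest()
    return hashlib.sha256(chain_digest + measurement).digest()

digest = b"\x00" * 32  # initial value anchored in the RoT
for stage in (b"firmware", b"UEFI BIOS", b"OS bootloader", b"OS kernel", b"VMM"):
    digest = extend(digest, stage)  # trust is delegated upward stage by stage
```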
The operation management portion 106 may act as an interface between the tenant management portion 102 and the NFVI portion 104. The operation management portion 106 may include an orchestrator 168. The orchestrator 168 may receive requests for NFV services from the OSS/BSS 110 and may perform orchestration and management of the NFVI 114 and software to provide the NFV services.

The operation management portion 106 may further include a VNF manager (VNFM) 170 and a virtualized infrastructure manager (VIM) 176. The VNFM 170 may be responsible for control and management of the one or more virtualized network functions, including instantiation, update, query, scaling, and/or termination of the one or more virtualized network functions. The VIM 176 may control and manage interaction of the one or more virtualized network functions with the hardware infrastructure 146, including computing, storage, and/or network resources of the hardware infrastructure 146.

The orchestrator 168, in response to receiving the requests for NFV services from the OSS/BSS 110, may determine which resources of the NFVI 114 are to be utilized to fulfill the requests. The orchestrator 168 may communicate with the VNFM 170 and/or the VIM 176 to instruct the VNFM 170 and/or the VIM 176 to schedule the resources to be utilized to fulfill the requests. The VNFM 170 and/or the VIM 176, in response to the communication from the orchestrator 168, may schedule the resources to be utilized corresponding to each of the VNFM 170 and/or the VIM 176, thereby generating a virtual machine that includes the resources to be utilized based on the requests.

The operation management portion 106 may further include a security controller 172. The security controller 172 may be communicatively coupled to the orchestrator 168, the VNFM 170, the VIM 176, the NFVI 114, the hardware infrastructure 146, or some combination thereof. The security controller 172 may include an identity manager 174 for identity management associated with requests for NFV services and resources of the NFVI 114 to fulfill the requests.

The security controller 172 may detect and/or receive a notification from the orchestrator 168 that the orchestrator 168 is generating an instantiation of a VNFC. The generation of the instantiation may occur in response to a request, received by the orchestrator 168 from the OSS/BSS 110, for performance of an NFV service. In some embodiments, the security controller 172 may detect and/or receive a notification from the orchestrator 168 that the orchestrator 168 received a request for performance of the NFV service. In some embodiments, the security controller 172 may detect the instantiation of the VNFC based on operations associated with the VNFM 170 and/or the VIM 176.

In response to detecting or receiving the notification associated with the instantiation of the VNFC, the security controller 172 may obtain identifiers associated with resources to be assigned to the VNFC and/or identifiers associated with the request for the NFV service that induced instantiation of the VNFC. The security controller 172 may generate one or more requests to be sent to the orchestrator 168, the VNFM 170, the VIM 176, the NFVI 114, the hardware infrastructure 146, or some combination thereof, wherein the requests may request identifiers of resources to be associated with the VNFC. The security controller 172 may transmit the requests to the orchestrator 168, the VNFM 170, the VIM 176, the NFVI 114, the hardware infrastructure 146, or some combination thereof.

In response to the request for the identifiers, the orchestrator 168, the VNFM 170, the VIM 176, the NFVI 114, the hardware infrastructure 146, or some combination thereof may transmit identifiers for the resources to be associated with the VNFC to the security controller 172.
In some embodiments, the VNFM 170 and/or the VIM 176 may transmit identifiers for the resources corresponding to the VNFM 170 and/or the VIM 176, respectively, to the security controller 172, while in other embodiments any of the elements that transmit the identifiers may transmit all of the identifiers for the resources to be associated with the VNFC. The identifiers for the resources may include the OS ID 142, the BIOS ID 136, the platform ID 130, the RoT ID 124, the rack ID 116, or some combination thereof.

The security controller 172 may provide the identifiers for the resources to the identity manager 174. The identity manager 174 may generate a unique identifier for the VNFC based on the identifiers. The identity manager 174 may generate the unique identifier through performance of a hash operation applied to the identifiers to produce the unique identifier. The hash operation may include performance of one or more hash functions with respect to the identifiers, the one or more hash functions including a Zobrist hash function, a universal one-way hash function, a tabulation hash function, a Rabin fingerprint hash function, a non-cryptographic hash function, a keyed cryptographic hash function, an unkeyed cryptographic hash function, or some combination thereof. As the unique identifier is generated through performance of a hash of the identifiers for the resources, it may be possible to identify the resources associated with the VNFC based on the unique identifier.

In some embodiments, the security controller 172 may further obtain a globally unique identifier associated with the VNFC, such as VNFC ID 144. The identity manager 174 may further include the globally unique identifier in the hash operation to generate the unique identifier. Accordingly, it may be possible to determine the VNFC to which the unique identifier is associated based on the unique identifier itself.

The identity manager 174 may assign the unique identifier to the VNFC. The identity manager 174 may store the unique identifier, with an indication of the association of the unique identifier with the VNFC, in a memory device of the security controller 172 and/or a memory device associated with the security controller 172. The memory device may be a secure storage device. In some embodiments, the identity manager 174 may provide the unique identifier with the indication to security administrators associated with the NFVI 114 and/or the OSS/BSS 110 in response to a request received from the security administrators and/or the assignment of the unique identifier to the VNFC.

In some embodiments, the security controller 172 may further generate a request for a tenant ID, such as tenant ID 109, associated with the detected instantiation of the VNFC. The security controller 172 may transmit the request to the orchestrator 168, the VNFM 170, the VIM 176, the NFVI 114, the hardware infrastructure 146, or some combination thereof, and receive the tenant ID in response to the request. The identity manager 174 may associate the tenant ID with the VNFC and may store the association in the memory device of, or associated with, the security controller 172. In some embodiments, the identity manager 174 may include the tenant ID in the hash operation to produce the unique identifier. In these embodiments, it may be possible to identify the tenant associated with the VNFC based on the unique identifier.
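A minimal sketch of such a hash-based derivation is given below. It assumes a SHA-256 hash (one of many hash functions the disclosure permits) and illustrative identifier strings; the function name, field ordering, and string encoding are not prescribed by the disclosure.

```python
import hashlib

def vnfc_unique_id(component_ids, vnfc_guid=None, tenant_id=None):
    """Derive a unique VNFC identifier from platform component identifiers."""
    h = hashlib.sha256()
    for ident in sorted(component_ids):   # e.g., rack, RoT, platform, BIOS, OS IDs
        h.update(ident.encode())
    if vnfc_guid is not None:             # optionally bind the VNFC's GUID into the ID
        h.update(vnfc_guid.encode())
    if tenant_id is not None:             # optionally bind the requesting tenant
        h.update(tenant_id.encode())
    return h.hexdigest()

# Hypothetical identifiers mirroring the reference numerals of Figure 1.
uid = vnfc_unique_id(["rack-116", "rot-124", "platform-130", "bios-136", "os-142"],
                     vnfc_guid="vnfc-144", tenant_id="tenant-109")
```

Because the hash is deterministic, an asserted association (e.g., with a platform, a GUID, or a tenant) can be checked by recomputing the digest over the claimed inputs.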
In embodiments where the tenant ID is included in the hash operation, it may be possible to identify the tenant associated with the VNFC based on the unique identifier. The identity manager 174 may further generate a second unique identifier, such as VNF ID 166, for a VNF based on the unique identifier associated with the VNFC. The security controller 172 may obtain identifiers associated with one or more VNFCs (such as VNFC ID 144) of the VNF in response to detection of an instantiation of the VNF. The identity manager 174 may perform a hash operation applied to the identifiers associated with the one or more VNFCs to produce the second unique identifier. The identity manager 174 may store the second unique identifier in the memory device of, or associated with, the security controller 172. In some embodiments, the identity manager 174 may further associate the second unique identifier with a tenant ID (such as tenant ID 109) associated with the tenant that requested and/or caused the instantiation of the VNF. The identity manager 174 may store the tenant ID in the memory device, the tenant ID associated with the second unique identifier. In some embodiments, the identity manager 174 may generate a third unique identifier for a service function chain (SFC) that utilizes the VNF. The security controller 172 may obtain one or more VNF IDs (such as VNF ID 166) associated with VNFs within the SFC. The identity manager 174 may perform a hash on the one or more VNF IDs to generate the third unique identifier. The identity manager 174 may associate the third unique identifier with the SFC and store, in the memory device, the third unique identifier and/or an indication of the association with the SFC. The unique identifier, the second unique identifier, and/or the third unique identifier (collectively, 'the unique identifiers') may be utilized in many different processes. The unique identifiers may be utilized for initial provisioning of the NFVI 114 and/or the platform 134, which includes remote provisioning. Further, the unique identifiers may be utilized for updates of the platform 134 and/or the NFVI 114, including firmware, unified extensible firmware interface, operating system, open source business library, open source cloud computing software, and/or other software updates. The unique identifiers may further be presented at instantiation of communication protocols associated with the unique identifiers, including internet protocol security, secure sockets layer protocol, encryption protocol, and/or accelerator attachment protocol. In some embodiments, the unique identifiers may be utilized for security. The security controller 172 may store one or more lists of identifiers associated with authorized users that may utilize the NFVI 114, a platform (such as the platform P1 134 and the platform Pn 118), a VNF (such as the VNF 164 and/or the VNF-N 162), the virtualized switch/router 160, the virtualized security functions 158, or some combination thereof. The authorized users may include tenants, administrators, and/or certain software/firmware. In response to detection of a request for an NFV service, the security controller 172 may obtain an identifier associated with the requesting entity, such as tenant ID 109. The security controller 172 may compare the identifier with a list of identifiers of authorized users for the NFV service to determine whether the requesting entity is authorized to utilize the NFV service; a sketch of such a check follows.
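As a rough illustration of the authorization check described above, the sketch below keeps a per-service allow list and consults it when a request arrives. The class and method names are hypothetical; the disclosure does not prescribe any particular data structure for the lists of authorized users.

    class AuthorizationLists:
        # Hypothetical allow-list store for a security controller.

        def __init__(self):
            # Maps an NFV service to the set of authorized identifiers
            # (tenant IDs, administrator IDs, and/or software identifiers).
            self._authorized = {}
            self._access_log = []

        def authorize(self, service, requester_id):
            self._authorized.setdefault(service, set()).add(requester_id)

        def check_request(self, service, requester_id):
            # Return True if the requester may use the service, and log
            # the attempt either way for later auditing.
            allowed = requester_id in self._authorized.get(service, set())
            self._access_log.append((service, requester_id, allowed))
            return allowed

    lists = AuthorizationLists()
    lists.authorize("example-nfv-service", "tenant-109")
    assert lists.check_request("example-nfv-service", "tenant-109")
    assert not lists.check_request("example-nfv-service", "tenant-999")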
In response to determining that the requesting entity is not authorized, the security controller 172 may prevent the NFVI 114 from performing the NFV service. Further, the security controller 172 may store an identifier associated with the requesting entity in a log, stored on the memory device of, or associated with, the security controller 172, the log including entities that accessed and/or attempted to access the NFV service. In some embodiments, the RoT may manage and assign the unique identifiers. The RoT may access secure timestamps and other security authorization credentials to associate with the unique identifiers. In some embodiments, the RoT may root keys with the unique identifiers. The RoT may later utilize the rooted keys to verify the authenticity of the unique identifiers and attest to the authenticity of the unique identifiers. Figure 2 illustrates an example message flow diagram 200 between a VNFC 202 and a security controller 206, which may include an identity manager 204, according to various embodiments. The VNFC 202 may include one or more of the features of the VNFC 148 and/or the VNFC 150, described in relation to Figure 1. Further, the security controller 206 and/or the identity manager 204 may include one or more of the features of the security controller 172 and/or the identity manager 174, respectively, described in relation to Figure 1. In 208, the VNFC 202 may transmit a security association establishment message to the identity manager 204. The VNFC 202 may transmit the message in response to instantiation of the VNFC 202. The identity manager 204 may detect the instantiation of the VNFC 202 based on the message. In 210, the identity manager 204 may provide a second security association establishment message to the security controller 206. The identity manager 204 may provide the second message in response to reception of the message 208. The second message may include the same information as the message 208. In 212, the VNFC 202 may perform operations associated with composition of a platform configuration. The VNFC 202 may compose the platform configuration through identification and/or association of one or more resources for utilization in performance of operations associated with the VNFC 202. The one or more resources may include one or more of the hardware/software components of the NFVI 114, described in relation to Figure 1, including the platform P1 134, the platform Pn 118, the UEFI BIOS 138, the operating system 140, or some combination thereof. In 214, the VNFC 202 may transmit an identity request message to the identity manager 204. The identity request message may request a unique identifier for the VNFC 202, such as VNFC ID 144, described in relation to Figure 1. The identity request message may include one or more identifiers associated with the one or more resources to be utilized by the VNFC 202. The one or more identifiers may include the OS ID 142, the BIOS ID 136, the platform ID 130, the RoT ID 124, the rack ID 116, or some combination thereof, as described in relation to Figure 1. The identity manager 204 may perform a hash function with the one or more identifiers to generate a unique identifier for the VNFC 202.
The generation of the unique identifier, by the identity manager 204, for the VNFC 202 may include one or more of the features of the generation of the unique identifier by the identity manager 174, as described in relation to Figure 1. The identity manager 204 may store the generated unique identifier in a database on a memory device of, or associated with, the security controller 206. The unique identifier may indicate a platform configuration of the VNFC 202 based on the unique identifier being generated through a hash operation of the one or more identifiers. In 218, the identity manager 204 may transmit an identity response message to the VNFC 202. The identity response message may include the generated unique identifier for the VNFC 202. The VNFC 202 may store the generated unique identifier. In 220, the identity manager 204 may transmit an identifier register message to the security controller 206. The identifier register message may request that the security controller 206 add the generated unique identifier to a log, stored on the memory device of, or associated with, the security controller 206, of VNFC identifiers. In 222, in response to the identifier register message, the security controller 206 may transmit a VNFC information request message to the identity manager 204. In 224, the identity manager 204 may transmit a VNFC information message to the security controller 206 in response to the VNFC information request message. The VNFC information message may include the generated unique identifier. The security controller 206 may add the generated unique identifier to a log, stored on the memory device of, or associated with, the security controller 206, of VNFCs. The security controller 206 may further store a list of authorized users associated with the VNFC 202 in the log. The authorized users may include tenants, administrators, and/or certain software/firmware. Figure 3 illustrates an example message flow diagram 300 among elements of an NFV environment, according to various embodiments. The NFV environment may include an orchestrator 302, a VIM 304, a nova agent NFVI 306, an OS/virtual machine manager (VMM) NFVI 308, an identity manager 310, a RoT/trusted execution environment (TEE) 312, a security controller 314, and/or a VNFM 316. The NFV environment may include one or more of the features of the NFV environment 100, described in relation to Figure 1. In particular, the orchestrator 302 may include one or more features of the orchestrator 168; the VIM 304 may include one or more of the features of the VIM 176; the nova agent NFVI 306 and/or the OS/VMM NFVI 308 may include one or more of the features of the NFVI 114; the identity manager 310 may include one or more of the features of the identity manager 174; the RoT/TEE 312 may include one or more of the features of the CSME 126; the security controller 314 may include one or more of the features of the security controller 172; and the VNFM 316 may include one or more of the features of the VNFM 170. The nova agent NFVI 306 may be utilized for operation of a cloud server that implements the NFV environment. The nova agent NFVI 306 may provide means of interacting with the cloud server through an application program interface of a cloud control panel.
The nova agent NFVI 306 may perform startup functions of the cloud server, including configuring the cloud server's network, establishing the cloud server's hostname, and/or setting the cloud server's root or admin passwords. The OS/VMM NFVI 308 may be utilized for management of an NFVI (such as the NFVI 114, described in relation to Figure 1) and/or a virtual machine or virtualized datacenter provided by the NFV environment. The OS/VMM NFVI 308 may enable configuration and management of a virtualization host, networking, and/or storage management for the NFV environment. In 318, the VIM 304 may transmit a request for platform identifiers to the identity manager 310. The VIM 304 may transmit the request for platform identifiers in response to the establishment of the NFV environment, introduction of new platforms into the NFV environment, identification of a platform within the NFV environment which does not have a platform identifier associated with the platform, or some combination thereof. The identity manager 310 may generate a unique identifier for each of the platforms in the NFV environment not already associated with a platform identifier and may transmit the unique identifiers to the VIM 304 for association with each corresponding platform. In 320, the RoT/TEE 312 may be associated with an NFVI of the NFV environment. The RoT/TEE 312 may transmit a list of platform identifiers corresponding to platforms within the NFVI to the security controller 314. The RoT/TEE 312 may transmit the list in response to the VIM 304 receiving the platform identifiers from the identity manager 310 and/or the VIM 304 associating the platform identifiers with each corresponding platform. The security controller 314 may store the list of the platform identifiers and indications of which platform each of the platform identifiers is associated with. In 322, the VIM 304 may compose the platform. The VIM 304 may compose the platform by associating the platform identifiers with each of the corresponding platforms within the NFVI of the NFV environment. The VIM 304 may identify the components (such as the FPGA 132, the cores 120, the I/O and/or NIC 122, the CSME 126, and/or the busses and/or interconnects 128, as described in relation to Figure 1) associated with each of the platforms within the NFVI. The VIM 304 may store information that indicates the components included in each of the platforms and the platform with which each of the components is associated. In 324, the OS/VMM NFVI 308 may transmit a request for a timestamp to the RoT/TEE 312. The OS/VMM NFVI 308 may transmit the request for the timestamp in response to a request for instantiation of a VNF and/or a VNFC. The RoT/TEE 312 may respond to the request by transmitting a timestamp corresponding to the time of reception of the request to the OS/VMM NFVI 308. In some embodiments, the timestamp from the RoT/TEE 312 may be a secure timestamp. The secure timestamps may be generated by a trusted source that cannot be falsified. Some examples of trusted sources include Intel's software guard extensions (SGX), Intel's converged security and manageability engine (CSME), and Intel's innovation engine (IE). The OS/VMM NFVI 308 may assign the secure timestamp to a VNF and/or a VNFC upon instantiation. In 326, the orchestrator 302 may transmit an indication that an event associated with a VNF life cycle has occurred to the VNFM 316. The orchestrator 302 may transmit the indication in response to a request for instantiation of a VNF and/or a VNFC.
The indication may indicate that a request for instantiation of the VNF and/or the VNFC has been received by the orchestrator 302. In 328, the VNFM 316 may transmit a request and/or instructions to the VIM 304 to instantiate a VNF and/or a VNFC. The VNFM 316 may transmit the request and/or instructions in response to reception of the indication that an event associated with the VNF life cycle has occurred from the orchestrator 302. The request and/or instructions may include an indication of the VNF and/or the VNFC to be instantiated, a list of the types of NFVI components (such as the platform P1 134, the platform Pn 118, the OS 140, and/or the UEFI BIOS 138, as described in relation to Figure 1) to be utilized by the VNF and/or the VNFC, or some combination thereof. In 330, the VIM 304 may transmit a request for VNF and/or VNFC identifiers to the identity manager 310. The VIM 304 may transmit the request in response to reception of the request and/or instructions from the VNFM 316 to instantiate the VNF and/or the VNFC. The request for VNF and/or VNFC identifiers may include a list of identifiers for one or more VNFCs (such as VNFC ID 144, Vswitch ID 152, and/or SecMon ID 156, described in relation to Figure 1) and/or components of an NFVI (such as OS ID 142, BIOS ID 136, platform ID 130, RoT ID 124, and/or rack ID 116, as described in relation to Figure 1) to be associated with the VNF and/or the VNFC to be instantiated. Further in 330, the identity manager 310 may generate one or more unique identifiers to be associated with the VNF and/or VNFC to be instantiated. The identity manager 310 may generate the one or more unique identifiers through the process of generation of unique identifiers described in relation to Figure 1, including application of a hash operation (such as any of the hash operations described in relation to Figure 1) to the list of identifiers to be associated with the VNF and/or the VNFC to be instantiated. The identity manager 310 may transmit the generated one or more unique identifiers to the VIM 304 for association, by the VIM 304, with the VNF and/or the VNFC to be instantiated. In 332, the identity manager 310 may register the generated one or more unique identifiers, for association with the VNF and/or the VNFC to be instantiated, with the security controller 314. The identity manager 310 may provide the security controller 314 with the unique identifiers and/or indications of the VNF and/or the VNFC with which each unique identifier is to be associated. The security controller 314 may store the unique identifiers and/or the indications in a log in a memory device of, or associated with, the security controller 314. In some embodiments, the identity manager 310 may further provide the security controller 314 with a list of authorized users that may utilize the VNF and/or the VNFC to be instantiated, which the security controller 314 may store in association with each unique identifier. In other embodiments, the security controller 314 may generate the list of authorized users that may utilize the VNF and/or the VNFC and associate the list with each unique identifier provided by the identity manager 310. In 334, the VIM 304 may transmit a spin-up VNF request to the nova agent NFVI 306. The VIM 304 may transmit the spin-up VNF request in response to reception, by the VIM 304, of the unique identifiers, for association with the VNF and/or VNFC to be instantiated, from the identity manager 310.
The spin-up VNF request may include the unique identifiers for association with the VNF and/or VNFC to be instantiated and/or a request to associate one or more components of the NFVI (such as the platform P1 134, the platform Pn 118, the OS 140, and the UEFI BIOS 138, as described in relation to Figure 1), associated with the nova agent NFVI 306, with the VNF and/or VNFC to be instantiated. In 336, the nova agent NFVI 306 may transmit a spin-up VNF request to the OS/VMM NFVI 308. The nova agent NFVI 306 may transmit the spin-up VNF request in response to reception of the spin-up VNF request from the VIM 304. The spin-up VNF request transmitted by the nova agent NFVI 306 may include the same information as the spin-up VNF request transmitted by the VIM 304. The nova agent NFVI 306 may translate the spin-up VNF request received from the VIM 304 into computer code and/or a format that may be operable by the OS/VMM NFVI 308. In 338, the OS/VMM NFVI 308 may transmit a signal to the RoT/TEE 312 that attests to the unique identifiers to be associated with the VNF and/or VNFC to be instantiated. The OS/VMM NFVI 308 may transmit the signal in response to reception of the spin-up VNF request from the nova agent NFVI 306. The RoT/TEE 312 may add the unique identifiers to a list of trusted applications/functions. In 340, the OS/VMM NFVI 308 may instantiate the VNF and/or the VNFC. The OS/VMM NFVI 308 may instantiate the VNF and/or the VNFC in response to reception of the spin-up VNF request received from the nova agent NFVI 306. In 342, the OS/VMM NFVI 308 may register the instantiated VNF and/or the VNFC with the security controller 314. The OS/VMM NFVI 308 may register the instantiated VNF and/or the VNFC through transmission of an indication that the VNF and/or the VNFC has been instantiated, and/or transmission of the unique identifier associated with the VNF and/or the VNFC. In some embodiments, a VNF and/or VNFC may send a registration message upon activation to the security controller 314. The VIM 304, the nova agent NFVI 306, the OS/VMM NFVI 308, the VNFM 316, or some combination thereof, may provide the registration message to the security controller 314. In some embodiments, the security controller 314 may be communicatively coupled to a secure storage device and may store information in the secure storage device. The security controller 314 may store logs, audit trails, traces, or some combination thereof, in the secure storage device. The security controller 314 may further store corresponding identities and/or timestamps with the logs, audit trails, traces, or some combination thereof, in the secure storage device. The timestamps may include the timestamp and/or secure timestamp obtained by the OS/VMM NFVI 308 in 324. In some embodiments, one or more of the messages disclosed in relation to the message flow diagram 200 and/or the message flow diagram 300 may be protected and/or encrypted. An RoT, such as the RoT/TEE 312, may be utilized to protect and/or encrypt the messages. Further, the protection and/or encryption of the messages may be based on secure timestamps associated with the messages, VNFs corresponding to the messages, VNFMs corresponding to the messages, or some combination thereof. The messages may be protected by secure sockets layer, transport layer security, internet protocol security, message-wise protection, or some combination thereof. Figure 4 illustrates an example flow diagram 400 of operations within an NFV environment, according to various embodiments.
In 402, an identity manager (such as identity manager 174 of Figure 1, identity manager 204 of Figure 2, and/or identity manager 310 of Figure 3) may assign a globally unique identifier to one or more NFVI components (such as VNF 164, VNF-N 162, virtualized switch/router 160, virtualized security function 158, OS 140, UEFI BIOS 138, platform P1 134, platform Pn 118, FPGA 132, cores 120, I/O and/or NIC 122, CSME 126, and/or busses and/or interconnects 128). In 414, the identity manager may provide the globally unique identifiers and/or indications of the assignment of the globally unique identifiers with the one or more NFVI components for storage in a secure logging service 412. The secure logging service 412 may include a security controller (such as security controller 172 of Figure 1, security controller 206 of Figure 2, and/or security controller 314 of Figure 3) that may store the globally unique identifiers and/or indications of the assignment of the globally unique identifiers with the one or more NFVI components. In 404, an NFVI (such as NFVI 114 of Figure 1) may communicate with the identity manager to retrieve component identities for the one or more NFVI components. A RoT of the NFVI may cryptographically bind the component identities to the corresponding NFVI components. In 416, the cryptographic binding of the component identities to the corresponding NFVI components may be provided to the secure logging service 412 for storage. In 406, an OS service (such as OS 140 of Figure 1) may communicate with the identity manager regarding events that occur during VNF life cycles, such as at instantiation, activation, deletion, migration, or some combination thereof. In response to the communication, the identity manager may assign unique identifiers to each of the VNF instances and may communicate with the OS service to embed the unique identifiers in the corresponding VNF descriptors. In 418, the assigned unique identifiers may be provided to the secure logging service 412 for storage. In 408, a VNFM (such as the VNFM 170 of Figure 1 and/or the VNFM 316 of Figure 3) and/or a VIM (such as the VIM 176 of Figure 1 and/or the VIM 304 of Figure 3) may transmit a VNF image and/or VNF descriptor for a unique VNF instance to the platform on which the VNF is to be instantiated. The VNFM and/or VIM may communicate with a RoT of the platform to verify authorization for instantiation of the VNF on the platform. In 420, the platform, the VNFM, and/or the VIM may provide information regarding the transmission of the VNF image and/or the VNF descriptor to the secure logging service 412 for storage. In 410, an OS of the NFVI (such as OS 140 of Figure 1) may deliver the unique identifier for a VNF instance to an instantiation 'command line' parameter, which may be sent into the VNF instance. The RoT may deliver a signed/attested 'command line' parameter set into the VNF instance. The VNF instance may utilize the signed/attested 'command line' parameter set to register the VNF instance with the security controller, as the sketch below illustrates.
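One way to picture the signed/attested 'command line' parameter set is as a message authentication code computed over the parameters by the RoT and checked by the security controller before the VNF instance is admitted to the registry. The sketch below is a simplification under that assumption; the key handling, the HMAC construction, and every name in it are invented for illustration and are not taken from the disclosure.

    import hashlib
    import hmac
    import json

    ROT_KEY = b"example-rot-key"  # stand-in for a key held by the RoT/TEE

    def sign_parameters(params):
        # RoT side: sign a canonical encoding of the parameter set.
        encoded = json.dumps(params, sort_keys=True).encode("utf-8")
        return hmac.new(ROT_KEY, encoded, hashlib.sha256).hexdigest()

    def register_vnf_instance(registry, params, signature):
        # Security-controller side: verify the signature, then register.
        if not hmac.compare_digest(sign_parameters(params), signature):
            return False  # reject a tampered or unattested parameter set
        registry[params["vnf_uid"]] = params
        return True

    # The OS delivers the signed parameter set into the VNF instance,
    # which presents it when registering with the security controller.
    params = {"vnf_uid": "vnf-166", "platform_id": "platform-130"}
    registry = {}
    assert register_vnf_instance(registry, params, sign_parameters(params))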
In 422, the signed/attested 'command line' parameter set may be provided to the secure logging service 412 for storage. Figure 5 illustrates an example computing device 500 that may employ the apparatuses and/or methods described herein (e.g., the NFV environment 100 (including the tenant management portion 102, the operation management portion 106, and/or the NFV infrastructure portion 104), the VNFC 202, the identity manager 204, the security controller 206, the orchestrator 302, the VIM 304, the nova agent NFVI 306, the OS/VMM 308, the identity manager 310, the RoT/TEE 312, the security controller 314, and/or the VNFM 316), in accordance with various embodiments. As shown, computing device 500 may include a number of components, such as one or more processor(s) 504 (one shown) and at least one communication chip 506. In various embodiments, the one or more processor(s) 504 each may include one or more processor cores. In various embodiments, the at least one communication chip 506 may be physically and electrically coupled to the one or more processor(s) 504. In further implementations, the communication chip 506 may be part of the one or more processor(s) 504. In various embodiments, computing device 500 may include printed circuit board (PCB) 502. For these embodiments, the one or more processor(s) 504 and communication chip 506 may be disposed thereon. In alternate embodiments, the various components may be coupled without the employment of PCB 502. Depending on its applications, computing device 500 may include other components that may or may not be physically and electrically coupled to the PCB 502. These other components include, but are not limited to, memory controller 526, volatile memory (e.g., dynamic random access memory (DRAM) 520), non-volatile memory such as read only memory (ROM) 524, flash memory 522, storage device 554 (e.g., a hard-disk drive (HDD)), an I/O controller 541, a digital signal processor (not shown), a crypto processor (not shown), a graphics processor 530, one or more antennas 528, a display (not shown), a touch screen display 532, a touch screen controller 546, a battery 536, an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device 540, a compass 542, an accelerometer (not shown), a gyroscope (not shown), a speaker 550, a camera 552, and a mass storage device (such as a hard disk drive, a solid state drive, compact disk (CD), digital versatile disk (DVD)) (not shown), and so forth. In some embodiments, the one or more processor(s) 504, flash memory 522, and/or storage device 554 may include associated firmware (not shown) storing programming instructions configured to enable computing device 500, in response to execution of the programming instructions by one or more processor(s) 504, to practice all or selected aspects of the methods described herein. In various embodiments, these aspects may additionally or alternatively be implemented using hardware separate from the one or more processor(s) 504, flash memory 522, or storage device 554. The communication chips 506 may enable wired and/or wireless communications for the transfer of data to and from the computing device 500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium.
The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to IEEE 802.20, Long Term Evolution (LTE), LTE Advanced (LTE-A), General Packet Radio Service (GPRS), Evolution Data Optimized (Ev-DO), Evolved High Speed Packet Access (HSPA+), Evolved High Speed Downlink Packet Access (HSDPA+), Evolved High Speed Uplink Packet Access (HSUPA+), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 500 may include a plurality of communication chips 506. For instance, a first communication chip 506 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 506 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. In various implementations, the computing device 500 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a computing tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit (e.g., a gaming console or automotive entertainment unit), a digital camera, an appliance, a portable music player, or a digital video recorder. In further implementations, the computing device 500 may be any other electronic device that processes data. Figure 6 illustrates an example computer-readable storage medium that may employ the apparatuses and/or methods described herein. As will be appreciated by one skilled in the art, the present disclosure may be embodied as methods or computer program products. Accordingly, the present disclosure, in addition to being embodied in hardware as earlier described, may take the form of an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to as a "circuit," "module," or "system." Furthermore, the present disclosure may take the form of a computer program product embodied in any tangible or non-transitory medium of expression having computer-usable program code embodied in the medium. Figure 6 illustrates an example computer-readable non-transitory storage medium that may be suitable for use to store instructions that cause an apparatus, in response to execution of the instructions by the apparatus, to practice selected aspects of the present disclosure. As shown, non-transitory computer-readable storage medium 602 may include a number of programming instructions 604.
Programming instructions 604 may be configured to enable a device, e.g., computer 500, in response to execution of the programming instructions, to implement (aspects of) the NFV environment 100 (including the tenant management portion 102, the operation management portion 106, and/or the NFV infrastructure portion 104), the VNFC 202, the identity manager 204, the security controller 206, the orchestrator 302, the VIM 304, the nova agent NFVI 306, the OS/VMM 308, the identity manager 310, the RoT/TEE 312, the security controller 314, and/or the VNFM 316. In alternate embodiments, programming instructions 604 may be disposed on multiple computer-readable non-transitory storage media 602 instead. In still other embodiments, programming instructions 604 may be disposed on computer-readable transitory storage media 602, such as signals. Any combination of one or more computer usable or computer readable medium(s) may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc. Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. Example 1 may include an apparatus for identity management of virtualized entities comprising a memory device with instructions stored thereon, and one or more processors that, in response to execution of the instructions stored on the memory device, are to detect an instantiation of a virtualized network function component (VNFC), obtain identifiers for components of a platform based on the detected instantiation, the platform to implement the VNFC, generate a unique identifier based on the identifiers for the components of the platform, and assign the unique identifier to the VNFC.
Example 2 may include the apparatus of example 1, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further perform a hash operation on the identifiers to generate the unique identifier. Example 3 may include the apparatus of any of the examples 1 and 2, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further obtain a globally unique identifier associated with the VNFC, and perform a hash operation on the identifiers and the globally unique identifier to generate the unique identifier. Example 4 may include the apparatus of any of the examples 1-3, wherein the identifiers for the components of the platform include one or more identifiers selected from the group consisting of a rack identifier, a root of trust identifier, a platform identifier, a basic input/output system identifier, an operating system identifier, and a virtual machine manager identifier. Example 5 may include the apparatus of any of the examples 1-4, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further obtain a tenant identifier for a tenant that requests use of the VNFC, and associate the unique identifier with the tenant identifier. Example 6 may include the apparatus of example 5, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further store, in a log on the memory device, the tenant identifier with the associated unique identifier. Example 7 may include the apparatus of any of the examples 1-6, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further obtain a tenant identifier for a tenant that requests use of the VNFC, determine that the tenant is not authorized to utilize the VNFC based on the tenant identifier, and prevent use of the VNFC by the tenant based on the determination that the tenant is not authorized. Example 8 may include the apparatus of any of the examples 1-7, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further receive an instantiation request of a virtualized network function (VNF) from a cloud operating system (OS) service, determine that the VNF is to utilize the VNFC based on the instantiation request, and transmit, to the cloud OS service, the unique identifier assigned to the VNFC for association with a root of trust (RoT) associated with the VNF.
Example 9 may include the apparatus of example 8, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further generate a second unique identifier based on the unique identifier assigned to the VNFC, assign the second unique identifier to the VNF, and register the second unique identifier assigned to the VNF. Example 10 may include the apparatus of any of the examples 8 and 9, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further retrieve a secure timestamp based on the instantiation request, associate the secure timestamp with the unique identifier assigned to the VNFC, and transmit, to the cloud OS service, the secure timestamp with the unique identifier for association with the RoT. Example 11 may include the apparatus of any of the examples 9 and 10, wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further log operations, performed by the VNF, associated with the second unique identifier, and store the logged operations on the memory device. Example 12 may include the apparatus of any of the examples 1-11, wherein the unique identifier is assigned to the VNFC by a root of trust (RoT) of the apparatus, and wherein the one or more processors, in response to execution of the instructions stored on the memory device, are to further log the unique identifier in the RoT for management by the RoT. Example 13 may include a method for virtualized entity identity management, comprising obtaining identifiers for components of a platform in response to instantiation of a virtualized network function component (VNFC), the platform to implement the VNFC, generating a unique identifier for the VNFC based on the identifiers for the components of the platform, and assigning the unique identifier to the VNFC. Example 14 may include the method of example 13, further comprising performing a hash operation on the identifiers for the components of the platform, wherein the unique identifier is set to a result of the hash operation. Example 15 may include the method of any of the examples 13 and 14, further comprising obtaining a globally unique identifier associated with the VNFC and performing a hash operation on the identifiers for the components of the platform and the globally unique identifier associated with the VNFC, wherein the unique identifier is set to a result of the hash operation. Example 16 may include the method of any of the examples 13-15, wherein the identifiers for the components of the platform include one or more identifiers selected from the group consisting of a rack identifier, a root of trust identifier, a platform identifier, a basic input/output system identifier, an operating system identifier and a virtual machine manager identifier. Example 17 may include the method of any of the examples 13-16, further comprising obtaining a tenant identifier for a tenant that requests use of the VNFC, and associating the unique identifier with the tenant identifier. Example 18 may include the method of example 17, further comprising storing, in a log, the tenant identifier with the associated unique identifier. Example 19 may include the method of any of the examples 13-18, further comprising obtaining a tenant identifier for a tenant that requests use of the VNFC, determining that the tenant is not authorized to utilize the virtualized network function based on the tenant identifier, and preventing use
of the VNFC by the tenant based on the determination that the tenant is not authorized. Example 20 may include the method of any of the examples 13-19, further comprising receiving an instantiation request of a virtualized network function (VNF) from a cloud operating system (OS) service, determining that the VNF is to utilize the VNFC based on the instantiation request, and transmitting, to the cloud OS service, the unique identifier assigned to the VNFC for association with a root of trust associated with the VNF. Example 21 may include the method of example 20, further comprising generating a second unique identifier based on the unique identifier assigned to the VNFC, assigning the second unique identifier to the VNF, and registering the second unique identifier assigned to the VNF. Example 22 may include the method of example 21, further comprising logging operations, performed by the VNF, associated with the second unique identifier. Example 23 may include the method of any of the examples 13-22, further comprising retrieving a secure timestamp based on the instantiation of the VNFC, associating the secure timestamp with the unique identifier assigned to the VNFC, and transmitting, to a network function virtualization (NFV) infrastructure, the secure timestamp with the unique identifier for association with a root of trust (RoT) of the platform. Example 24 may include the method of any of the examples 13-23, wherein the unique identifier is generated by a root of trust (RoT) of the platform, and wherein the method further comprises logging the unique identifier in the RoT for management by the RoT. Example 25 may include one or more computer-readable media having instructions stored thereon, wherein the instructions, in response to execution by a device, cause the device to process an instantiation request from a network function virtualization (NFV) infrastructure for a virtualized network function component (VNFC), obtain identifiers for components of a platform to implement the VNFC based on the instantiation request, generate a unique identifier for the VNFC based on the identifiers for the components of the platform, and transmit the unique identifier to the NFV infrastructure for association with the VNFC. Example 26 may include the one or more computer-readable media of example 25, wherein the instructions, in response to execution by the device, cause the device to further perform a hash operation on the identifiers for the components of the platform, wherein the unique identifier is set to a result of the hash operation. Example 27 may include the one or more computer-readable media of any of the examples 25 and 26, wherein the instructions, in response to execution by the device, cause the device to further extract, from the instantiation request, a globally unique identifier associated with the VNFC, and perform a hash operation on the identifiers for the components of the platform and the globally unique identifier associated with the VNFC, wherein the unique identifier is set to a result of the hash operation. Example 28 may include the one or more computer-readable media of any of the examples 25-27, wherein the identifiers for the components of the platform include one or more identifiers selected from the group consisting of a rack identifier, a root of trust identifier, a platform identifier, a basic input/output system identifier, an operating system identifier and a virtual machine manager identifier. Example 29 may include the one or more
computer-readable media of any of the examples 25-28, wherein the instructions, in response to execution by the device, cause the device to further obtain a tenant identifier for a tenant that requests use of the VNFC, and associate the unique identifier with the tenant identifier.Example 30 may include the one or more computer-readable media of example 29, wherein the instructions, in response to execution by the device, cause the device to further store, in a log, the tenant identifier with the associated unique identifier.Example 31 may include the one or more computer-readable media of any of the examples 25-30, wherein the instructions, in response to execution by the device, cause the device to further obtain a tenant identifier for a tenant that requests use of the VNFC, determine that the tenant is not authorized to utilize the virtualized network function based on the tenant identifier, and prevent use of the VNFC by the tenant based on the determination that the tenant is not authorized.Example 32 may include the one or more computer-readable media of any of the examples 25-31, wherein the instructions, in response to execution by the device, cause the device to further receive an instantiation request of a virtualized network function (VNF) from a cloud operating system (OS) service, determine that the VNF is to utilize the VNFC based on the instantiation request, and transmit, to the cloud OS service, the unique identifier assigned to the VNFC for association with a root of trust (RoT) associated with the VNF.Example 33 may include the one or more computer-readable media of example 32, wherein the instructions, in response to execution by the device, cause the device to further generate a second unique identifier based on the unique identifier assigned to the VNFC, assign the second unique identifier to the VNF, and register the second unique identifier assigned to the VNF.Example 34 may include the one or more computer-readable media of example 33, wherein the instructions, in response to execution by the device, cause the device to further log operations, performed by the VNF, associated with the second unique identifier.Example 35 may include the one or more computer-readable media of any of the examples 25-34, wherein the instructions, in response to execution by the device, cause the device to further retrieve a secure timestamp based on the instantiation request, associate the secure timestamp with the unique identifier assigned to the VNFC, and transmit, to the NFV infrastructure, the secure timestamp with the unique identifier for association with a root of trust (RoT) of the platform.Example 36 may include the one or more computer-readable media of any of the examples 25-35, wherein the unique identifier is generated by a root of trust (RoT) of the platform, and wherein the instructions, in response to execution by the device, cause the device to further log the unique identifier in the RoT for management by the RoT.Example 37 may include an apparatus for virtualized entity identity management, comprising means for obtaining identifiers for components of a platform in response to instantiation of a virtualized network function component (VNFC), the platform to implement the VNFC, means for generating a unique identifier for the VNFC based on the identifiers for the components of the platform, and means for assigning the unique identifier to the VNFC.Example 38 may include the apparatus of example 37, further comprising means for performing a hash operation on the 
identifiers for the components of the platform, wherein the unique identifier is set to a result of the hash operation. Example 39 may include the apparatus of any of the examples 37 and 38, further comprising means for obtaining a globally unique identifier associated with the VNFC, and means for performing a hash operation on the identifiers for the components of the platform and the globally unique identifier associated with the VNFC, wherein the unique identifier is set to a result of the hash operation. Example 40 may include the apparatus of any of the examples 37-39, wherein the identifiers for the components of the platform include one or more identifiers selected from the group consisting of a rack identifier, a root of trust identifier, a platform identifier, a basic input/output system identifier, an operating system identifier and a virtual machine manager identifier. Example 41 may include the apparatus of any of the examples 37-40, further comprising means for obtaining a tenant identifier for a tenant that requests use of the VNFC, and means for associating the unique identifier with the tenant identifier. Example 42 may include the apparatus of example 41, further comprising means for storing, in a log, the tenant identifier with the associated unique identifier. Example 43 may include the apparatus of any of the examples 37-42, further comprising means for obtaining a tenant identifier for a tenant that requests use of the VNFC, means for determining that the tenant is not authorized to utilize the VNFC based on the tenant identifier, and means for preventing use of the VNFC by the tenant based on the determination that the tenant is not authorized. Example 44 may include the apparatus of any of the examples 37-43, further comprising means for receiving an instantiation request of a virtualized network function (VNF) from a cloud operating system (OS) service, means for determining that the VNF is to utilize the VNFC based on the instantiation request, and means for transmitting, to the cloud OS service, the unique identifier assigned to the VNFC for association with a root of trust (RoT) associated with the VNF. Example 45 may include the apparatus of example 44, further comprising means for generating a second unique identifier based on the unique identifier assigned to the VNFC, means for assigning the second unique identifier to the VNF, and means for registering the second unique identifier assigned to the VNF. Example 46 may include the apparatus of example 45, further comprising means for logging operations, performed by the VNF, associated with the second unique identifier. Example 47 may include the apparatus of any of the examples 44-46, further comprising means for retrieving a secure timestamp based on the instantiation request, means for associating the secure timestamp with the unique identifier assigned to the VNFC, and means for transmitting, to the cloud OS service, the secure timestamp with the unique identifier for association with the RoT. Example 48 may include the apparatus of any of the examples 37-47, wherein the unique identifier is generated by a root of trust (RoT) of the platform, and wherein the apparatus further comprises means for logging the unique identifier in the RoT for management by the RoT. It will be apparent to those skilled in the art that various modifications and variations can be made in the disclosed embodiments of the disclosed device and associated methods without departing from the spirit or scope of the disclosure.
Thus, it is intended that the present disclosure covers the modifications and variations of the embodiments disclosed above provided that the modifications and variations come within the scope of any claims and their equivalents. |
Memory cells utilizing dielectric charge carrier trapping sites formed in trenches provide for non-volatile storage of data. The memory cells of the various embodiments have two control gates. One control gate is formed adjacent the trench containing the charge carrier trap. The other control gate has a portion formed over the trench, and, for certain embodiments, this control gate may extend into the trench. The charge carrier trapping sites may be discrete formations on a sidewall of a trench, a continuous layer extending from one sidewall to the other, or plugs extending between sidewalls. |
What is claimed is: 1. A memory cell, comprising: a first control gate overlying a semiconductor substrate; a second control gate having a first portion overlying the first control gate, a second portion adjacent a first side of the first control gate and a third portion adjacent a second side of the first control gate; a first charge carrier trapping site in the substrate and adjacent the first side of the first control gate; a second charge carrier trapping site in the substrate and adjacent the second side of the first control gate; and a conductively doped portion of the substrate extending between the first and second charge carrier trapping sites. 2. The memory cell of claim 1, wherein the second and third portions of the second control gate extend below a surface of the substrate. 3. The memory cell of claim 1, wherein the charge carrier trapping sites are formed in trenches in the substrate. 4. The memory cell of claim 3, wherein the charge carrier trapping sites extend only along a portion of one sidewall of their respective trenches. 5. The memory cell of claim 4, wherein the second and third portions of the second control gate extend to a level adjacent the charge carrier trapping sites. 6. The memory cell of claim 3, wherein the charge carrier trapping sites extend as a continuous layer from one sidewall of their respective trenches to the other sidewall of their respective trenches. 7. The memory cell of claim 3, wherein the charge carrier trapping sites extend across a bottom of their respective trenches from one sidewall to the other. 8. The memory cell of claim 3, wherein the charge carrier trapping sites extend between sidewalls of their respective trenches. 9. The memory cell of claim 8, wherein upper surfaces of the charge carrier trapping sites are recessed below an upper surface of the substrate. 10. The memory cell of claim 1, wherein the memory cell is adjacent a second memory cell sharing the same second control gate but a different first control gate. 11. A memory device, comprising: a plurality of bit lines; at least one source line; a plurality of first control gates; a plurality of second control gates intersecting with the first control gates; a plurality of strings of memory cells coupled in series between a source line and a bit line, with each memory cell formed at an intersection of a first control gate and a second control gate; and circuitry for control and access of the memory cells; wherein the memory cells comprise charge trapping regions adjacent first and second sides of their respective first control gates. 12. The memory device of claim 11, wherein the charge trapping regions are formed in trenches interposed between adjacent memory cells. 13. The memory device of claim 12, wherein the trenches are partially filled with a first dielectric material and the charge trapping regions comprise second dielectric material formed on sidewalls of the trenches above the first dielectric material. 14. The memory device of claim 13, wherein a portion of a second control gate is interposed between a charge trapping region on a first sidewall of a trench and a charge trapping region on the second sidewall of that trench. 15. The memory device of claim 13, wherein a charge trapping region on a first sidewall of a trench extends to the second sidewall of that trench. 16. The memory device of claim 12, wherein the charge trapping regions line sidewalls and bottoms of the trenches. 17.
The memory device of claim 12, wherein the charge carrier trapping regions extend as a continuous layer from one sidewall of their respective trenches to the other sidewall of their respective trenches. 18. A method of forming an array of memory cells, comprising: forming trenches in a semiconductor substrate; forming charge trapping regions in the trenches; forming conductively doped regions between the trenches; forming a gate dielectric on a surface of the substrate between the trenches; forming first control gates overlying the gate dielectric and substantially parallel to the trenches; forming an intergate dielectric overlying the first control gates; and forming second control gates overlying the intergate dielectric and having at least a portion overlying the trenches, wherein the second control gates are substantially orthogonal to the first control gates. 19. The method of claim 18, wherein forming charge trapping regions in the trenches comprises forming charge trapping regions containing silicon nitride. 20. The method of claim 18, wherein forming charge trapping regions in the trenches comprises forming charge trapping regions of a dielectric material capable of storing charge and having a dielectric constant of greater than a dielectric constant of silicon nitride. 21. The method of claim 18, wherein forming charge trapping regions in the trenches comprises forming charge trapping regions of a dielectric material capable of storing charge and having a dielectric constant of greater than about 10. 22. The method of claim 18, wherein forming charge trapping regions in the trenches comprises forming charge trapping regions as discrete regions on opposing sidewalls of the trenches. 23. The method of claim 18, wherein forming charge trapping regions in the trenches comprises forming a charge trapping region as a continuous region extending from one sidewall of a trench to an opposing sidewall of that trench. 24. The method of claim 18, wherein forming charge trapping regions in the trenches comprises forming charge trapping regions as continuous regions extending around sidewalls and bottoms of the trenches. 25. The method of claim 18, wherein forming charge trapping regions in the trenches comprises forming charge trapping regions as continuous regions extending across the trenches from one sidewall to the other. 26. A method of programming a target memory cell in an array of memory cells, comprising: applying a first potential to a first control gate overlying an active area of the target memory cell; applying a second potential to a second control gate having a first portion overlying the first control gate and a second portion adjacent to the first control gate; and injecting charge into at least one charge trapping region of the target memory cell in response, at least in part, to the first potential and the second potential. 27. The method of claim 26, wherein applying a first potential to a first control gate comprises applying a potential to the first control gate approximately equal to a ground potential. 28. The method of claim 27, wherein applying a first potential to a first control gate comprises applying a positive potential to the first control gate. 29. The method of claim 27, wherein applying a second potential to a second control gate comprises applying a potential to the second control gate sufficient to initiate tunneling of charge carriers. 30. 
The method of claim 26, wherein applying a second potential to a second control gate comprises applying a second potential to a second control gate extending substantially orthogonal to the first control gate. 31. The method of claim 26, wherein applying a second potential to a second control gate comprises applying a second potential to a second control gate having a second portion adjacent to the first control gate and extending below a level of the first control gate. 32. The method of claim 26, further comprising: applying a third potential to another second control gate substantially parallel to the second control gate associated with the target memory cell. 33. The method of claim 32, wherein applying a third potential to another second control gate substantially parallel to the second control gate associated with the target memory cell comprises applying a potential to the other second control gate that is insufficient to initiate tunneling of memory cells associated with the other second control gate. 34. A method of reading a target memory cell in an array of memory cells, comprising: applying a first potential to a first control gate overlying an active area of the target memory cell; applying a second potential to a second control gate having a first portion overlying the first control gate and a second portion adjacent to the first control gate; and sensing a conductance of the target memory cell while applying the first potential and the second potential, the conductance indicative of a data value of the target memory cell. 35. The method of claim 34, wherein applying a first potential to a first control gate comprises applying a potential to the first control gate approximately equal to a ground potential. 36. The method of claim 35, wherein applying a first potential to a first control gate comprises applying a positive potential to the first control gate. 37. The method of claim 35, wherein applying a second potential to a second control gate comprises applying a potential to the second control gate sufficient to overcome a charge stored in one or more charge carrier trapping sites of the target memory cell if the target memory cell has a first data value and insufficient to overcome the charge stored in the one or more charge carrier trapping sites of the target memory cell if the target memory cell has a second data value. 38. The method of claim 34, wherein applying a second potential to a second control gate comprises applying a second potential to a second control gate extending substantially orthogonal to the first control gate. 39. The method of claim 34, wherein applying a second potential to a second control gate comprises applying a second potential to a second control gate having a second portion adjacent to the first control gate and extending below a level of the first control gate. 40. The method of claim 34, further comprising: applying a third potential to another second control gate substantially parallel to the second control gate associated with the target memory cell. 41. The method of claim 40, wherein applying a third potential to another second control gate substantially parallel to the second control gate associated with the target memory cell comprises applying a potential to the other second control gate that is sufficient to overcome a charge stored in one or more charge carrier trapping sites of memory cells associated with the other second control gate regardless of data values of those memory cells. |
TRENCH MEMORY STRUCTURES AND OPERATION TECHNICAL FIELD OF THE INVENTION The present invention relates generally to semiconductor memory devices, and in particular, the present invention relates to non-volatile memory device architectures having charge-carrier trap sites in trenches. BACKGROUND OF THE INVENTION Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), and flash memory. Flash memory devices have developed into a popular source of non-volatile memory for a wide range of electronic applications. Flash memory devices typically use a one-transistor memory cell that allows for high memory densities, high reliability, and low power consumption. Changes in threshold voltage of the cells, through programming of charge storage or trapping layers or other physical phenomena, determine the data value of each cell. Common uses for flash memory and other non-volatile memory include personal computers, personal digital assistants (PDAs), digital cameras, digital media players, digital recorders, games, appliances, vehicles, wireless devices, cellular telephones, and removable memory modules, and the uses for non-volatile memory continue to expand. Flash memory typically utilizes one of two basic architectures known as NOR flash and NAND flash. The designation is derived from the logic used to read the devices. In NOR flash architecture, a column of memory cells is coupled in parallel with each memory cell coupled to a bit line. In NAND flash architecture, a column of memory cells is coupled in series with only the first memory cell of the column coupled to a bit line. As semiconductor memory devices continue to scale down to smaller and smaller architectures, problems arise. As just a few examples related to typical NAND floating-gate structures, charge retention becomes increasingly difficult as dielectric layers become thinner, coupling from neighboring floating gates increases as separation between floating gates is reduced, and the likelihood of disturbing the charge of a floating gate during the programming or reading of a neighboring cell increases for similar reasons. Similar problems arise with structures that rely on charge trapping sites, such as SONOS or NROM memory cells. For example, charge retention becomes increasingly difficult as the volume of the carrier storage nodes decreases, and programming and read disturbs increase. Other problems include simply fabricating structures at ever-decreasing gate lengths. However, cells that rely on charge trapping sites do not exhibit interference among floating gates of neighboring cells. For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for alternative memory structures and their operation. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a functional block diagram of an electronic system having at least one memory device in accordance with an embodiment of the invention. Figure 2 is a top view of a portion of a memory array showing array architecture as might be used with an embodiment of the invention. Figures 3A-3B are cross-sectional views of the memory array of Figure 2 in accordance with an embodiment of the invention. 
Figures 4A-4C are cross-sectional views of memory cells in accordance with embodiments of the invention. Figures 5A-5J are cross-sectional views of a portion of a memory array at various stages of fabrication in accordance with an embodiment of the invention. Figure 6 is a top view of a portion of a memory array showing array architecture as might be used with another embodiment of the invention. Figures 7A-7B are cross-sectional views of the memory array of Figure 6 in accordance with an embodiment of the invention. Figure 8 is a cross-sectional view of a memory cell in accordance with an embodiment of the invention. Figure 9 is a functional block diagram of a memory module having at least one memory device in accordance with an embodiment of the invention. DETAILED DESCRIPTION OF THE INVENTION In the following detailed description of the present embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that process, electrical or mechanical changes may be made without departing from the scope of the present invention. The terms wafer and substrate used in the following description include any base semiconductor structure. Both are to be understood as including silicon-on-sapphire (SOS) technology, silicon-on-insulator (SOI) technology, thin film transistor (TFT) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor, as well as other semiconductor structures well known to one skilled in the art. Furthermore, when reference is made to a wafer or substrate in the following description, previous process steps may have been utilized to form regions/junctions in the base semiconductor structure. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof. The memory devices of the various embodiments include memory cells utilizing dielectric charge carrier trapping sites formed in trenches. The memory cells of the various embodiments have two control gates. One control gate is formed adjacent to the trench containing the charge carrier trap. The other control gate has a portion formed over the trench, and, for certain embodiments, this control gate may extend into the trench. The charge carrier trapping sites may be discrete formations on a sidewall of a trench, a continuous layer extending from one sidewall to the other, or plugs extending between sidewalls. The two control gates of the various embodiments mitigate disturb conditions during programming or reading of the memory cells. Figure 1 is a simplified block diagram of a NAND flash memory device 100 coupled to a processor 130 as part of an electronic system, according to an embodiment of the invention. Some examples of electronic systems include personal computers, personal digital assistants (PDAs), digital cameras, digital media players, digital recorders, games, appliances, vehicles, wireless devices, cellular telephones and the like. The processor 130 may be a memory controller or other external processor. Memory device 100 includes an array of memory cells 104 arranged in rows and columns. 
The memory cells of the array 104 utilize dual control gate structures in accordance with embodiments of the invention. Row decode circuitry 108 and column decode circuitry 110 are provided to decode address signals. Address signals are received and decoded to access memory array 104. Memory device 100 also includes input/output (I/O) control circuitry 112 to manage input of commands, addresses and data to the memory device 100 as well as output of data and status information from the memory device 100. An address register 114 is coupled between I/O control circuitry 112 and row decode circuitry 108 and column decode circuitry 110 to latch the address signals prior to decoding. A command register 124 is coupled between I/O control circuitry 112 and control logic 116 to latch incoming commands. Control logic 116 controls access to the memory array 104 in response to the commands and generates status information for the external processor 130. The control logic 116 is coupled to row decode circuitry 108 and column decode circuitry 110 to control the row decode circuitry 108 and column decode circuitry 110 in response to the addresses. Control logic 116 is also coupled to a cache register 118. Cache register 118 latches data, either incoming or outgoing, as directed by control logic 116 to temporarily store data while the memory array 104 is busy writing or reading, respectively, other data. During a write operation, data is passed from the cache register 118 to data register 120 for transfer to the memory array 104; then new data is latched in the cache register 118 from the I/O control circuitry 112. During a read operation, data is passed from the cache register 118 to the I/O control circuitry 112 for output to the external processor 130; then new data is passed from the data register 120 to the cache register 118. A status register 122 is coupled between I/O control circuitry 112 and control logic 116 to latch the status information for output to the processor 130. Memory device 100 receives control signals at control logic 116 from processor 130 over a control link 132. The control signals may include a chip enable CE#, a command latch enable CLE, an address latch enable ALE, and a write enable WE# in accordance with the present invention. Memory device 100 receives command signals (or commands), address signals (or addresses), and data signals (or data) from processor 130 over a multiplexed input/output (I/O) bus 134 and outputs data to processor 130 over I/O bus 134. Specifically, the commands are received over input/output (I/O) pins [0:7] of I/O bus 134 at I/O control circuitry 112 and are written into command register 124. The addresses are received over input/output (I/O) pins [0:7] of bus 134 at I/O control circuitry 112 and are written into address register 114. The data are received over input/output (I/O) pins [0:7] for an 8-bit device or input/output (I/O) pins [0:15] for a 16-bit device at I/O control circuitry 112 and are written into cache register 118. The data are subsequently written into data register 120 for programming memory array 104. For another embodiment, cache register 118 may be omitted, and the data are written directly into data register 120. Data are also output over input/output (I/O) pins [0:7] for an 8-bit device or input/output (I/O) pins [0:15] for a 16-bit device. 
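The cache register/data register arrangement just described is, in effect, a two-stage pipeline: one register faces the I/O circuitry while the other faces the (possibly busy) array. The following minimal sketch models only that data movement; it is written in Python, and all names (RegisterPipeline, read_pages, write_pages) are illustrative assumptions that do not appear in the patent.

    # Toy model of the cache/data register double-buffering described above.
    # It models data hand-off only, not timing, status or control signals.

    class RegisterPipeline:
        def __init__(self, array):
            self.array = array          # backing array: page number -> page data
            self.cache_register = None  # faces the I/O control circuitry
            self.data_register = None   # faces the memory array

        def read_pages(self, pages):
            """Stream pages out: the cache register feeds I/O while the
            data register fetches the next page from the busy array."""
            out = []
            self.data_register = self.array[pages[0]]     # initial array read
            for nxt in pages[1:]:
                self.cache_register = self.data_register  # hand off to cache
                self.data_register = self.array[nxt]      # array reads next page
                out.append(self.cache_register)           # I/O output in parallel
            out.append(self.data_register)
            return out

        def write_pages(self, pages, blocks):
            """Stream pages in: incoming data lands in the cache register,
            then moves to the data register for programming into the array."""
            for page, data in zip(pages, blocks):
                self.cache_register = data                # latch incoming data
                self.data_register = self.cache_register  # pass to data register
                self.array[page] = self.data_register     # array programs page

    pipeline = RegisterPipeline({0: b"page0", 1: b"page1", 2: b"page2"})
    print(pipeline.read_pages([0, 1, 2]))                 # b'page0' ... b'page2'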
It will be appreciated by those skilled in the art that additional circuitry and control signals can be provided, and that the memory device of Figure 1 has been simplified to help focus on the invention. Additionally, while the memory device of Figure 1 has been described in accordance with popular conventions for receipt and output of the various signals, it is noted that the various embodiments are not limited by the specific signals and I/O configurations described unless expressly noted herein. Figure 2 is a top view of a portion of a memory array 200 showing array architecture as might be used with an embodiment of the invention. As shown in Figure 2, the memory array 200 includes memory cells formed at the intersections of first control gates 202 and second control gates 204. The first control gates 202 are formed over active areas of a semiconductor substrate and may be termed active gates. Portions of the second control gates 204 are formed over isolation trenches of the semiconductor substrate and may be termed trench gates. For ease of addressing in the digital environment, the number of first control gates 202 and the number of second control gates 204 are generally each some power of two. The first control gates 202 may further be electrically coupled, such as by a first control gate select line 203. Although the memory array 200 depicts an 8x8 block of memory cells having eight first control gates 202 and eight second control gates 204, a block may contain more or fewer memory cells. As an example, a block might contain 32 second control gates 204 by 1,024 first control gates 202. However, the embodiments will be described with reference to a relatively small block of memory cells in order to show the components in more detail. The first control gates 202 overlie diffusion areas 206, diffusion areas 209 and channel implant areas (not shown in Figure 2). The diffusion areas 206 are conductively doped regions coupled to a bit line contact 208 on one end of the first control gates 202 and the diffusion areas 209 are conductively doped regions coupled to a ground node, such as a source line 210, on the other end of the first control gates 202. Figures 3A-3B are cross-sectional views of the memory array 200 in accordance with an embodiment of the invention. Figure 3A is a cross-sectional view of the memory array 200 taken along a second control gate 204 while Figure 3B is a cross-sectional view of the memory array 200 taken along a first control gate 202. Direction of current flow during a read operation is into the page for Figure 3A and parallel to the page for Figure 3B. For the embodiment depicted in Figure 3A, the second control gate 204 extends into isolation trenches 214 formed into a semiconductor substrate 220. Semiconductor substrate 220 may be, for example, a p-type monocrystalline silicon substrate. The isolation trenches 214 are at least partially filled with a dielectric material 224. Dielectric material 224 may include a variety of dielectric materials, e.g., silicon dioxide, doped silicate glass or other dielectric material generally resistant to storing charge carriers. Charge carrier trapping sites 222 are formed on sidewalls of the isolation trenches 214. Charge carrier trapping sites 222 are formed of a material capable of trapping charge carriers. One example includes silicon nitride, or composites such as ONO (oxide-nitride-oxide). 
However, smaller dimensions and ease of fabrication may be facilitated through the use of trapping materials having dielectric constants greater than that of silicon nitride, which has a dielectric constant, k, of about 3.9. Some examples of higher-k dielectrics include HfO2, ZrO2, ZrSnTiO, ZrON, ZrAlO, ZrTiO4, Al2O3, La2O3, LaAlO3, HfAlO3, HfSiON, Ta2O5, TiO2, Pr2O3, HfO2, TiAlOx, LaAlO3, La2Hf2O7, and HfTaO. For some embodiments, the charge trapping materials have a dielectric constant of approximately 10 or higher. To further aid in isolation of individual memory cells, a channel-stop implant 226 may be formed in the substrate 220 in contact with the dielectric material 224. The channel-stop implant 226 would have a conductivity type different than that of the substrate 220. For example, for a p-type substrate, the channel-stop implant 226 may be an n-type diffusion. As shown in Figures 3A-3B, the memory array 200 would further include bit lines 212 extending substantially parallel with the first control gates 202 and coupled to a diffusion area 206 through a bit line contact 208. The various conductive components, e.g., first control gates 202, second control gates 204, bit line contacts 208 and source line 210 are isolated by one or more layers of dielectric material 228. Figures 4A-4C are cross-sectional views showing more detail of the structure of memory cells in accordance with various embodiments of the invention. Figure 4A corresponds to a memory cell having discrete charge carrier trapping sites 222 on either sidewall of the isolation trenches 214. Figure 4B corresponds to a memory cell sharing a charge carrier trap 222 with an adjacent memory cell and extending between the sidewalls of the isolation trench 214. Figure 4C corresponds to a memory cell sharing a charge carrier trap 222 with an adjacent memory cell and extending around a second control gate 204 from one sidewall of an isolation trench 214 to the other sidewall. Structures of the type depicted in Figures 4A-4C can be fabricated using techniques that are well understood in the art of semiconductor fabrication. Figures 5A-5J are cross-sectional views of a portion of a memory array at various stages of fabrication in accordance with an embodiment of the invention. In Figure 5A, a mask 530 is formed overlying the substrate 220 to expose areas 532 for future isolation trenches. In Figure 5B, portions of the substrate 220 defined by the exposed areas 532 are removed, such as by etching, to define isolation trenches 214. As depicted in Figure 5C, a trench liner 534 may be formed on the sidewalls and bottoms of the trenches 214. For example, with a silicon-containing substrate 220, the trench liner 534 may be a thermally grown oxide. Alternatively, the trench liner 534 may be a deposited dielectric material. Liner material formed on an upper surface of the substrate 220 may be removed, such as by chemical-mechanical planarization (CMP). The trench liner 534, if utilized, may act as a tunnel dielectric to the subsequent charge carrier trapping sites. In Figure 5D, a dielectric plug 224 is formed in the bottoms of the trenches 214 and the charge carrier trapping sites 222 are formed on sidewalls of the trenches 214. For example, the dielectric plugs 224 could be formed by forming a layer of dielectric material over the structure of Figure 5C, followed by a removal of the dielectric material formed overlying the upper surface of the substrate 220, such as by CMP. 
This would leave the trenches 214 substantially filled with the dielectric material. This material could then be etched back to leave the dielectric plugs 224 in the trenches 214. To form the charge carrier trapping sites 222, a layer of charge trapping material could be formed overlying the upper surface of the substrate 220, the trench liners 534 and the dielectric plugs 224 followed by an anisotropic removal of the charge trapping material to leave behind the charge carrier trapping sites as depicted in Figure 5D. At this stage, the channel implant areas 207 may be formed, such as by implanting and/or diffusing dopant materials to modify the conductivity of the substrate 220 in these areas to adjust for a desired threshold voltage Vt. Although the charge carrier trapping sites 222 are formed as discrete sites on the sidewalls of the trenches 214, alternatively the charge carrier trapping sites 222 could be formed as a continuous layer extending from one sidewall to the other such as that depicted in Figure 4C. For example, instead of performing an anisotropic removal of the charge trapping material, a CMP process could be performed to remove only the charge trapping material on the upper surface of the substrate. In Figure 5E, a dielectric layer 536 may be formed overlying the structure of Figure 5D to act as a blocking dielectric for the charge carrier trapping sites 222. Subsequent to forming the blocking dielectric layer 536, a conductive layer 538 is formed. Conductive layer 538 will form a portion of a second control gate. The conductive layer 538 may be formed of one or more layers of conductive material. Some examples include a conductively-doped polysilicon, a metal silicide, a metal or some combination of such conductive materials. Additional layers may include conductive adhesion or barrier layers. In Figure 5F, portions of the conductive layer 538 are removed to leave conductive plugs 538 bounded by the blocking dielectric layer 536. Portions of the blocking dielectric layer 536 overlying the exposed portions of the substrate 220 may also be removed. For example, a CMP process could be utilized to form the structure of Figure 5F. If channel implant areas 207 had not previously been formed, they may alternatively be formed at this point. In Figure 5G, a gate dielectric layer 540 is formed overlying the structure of Figure 5F. Gate dielectric layer 540 may be a variety of dielectric materials, but typical construction may be a deposited silicon dioxide material. Following formation of this gate dielectric 540, the first control gates 202 may be formed as depicted in Figure 5H. As an example, a conductive material may be formed overlying the dielectric layer 540 and then patterned to form the first control gates 202. The first control gates 202 may be formed of one or more layers of conductive material. Some examples include a conductively-doped polysilicon, a metal silicide, a metal or some combination of such conductive materials. Additional layers may include conductive adhesion or barrier layers. As further shown in Figure 5H, an intergate dielectric layer 542 is formed overlying the first control gates 202 to insulate the first control gates 202. Portions of the intergate dielectric layer 542 are then removed in Figure 5I along with portions of the gate dielectric layer 540 to expose the conductive plugs 538. 
For example, the structure of Figure 5H could be patterned and etched to remove portions of the intergate dielectric layer 542 and the gate dielectric layer 540 overlying the conductive plugs 538. In Figure 5J, the second control gate 204 is formed. Formation of the second control gates 204 can be substantially the same as formation of the first control gates 202. The thickness of the intergate dielectric layer 542 should be chosen to avoid breakdown between the first control gates 202 and the second control gates 204 during operation. As the second control gate 204 is in contact with the conductive plugs 538, the conductive plugs 538 may be deemed to be an extension of, or a portion of, the second control gate 204. To form a structure similar to that depicted in Figure 4B, the fabrication could generally follow the same process as described with reference to Figures 5A-5J. However, instead of forming discrete charge carrier trapping sites 222 on opposing sidewalls of the trenches 214 as shown in Figure 5D, a plug of charge trapping material could be formed overlying the dielectric plug 224. For example, a layer of charge trapping material could be formed overlying the upper surface of the substrate 220, the trench liners 534 and the dielectric plugs 224 to substantially fill the trenches 214. Charge trapping material overlying the upper surface of the substrate 220 could then be removed, such as by CMP, leaving behind a plug of charge trapping material in the trenches 214, similar to the structure of the charge carrier trap 222 as depicted in Figure 4B. This plug could then optionally be recessed, such as by an etch process. Subsequent processing could then follow as described with reference to Figures 5G-5J with the exception that the conductive plugs 538 would not be present. A memory cell is the structure at the intersection of a first control gate 202 and a second control gate 204. Charge stored in the charge carrier trapping sites 222 on opposing sides of the first control gates 202 defines the data value of that memory cell. Because the charge carrier trapping sites 222 are dielectric, charge storage is localized. This allows a plug of charge trapping material as depicted in Figure 4B to store charge for two different memory cells, i.e., the two memory cells on either side of the trench 214, if the dielectric constant of the material is sufficiently high to prevent migration of the charge. The charge stored in the charge carrier trapping sites 222 will tend to pinch off the channel in the adjacent channel implant area 207, thus changing the conductance of the channel implant area 207. A string of memory cells in accordance with an embodiment of the invention includes those memory cells associated with a single first control gate 202, e.g., those located between the source line 210 and the bit line contact 208. Erasing memory cells of the type described with reference to Figures 2-5J can be performed by creating a sufficient voltage differential across the charge carrier trapping sites 222 to cause them to release their charge. In general, the first control gates 202 and second control gates 204 should receive some voltage sufficiently less than a voltage of the substrate 220 to cause the charge carriers, or electrons, to move from the charge carrier trapping sites 222 to the substrate 220. For example, the first control gates 202 and second control gates 204 could receive a negative erase voltage, e.g., -14V to -15V, while the substrate 220 receives the ground potential. 
Alternatively, the first control gates 202 and second control gates 204 could receive the ground potential while the substrate receives a positive erase voltage, e.g., +14V to +15V. Erasing would typically be performed on an entire block of memory cells before programming any memory cell of that block. Note that voltages described herein are examples only and will depend upon dimensions of the various layers. For example, erase voltages having a magnitude of approximately 14-15V would generally be appropriate where the thickness of the tunneling dielectric is approximately 20-30 Å. In general, to program memory cells of the type depicted in Figures 2-5J, voltages are applied to the various nodes to invert the channel defined by the channel implant area 207 under the first control gates 202 and to initiate tunneling of charges, or electrons, from the substrate 220 into the charge carrier trapping sites 222. For example, the first control gates 202 could receive a first potential, such as a ground potential or 0V, or slightly higher, sufficient to invert the channels between the isolation trenches 214. A second control gate 204 associated with a target memory cell, or selected second control gate, could receive a second or program potential. The program potential applied to the second control gate 204 associated with a target memory cell should be sufficient to initiate tunneling. For example, the program potential could be 14-15V. Remaining second control gates 204, or unselected second control gates, should receive a third potential. This third potential should be chosen to avoid breakdown between selected and unselected trench gates. For example, this third potential may be some fraction of the program potential or may be the ground potential. Bit lines 212 and source lines 210 associated with target memory cells may receive the ground potential. Unselected bit lines 212 and source lines 210 not associated with at least one target memory cell may be allowed to float or may receive some other potential as applied to the first control gates 202 such that tunneling is inhibited on unselected memory cells. To read memory cells of the type depicted in Figures 2-5J, voltages are applied to the first control gates 202 and second control gates 204 of a block of memory cells such that more current will flow between a bit line 212 and the source line 210 if a target memory cell has a first data value than will flow if the target memory cell has a second data value. As one example, each of the first control gates 202 could receive a first potential, such as the ground potential or some small positive potential, depending upon the threshold voltage of the cells. A second control gate 204 associated with a target memory cell, or selected second control gate, could receive a second or read potential. The read potential applied to the second control gate 204 associated with a target memory cell should be insufficient to overcome the charge stored in the charge carrier trapping sites 222 if the target memory cell has the second data value. In this manner, the conductance of the cell will differ depending upon the amount of charge stored. For example, the second control gate 204 associated with a target memory cell might receive the ground potential. Remaining second control gates 204, or unselected second control gates, should receive a third or pass potential. 
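The program and read bias assignments just described (the pass potential for unselected trench gates is detailed next) amount to a per-node voltage table. The following minimal sketch, in Python, shows one way firmware might tabulate them, using the example voltages from this discussion; the function and node names, and the half-potential choice for unselected gates during programming, are illustrative assumptions rather than anything specified by the patent.

    # Illustrative bias tables for the program and read operations described
    # above. Voltages are the example values from the text; all names are
    # assumptions, not from the patent.

    GROUND = 0.0
    V_PROGRAM = 14.5   # ~14-15V on the selected trench gate when programming
    V_PASS = 4.5       # ~4-5V pass potential on unselected trench gates (reads)

    def bias_table(operation, selected_gate, n_second_gates):
        """Return {node: volts} for one operation on one selected trench gate."""
        biases = {"first_control_gates": GROUND,   # invert channels / small positive
                  "selected_bit_line": GROUND,
                  "selected_source_line": GROUND}
        for g in range(n_second_gates):
            node = f"second_control_gate_{g}"
            if operation == "program":
                # Selected gate initiates tunneling; the others get a fraction
                # (or ground) to avoid breakdown between trench gates.
                biases[node] = V_PROGRAM if g == selected_gate else V_PROGRAM / 2
            else:  # read
                # Selected gate stays low so the stored charge determines the
                # conductance; unselected gates receive the pass potential.
                biases[node] = GROUND if g == selected_gate else V_PASS
        return biases

    print(bias_table("read", selected_gate=3, n_second_gates=8))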
The pass potential applied to second control gates 204 not associated with the target memory cell should be sufficient to overcome any charge stored in the charge carrier trapping sites 222 of unselected memory cells such that their channels are not pinched off regardless of their data values. For example, the unselected second control gates 204 may receive approximately 4-5V. The resulting difference in conductance of a target memory cell depending upon its data value can then be read by sensing the conductance of the string of memory cells containing the target memory cell. As one example, the bit lines 212 could be precharged to some potential, such as the supply voltage Vcc, while the source line 210 could receive the ground potential. The foregoing potentials would then be applied to the first control gates 202, the selected second control gate 204 and the unselected second control gates 204. Those bit lines 212 associated with a string of memory cells containing a target memory cell having the first data value will experience a larger drop in voltage than those bit lines 212 associated with a string of memory cells containing a target memory cell having the second data value. Thus, by sensing a voltage level of the bit lines 212 after some predetermined delay, the data values of the target memory cells can be determined. Figure 6 is a top view of a portion of a memory array 600 showing array architecture as might be used with an embodiment of the invention. As shown in Figure 6, the memory array 600 includes memory cells formed at the intersections of first control gates 602 and second control gates 604. The first control gates 602 are formed over active areas of a semiconductor substrate and may be termed active gates. Portions of the second control gates 604 are formed over trenches of the semiconductor substrate and may be termed trench gates. For ease of addressing in the digital environment, the number of first control gates 602 and the number of second control gates 604 are generally each some power of two. Although the memory array 600 depicts an 8x8 block of memory cells having eight first control gates 602 and eight second control gates 604, a block may contain more or fewer memory cells. As an example, a block might contain 32 first control gates 602 by 1,024 second control gates 604. However, the embodiments will be described with reference to a relatively small block of memory cells in order to show the components in more detail. Diffusion areas 606 are coupled to a bit line contact 608 on one end of the second control gates 604. Diffusion areas 609 are coupled to act as source lines on the other end of the second control gates 604. Diffusion areas 609 may be coupled to a single ground node. However, for certain embodiments providing for multi-bit storage, these diffusion areas or source lines 609 may be individually addressable. Figures 7A-7B are cross-sectional views of the memory array 600 in accordance with an embodiment of the invention. Figure 7A is a cross-sectional view of the memory array 600 taken along a second control gate 604 while Figure 7B is a cross-sectional view of the memory array 600 taken along a first control gate 602. Direction of current flow during a read operation is parallel to the page for Figure 7A and into the page for Figure 7B. For the embodiment depicted in Figure 7A, the second control gate 604 extends into trenches 614 formed into a semiconductor substrate 620. 
Semiconductor substrate 620 may be, for example, a p-type monocrystalline silicon substrate. Charge carrier trapping sites 622 are formed on sidewalls and bottoms of the trenches 614. Charge carrier trapping sites 622 are formed of a material capable of trapping charge carriers. One example includes silicon nitride. However, smaller dimensions and ease of fabrication may be facilitated through the use of trapping materials having dielectric constants greater than that of silicon nitride, which has a dielectric constant, k, of about 3.9. Some examples of higher-k dielectrics include HfO2, ZrO2, ZrSnTiO, ZrON, ZrAlO, ZrTiO4, Al2O3, La2O3, LaAlO3, HfAlO3, HfSiON, Ta2O5, TiO2, Pr2O3, HfO2, TiAlOx, LaAlO3, La2Hf2O7, and HfTaO. For some embodiments, the charge trapping materials have a dielectric constant of approximately 10 or higher. As shown in Figures 7A-7B, the memory array 600 would further include bit lines 612 extending substantially parallel with the second control gates 604 and coupled to a diffusion area 606 through a bit line contact 608. The various conductive components, e.g., first control gates 602, second control gates 604, bit line contacts 608 and source line 610 are isolated by one or more layers of dielectric material 628. Figure 8 is a cross-sectional view showing more detail of the structure of memory cells in accordance with various embodiments of the invention. Structures of the type depicted in Figure 8 can be fabricated using techniques that are well understood in the art of semiconductor fabrication. For example, processing could be performed as described generally with reference to Figures 5A-5C. Then, instead of forming dielectric plugs 224, the charge carrier trap 622 could be formed in a manner similar to the trench liner 534, i.e., forming a layer of charge trapping material and removing portions overlying the upper surface of the substrate. Remaining processing could generally follow as provided with reference to Figures 5E-5J. A memory cell is the structure at the intersection of a first control gate 602 and a second control gate 604. Charge stored in the charge carrier trapping sites 622 on opposing sides of the first control gates 602 defines the data value of that memory cell. Because the charge carrier trapping sites 622 are dielectric, charge storage is localized. The charge stored in the charge carrier trapping sites 622 will tend to pinch off the channel in the adjacent channel implant area 607, thus changing the conductance of the channel implant area 607. A string of memory cells in accordance with an embodiment of the invention includes those memory cells associated with a single second control gate 604, e.g., those located between the source line 610 and the bit line contact 608. Erasing memory cells of the type described with reference to Figures 6-8 can be performed by creating a sufficient voltage differential across the charge carrier trapping sites 622 to cause them to release their charge. In general, the first control gates 602 and second control gates 604 should receive some voltage sufficiently less than a voltage of the substrate 620 to cause the charge carriers, or electrons, to move from the charge carrier trapping sites 622 to the substrate 620. For example, the first control gates 602 and second control gates 604 could receive a negative erase voltage, e.g., -14V to -15V, while the substrate 620 receives the ground potential. 
Alternatively, the first control gates 602 and second control gates 604 could receive the ground potential while the substrate receives a positive erase voltage, e.g., +14V to +15V. Erasing would typically be performed on an entire block of memory cells before programming any memory cell of that block. Programming of memory cells of the type depicted in Figures 6-8 can be performed, e.g., using tunneling between first and second control gates. However, because the charge carrier trapping sites 622 extend around the second control gates 604, such cells can also be programmed using gate induced drain leakage or GIDL. Programming using GIDL allows spatial storage of charge, thus facilitating storage of multiple data values in a single cell by programming and reading directionally. For programming by tunneling, voltages are applied to the various nodes to initiate tunneling of charges, or electrons, from the first control gate 602 into the charge carrier trapping sites 622. For example, a first control gate 602 associated with a target memory cell, or selected first control gate, could receive a first potential, such as a ground potential or 0V. First control gates 602 not associated with a target memory cell, or unselected first control gates, could receive a second or inhibit potential. The inhibit potential applied to the unselected first control gates 602 should be sufficient to inhibit tunneling in unselected memory cells. For example, the inhibit potential could be approximately 10V. The second control gate 604 associated with a target memory cell should receive a third potential. This third potential should be high enough to initiate tunneling from the selected first control gate 602 into the trap sites. The second control gates 604 not associated with the target memory cell should receive a lower fourth potential of about half the level applied to the selected second control gate 604. Bit lines 612 and source lines 610 may be permitted to float. For embodiments using GIDL, the memory cells are read and programmed in both a forwards and backwards direction of current flow in the source/drain regions (interchanging their source/drain function) to allow access to programming and reading the two stored data bits. The function of each source/drain region (i.e., whether source or drain) depends upon which bit trapping area is being read or written. For example, in a read operation, if the carrier is input at the left side source/drain region and output from the right side region, the left side is the source and the right side is the drain and the data bit charge is stored in the charge carrier trap 622 at the source end. Because of the localized storage of electrons in the trapping layer, while reading memory cells of such an embodiment, only the charge stored in the trapping layer nearest the source/drain region operating as the source affects the current flow through the device. The charge stored near the other source/drain region is "read through" and has minimal influence. The bits are programmed in the reverse bias/current flow direction from the read direction for each stored data bit. For example, to program in a first direction from the bit line 612 to a target memory cell, the second control gates 604 receive a first potential sufficient to pass a program potential from the bit line 612 to the target memory cell. The program potential for the bit line 612 may be, for example, 6-7V. The source lines 609 may be allowed to float. 
The first potential applied to the second control gates 604 may be, for example, 9-10V. Each first control gate 602 between the bit line 612 and the target memory cell should also receive a potential sufficient to pass the program potential. These unselected first control gates 602 may receive the same potential as the second control gates 604. The first control gate 602 associated with the target memory cell would then receive a ground potential in order to cause band-to-band tunneling of charge in a portion of the charge carrier trap 622 adjacent the selected first control gate 602 and nearest the bit line 612. Upon programming in the first direction, the process could be repeated in the opposite direction, applying the program potential to the selected source line 609 and allowing its associated bit line 612 to float. To read memory cells of the type depicted in Figures 6-8, voltages are applied to the first control gates 602 and second control gates 604 of a block of memory cells such that more current will flow between a bit line 612 and the source line 609 if a target memory cell has a first data value than will flow if the target memory cell has a second data value. As one example, each of the second control gates 604 could receive a first or pass potential. For example, the pass potential may be 4-5V. A first control gate 602 associated with a target memory cell, or selected first control gate, could receive a second or read potential. The read potential applied to the first control gate 602 associated with a target memory cell should be insufficient to overcome the charge stored in the charge carrier trapping sites 622 if the target memory cell has the second data value. In this manner, the conductance of the cell will differ depending upon the amount of charge stored. For example, the first control gate 602 associated with a target memory cell might receive the ground potential or a potential between the pass potential and the ground potential. Remaining first control gates 602, or unselected first control gates, should receive the pass potential. The resulting difference in conductance of a target memory cell depending upon its data value can then be read by sensing the conductance of the string of memory cells containing the target memory cell. As one example, one end of the string of memory cells, e.g., the bit lines 612, could be precharged to some potential, such as the supply voltage Vcc, while the other end of the string of memory cells, e.g., the source lines 609, could receive the ground potential. The foregoing potentials would then be applied to the selected first control gate 602, the unselected first control gates 602 and the second control gates 604. Those bit lines 612 (or source lines 609) associated with a string of memory cells containing a target memory cell having the first data value will experience a larger drop in voltage than those bit lines 612 (or source lines 609) associated with a string of memory cells containing a target memory cell having the second data value. Thus, by sensing a voltage level of the bit lines 612 (or source lines) after some predetermined delay, the data values of the target memory cells can be determined. In addition, because the charge carrier trap 622 extends from one first control gate 602 to the next, capacitive sensing can also be utilized. 
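Because only the charge nearest whichever source/drain region acts as the source affects conduction, each of the two bits stored in a cell is selected simply by the direction of the read. The toy model below captures that read-through behavior; it is Python, the names and the direction convention are invented for illustration, and the device physics is abstracted to a single boolean per trap end.

    # Toy model of the directional, two-bit (GIDL-style) read described above.
    # Each cell stores independent charge at its bit-line end and its
    # source-line end; a read senses only the trap at the end currently
    # acting as the source. All names are illustrative assumptions.

    def read_bit(cell, direction):
        """cell: dict with boolean charge flags 'bl_end' and 'sl_end'.
        direction: 'forward' treats the source-line end as the source;
        'reverse' interchanges the source/drain roles."""
        trap_charged = cell["sl_end"] if direction == "forward" else cell["bl_end"]
        # A charged trap pinches off the channel: low conductance, data 0.
        # The trap at the drain end is "read through" and ignored here.
        return 0 if trap_charged else 1

    cell = {"bl_end": True, "sl_end": False}  # one bit programmed, one erased
    print(read_bit(cell, "forward"), read_bit(cell, "reverse"))  # prints: 1 0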
Figure 9 is an illustration of a memory module 900 in accordance with an embodiment of the invention. Memory module 900 is illustrated as a memory card, although the concepts discussed with reference to memory module 900 are applicable to other types of removable or portable memory, e.g., USB flash drives, and are intended to be within the scope of "memory module" as used herein. In addition, although one example form factor is depicted in Figure 9, these concepts are applicable to other form factors as well. In some embodiments, memory module 900 will include a housing 905 (as depicted) to enclose one or more memory devices 910, though such a housing is not essential to all devices or device applications. At least one memory device 910 is a non-volatile memory in accordance with an embodiment of the invention. Where present, the housing 905 includes one or more contacts 915 for communication with a host device. Examples of host devices include personal computers, PDAs, digital cameras, digital media players, digital recorders, games, appliances, vehicles, wireless devices, cellular telephones, memory card readers, interface hubs and the like. For some embodiments, the contacts 915 are in the form of a standardized interface. For example, with a USB flash drive, the contacts 915 might be in the form of a USB Type-A male connector. In general, contacts 915 provide an interface for passing control, address and/or data signals between the memory module 900 and a host having compatible receptors for the contacts 915. The memory module 900 may optionally include additional circuitry 920 which may be one or more integrated circuits and/or discrete components. For some embodiments, the additional circuitry 920 may include a memory controller for controlling access across multiple memory devices 910 and/or for providing a translation layer between an external host and a memory device 910. For example, there may not be a one-to-one correspondence between the number of contacts 915 and a number of I/O connections to the one or more memory devices 910. Thus, a memory controller could selectively couple an I/O connection (not shown in Figure 9) of a memory device 910 to receive the appropriate signal at the appropriate I/O connection at the appropriate time or to provide the appropriate signal at the appropriate contact 915 at the appropriate time. Similarly, the communication protocol between a host and the memory module 900 may be different than what is required for access of a memory device 910. A memory controller could then translate the command sequences received from a host into the appropriate command sequences to achieve the desired access to the memory device 910, as sketched following this passage. Such translation may further include changes in signal voltage levels in addition to command sequences. The additional circuitry 920 may further include functionality unrelated to control of a memory device 910 such as logic functions as might be performed by an ASIC (application specific integrated circuit). Also, the additional circuitry 920 may include circuitry to restrict read or write access to the memory module 900, such as password protection, biometrics or the like. The additional circuitry 920 may include circuitry to indicate a status of the memory module 900. For example, the additional circuitry 920 may include functionality to determine whether power is being supplied to the memory module 900 and whether the memory module 900 is currently being accessed, and to display an indication of its status, such as a solid light while powered and a flashing light while being accessed. 
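Returning to the translation layer mentioned above: a controller of this kind expands each host operation into the command sequence a memory device 910 actually expects. The minimal sketch below is Python; the host opcodes, command bytes and sequence format are wholly invented placeholders, since the text does not specify any particular protocol.

    # Illustrative host-to-device command translation for the memory
    # controller described above. Encodings are placeholders, not a real
    # device protocol.

    DEVICE_SEQUENCES = {
        "HOST_READ":  [("cmd", 0x00), ("addr", None), ("cmd", 0x30), ("data_out", None)],
        "HOST_WRITE": [("cmd", 0x80), ("addr", None), ("data_in", None), ("cmd", 0x10)],
    }

    def translate(host_op, address, payload=None):
        """Expand one host operation into a device-level command sequence."""
        sequence = []
        for kind, value in DEVICE_SEQUENCES[host_op]:
            if kind == "addr":
                sequence.append(("addr", address))    # fill in the target address
            elif kind == "data_in":
                sequence.append(("data_in", payload)) # attach host write data
            else:
                sequence.append((kind, value))        # pass command bytes through
        return sequence

    print(translate("HOST_READ", address=0x1234))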
The additional circuitry 920 may further include passive devices, such as decoupling capacitors to help regulate power requirements within the memory module 900. Conclusion The various embodiments described herein include memory cells utilizing dielectric charge carrier trapping sites formed in trenches. The memory cells of the various embodiments have two control gates. One control gate is formed adjacent the trench containing the charge carrier trap. The other control gate has a portion formed over the trench, and, for certain embodiments, this control gate may extend into the trench. The charge carrier trapping sites may be discrete formations on a sidewall of a trench, a continuous layer extending around the bottom of the trench from one sidewall to the other, or plugs extending between sidewalls. Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. Many adaptations of the invention will be apparent to those of ordinary skill in the art. Accordingly, this application is intended to cover any adaptations or variations of the invention. |
Sub-micron-dimensioned, asymmetrically-configured MOS and/or CMOS transistors are fabricated using removable sidewall spacers made of a material, such as UV-nitride, one of which is selectively treated subsequent to deposition, e.g., by ion implantation, to augment the etch rate thereof with a room temperature etchant, e.g., dilute aqueous HF. The treated spacer is removed with the dilute aqueous HF prior to implantation of asymmetrically-configured, moderately- or heavily-doped source/drain regions and prior to any post-implantation annealing processing, in order not to increase the etch resistance of the spacer material by thermally-induced densification. |
What is claimed is: 1. A method of manufacturing a semiconductor device, which method comprises the sequential steps of: (a) providing a device precursor structure comprising a semiconductor substrate of a first conductivity type and a layer stack formed on a portion of a surface of said substrate, said layer stack comprising: i. a thin gate insulating layer in contact with said substrate surface; and ii. a gate electrode layer formed on said gate insulating layer, said layer stack comprising first and second opposing side surfaces and a top surface; (b) forming first and second insulative, tapered sidewall spacers on respective first and second opposing side surfaces of said layer stack, each of said sidewall spacers comprising a dielectric material having an as-deposited etch resistance; (c) selectively positioning a masking material over said first sidewall spacer on the first opposing side surface of said layer stack; (d) selectively treating the exposed second sidewall spacer on the second opposing side surface of said layer stack with impurities for reducing the etch resistance of said dielectric material from its as-deposited state to a more readily-etchable state; (e) removing the masking material from over said first sidewall spacer; (f) selectively removing the reduced etching resistance second sidewall spacer by an etching process; (g) selectively introducing dopant impurities of a second, opposite conductivity type into exposed portions of said substrate surface adjacent said first sidewall spacer and adjacent said second opposing side surface of said layer stack to form a pair of spaced-apart, heavily-doped regions in said substrate; (h) removing the first sidewall spacer by an etching process; (i) thermally treating said pair of spaced-apart, heavily-doped regions to form a pair of heavily-doped source/drain regions in said substrate each having a junction therewith at a predetermined depth below said substrate surface, a first one of said pair of heavily-doped source/drain regions being laterally spaced away from a respective proximal edge of said gate insulating layer by a distance substantially equal to the width of the lower end of said first sidewall spacer adjacent said substrate surface and a second one of said pair of heavily-doped source/drain regions extending to just beneath a respective proximal edge of said gate insulating layer; (j) selectively introducing second, opposite conductivity type dopant impurities into the exposed portion of said substrate surface intermediate said gate insulating layer and said first, laterally spaced-away, heavily-doped source/drain region to form a lightly- or moderately-doped extension region; and (k) thermally treating said lightly- or moderately-doped extension region to form a shallow-depth, lightly- or moderately-doped source/drain extension in said substrate extending from the proximal edge of the first, laterally spaced-away, heavily-doped source/drain region to just beneath the respective proximal edge of said gate insulating layer. 2. The method as in claim 1, wherein step (a) comprises providing a silicon wafer substrate of n or p first conductivity type, said thin gate insulating layer comprises a silicon oxide layer about 25-50 Å thick, and said gate electrode layer comprises heavily-doped polysilicon. 3. The method as in claim 2, wherein step (b) comprises forming said first and second sidewall spacers from a dielectric material having an as-deposited etch resistance which can be altered by subsequent treatment(s). 4. 
The method as in claim 3, wherein step (b) comprises forming said sidewall spacers from a UV-nitride.5. The method as in claim 3, wherein each of said tapered sidewall spacers has a width profile varying from relatively wider at the lower ends thereof adjacent said substrate surface to relatively narrower at the upper ends thereof.6. The method as in claim 2, wherein step (c) comprises selectively forming a layer of photoresist over said first sidewall spacer.7. The method as in claim 2, wherein: step (c) comprises selectively positioning an ion implantation mask over said first sidewall spacer; and step (d) comprises selective ion implantation of said second sidewall spacer.8. The method as in claim 7, wherein: step (d) comprises selectively implanting impurity ions selected from Si⁺, Ge⁺, and p and n type dopant ions; and step (f) comprises selectively removing the ion-implanted, reduced etching resistance second insulative sidewall spacer by etching with dilute aqueous HF.9. The method as in claim 8, wherein step (f) comprises etching the second insulative spacer with 1:100 HF/H2O at about 20-35° C.10. The method as in claim 2, wherein step (g) comprises selectively implanting dopant ions of second, opposite conductivity type at dosages of from about 5×10¹⁴ to about 5×10¹⁵ atoms/cm² and energies of from about 20 to about 60 keV.11. The method as in claim 2, wherein step (h) comprises etching the first sidewall spacer with 1:100 HF/H2O at about 20-35° C.12. The method as in claim 2, wherein step (i) comprises rapid thermal annealing to diffuse and activate said second conductivity type dopant impurities introduced during step (g) to form said pair of heavily-doped, relatively deep, source/drain regions.13. The method as in claim 2, wherein step (j) comprises selectively implanting dopant ions of second conductivity type at dosages of from about 5×10¹³ to about 5×10¹⁴ atoms/cm² and energies of from about 5 to about 30 keV.14. The method as in claim 2, wherein step (k) comprises rapid thermal annealing to diffuse and activate said second conductivity type dopant impurities introduced during step (j) to form said shallow-depth, lightly- or moderately-doped source/drain extension.15. The method as in claim 2, comprising forming a relatively narrow sidewall spacer on each of said first and second opposing side surfaces of said layer stack prior to performing step (b), said relatively narrow sidewall spacers comprising an etch resistant material which is retained throughout processing.16. The method as in claim 15, comprising forming said relatively narrow sidewall spacers from a dielectric material selected from silicon oxides, silicon nitrides, and silicon oxynitrides, wherein each of said relatively narrow sidewall spacers has a tapered width profile varying from relatively wider at the lower end thereof adjacent said substrate surface to relatively narrower at the upper end thereof.17. A method of manufacturing a silicon-based MOS-type transistor, which method comprises the sequential steps of:(a) providing a MOS transistor precursor structure comprising a silicon semiconductor wafer substrate of a first conductivity type and a layer stack formed on a portion of a surface of said wafer, said layer stack comprising: i. a thin gate insulating layer comprising a silicon oxide layer about 25-50 Å thick in contact with said wafer surface; and ii. 
a gate electrode layer comprising heavily-doped polysilicon formed on said gate insulating layer, said layer stack comprising first and second opposing side surfaces and a top surface; (b) forming first and second relatively narrow, insulative, tapered sidewall spacers on respective ones of said first and second opposing side surfaces, said first and second relatively narrow sidewall spacers comprising a first, relatively etch-resistant dielectric material selected from silicon oxides, silicon nitrides, and silicon oxynitrides; (c) forming first and second relatively wide, insulative, tapered sidewall spacers on respective ones of said first and second relatively narrow sidewall spacers, said first and second relatively wide sidewall spacers comprising a second dielectric material comprising a UV-nitride having an as-deposited etch resistance; (d) selectively positioning a masking material over the sidewall spacers on the first opposing side surface of said layer stack; (e) selectively implanting the exposed second, relatively wide sidewall spacer on the second opposing side surface of said layer stack with impurities for reducing the etch resistance thereof from its as-deposited state to a more readily-etchable state; (f) removing the masking material from over the sidewall spacers on the first opposing side surface of said layer stack; (g) selectively removing the reduced etching resistance, second relatively wide sidewall spacer layer by etching with dilute aqueous HF; (h) selectively implanting dopant impurities of a second, opposite conductivity type into exposed portions of said substrate surface adjacent said first relatively wide sidewall spacer and adjacent said second relatively narrow sidewall spacer to form a pair of spaced-apart, heavily-doped implants in said wafer; (i) removing the first relatively wide sidewall spacer by an etching process with dilute aqueous HF; (j) performing rapid thermal annealing to diffuse and activate the dopant impurities implanted in step (h), thereby forming a pair of heavily-doped source/drain regions in said wafer substrate, each having a junction therewith at a predetermined depth below said wafer surface, a first one of said pair of heavily-doped source/drain regions being laterally spaced away from a respective proximal edge of said gate insulating layer by a distance substantially equal to the combined width of the lower ends of said relatively narrow and relatively wide sidewall spacers adjacent said wafer surface and a second one of said pair of heavily-doped source/drain regions extending to just beneath a respective proximal edge of said gate insulating layer; (k) selectively implanting second, opposite conductivity type dopant impurities into the exposed portion of said wafer surface intermediate said gate insulating layer and said first, laterally spaced-away, heavily-doped source/drain region to form a lightly- or moderately-doped extension region; and (l) performing rapid thermal annealing to diffuse and activate the dopant impurities implanted in step (k), thereby forming a shallow-depth, lightly- or moderately-doped source/drain extension extending from the proximal edge of the first, laterally spaced-away, heavily-doped source/drain region to just beneath the respective proximal edge of said gate insulating layer. 18. 
The method as in claim 17, wherein: step (e) comprises implanting ions selected from Si⁺, Ge⁺, and p and n type dopant ions; and step (g) comprises etching the second relatively wide insulative spacer with 1:100 HF/H2O at about 20-35° C. |
RELATED APPLICATIONSThis application claims priority from U.S. Provisional Patent Application Ser. No. 60/155,605, filed Sep. 24, 1999, which is incorporated herein by reference.FIELD OF THE INVENTIONThe present invention relates to a method of manufacturing semiconductor devices, e.g., asymmetric MOS-type transistors and integrated circuits comprising such devices, with improved processing methodology resulting in increased reliability and quality, increased manufacturing throughput, and reduced fabrication cost. The present invention is also useful in the manufacture of asymmetric CMOS semiconductor devices and has particular applicability in fabricating high-density integration semiconductor devices with design features below about 0.18 μm, e.g., about 0.15 μm and below.BACKGROUND OF THE INVENTIONThe escalating requirements for high density and performance associated with ultra large-scale integration (ULSI) semiconductor devices require design features of 0.18 μm and below, such as 0.15 μm and below, increased transistor and circuit speeds, high reliability, and increased manufacturing throughput for economic competitiveness. The reduction of design features to 0.18 μm and below challenges the limitations of conventional semiconductor manufacturing techniques.As feature sizes of MOS and CMOS devices have decreased to the sub-micron range, so-called "short-channel" effects have arisen which tend to limit device performance. For n-channel MOS transistors, the major limitation encountered is caused by hot-electron-induced instabilities. This problem occurs due to high electrical fields between the source and drain, particularly near the drain, such that charge carriers, either electrons or holes, are injected into the gate or semiconductor substrate. Injection of hot carriers into the gate can cause gate oxide charging and threshold voltage instabilities which accumulate over time and greatly degrade device performance. In order to counter and thus reduce such instabilities, lightly-doped source/drain extension type transistor structures have been developed, as described below.For p-channel MOS transistors of short-channel type, the major limitation on performance arises from "punch-through" effects which occur with relatively deep junctions. In such instances, there is a wider sub-surface depletion effect and it is easier for the field lines to go from the drain to the source, resulting in the above-mentioned "punch-through" current problems and device shorting. To minimize this effect, relatively shallow junctions are employed in forming p-channel MOS transistors.The most satisfactory solution to date of hot carrier instability problems of MOS devices is the provision of lightly- or moderately-doped source/drain extensions driven just under the gate region, while the heavily-doped drain region is laterally displaced away from the gate by use of a sidewall spacer on the gate. Such structures are particularly advantageous because they do not have problems with large lateral diffusion and the channel length can be set precisely.Several processing sequences or schemes have been developed for the manufacture of source/drain extension-type MOS and CMOS transistors for use in high-density integration applications, with a primary goal of simplifying the manufacturing process by reducing and/or minimizing the requisite number of processing steps. 
Conventional processing schemes for making such MOS transistors generally employ disposable spacers made of various materials, e.g., polysilicon, silicon oxides, silicon nitrides, silicon oxynitrides, and combinations thereof.According to one conventional process scheme, a precursor structure comprising a semiconductor substrate of one conductivity type having a layer stack comprising a thin gate oxide layer and an overlying gate electrode formed on a portion of a surface thereof is subjected to ion implantation prior to sidewall spacer formation, for forming lightly- or moderately-doped implants therein. Following post-implantation annealing, sidewall spacers are formed on the pair of opposing side surfaces of the layer stack by first depositing a dielectric spacer material layer over the substrate surfaces and then removing same from the horizontally-oriented regions, i.e., the top surface of the gate electrode layer and the source and drain regions, by means of anisotropic etching. Such processing results in sidewall spacers left on the side surfaces of the gate layer stack that have an approximately quarter-circular shaped cross-section. The dielectric sidewall spacers typically remain through the balance of junction formation processing. After sidewall spacer formation, a heavy source/drain implantation is performed, with the gate layer stack and associated sidewall spacers acting as implantation masking materials. As a consequence of the separate implantations, the heavily-doped source/drain regions are laterally displaced from the gate edges by the thickness of the sidewall spacer material and the lightly- or moderately-doped regions beneath the sidewall spacers act as source/drain extensions.According to another conventional process scheme, which scheme employs disposable (i.e., removable) sidewall spacers, a precursor structure as described above and comprising a semiconductor substrate of one conductivity type having a layer stack comprising a thin gate oxide layer and an overlying gate electrode layer formed on a portion of a surface thereof is subjected to blanket-type dielectric layer deposition and patterning to form sidewall spacer layers on opposing side surfaces of the layer stack. Opposite conductivity type p- or n-type dopant impurities are then implanted into the substrate using the layer stack with sidewall spacers formed thereon as an implantation mask, to thereby form moderately- to heavily-doped implants. High temperature annealing is then performed to thermally activate the implanted dopant by diffusion and reduce lattice damage due to implantation, thereby forming source/drain regions and junctions at a predetermined density and depth below the substrate surface. The effective length of the channel of such transistors is determined by the width of the gate insulator/gate electrode layer stack and the width of the sidewall spacers formed thereon. After activation annealing, the sidewall spacers are removed, as by etching, and a second implantation process for implanting n- or p-type opposite conductivity type dopant impurities is performed using only the gate insulating layer/gate electrode layer stack as an implantation mask, thereby forming shallow-depth, lightly- or moderately-doped implants in the substrate in the spaces between the deeper, heavily-doped source/drain regions. 
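To make the ordering constraint in the conventional disposable-spacer scheme explicit, the sequence just described can be summarized as a minimal Python sketch; the step labels below are hypothetical descriptive names, not terminology from the patent:

# Illustrative ordering of the conventional disposable-spacer flow described
# above; step labels are hypothetical, not taken from the patent text.
CONVENTIONAL_FLOW = [
    "form_gate_stack",
    "form_sidewall_spacers",
    "implant_heavy_source_drain",   # spacers serve as the implantation mask
    "activation_anneal",            # high-temperature step densifies the spacers
    "remove_spacers",               # hence the difficult etch of densified material
    "implant_extensions",
    "extension_anneal",
]

# The drawback follows directly from the ordering: spacer removal comes after
# the activation anneal, so the etch must attack densified spacer material.
assert CONVENTIONAL_FLOW.index("remove_spacers") > CONVENTIONAL_FLOW.index("activation_anneal")
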
Following this implantation, a second activation process, e.g., rapid thermal annealing (RTA), is performed for effecting dopant diffusion/activation and relaxation of implantation-induced lattice damage of the implants, to form shallow-depth, lightly- or moderately-doped source/drain extensions extending from respective proximal edges of the heavily-doped source/drain regions to just below the respective proximal edges of the gate insulator layer/gate electrode layer stack.In a variant of the above-described process, the sidewall spacers are comprised of a layer of a first (or inner) dielectric material and a layer of a second (or outer) dielectric material. According to the process methodology of this variant, only the second, or outer, dielectric sidewall spacer layer is removed subsequent to annealing for forming the moderately- to heavily-doped source/drain regions. The first, or inner, dielectric sidewall spacer layer is retained for protecting the gate insulator/gate electrode layer stack during subsequent processing, e.g., for contact formation.Each of the above-described variants employs removable sidewall spacers as part of an implantation mask for defining the channel lengths, and each incurs a drawback in that the materials conventionally used for the sidewall spacers, such as those enumerated above, frequently are difficult and time consuming to remove by standard etching methodologies, particularly when densified as a result of high temperature processing for post-implantation annealing for dopant activation/lattice damage relaxation. For example, and as described in U.S. Pat. No. 5,766,991, removal of silicon nitride-based spacer layers can require etching in a hot phosphoric acid (H3PO4) bath at about 180° C. for approximately 1.5 hours. Such long etching time results in reduced manufacturing throughput and the extended exposure to and concomitant attack by the corrosive etchant at high temperature results in undesired etching and defect formation. Moreover, portions of the workpiece substrate not intended to be etched must be provided with an etch-resistant protective barrier layer, e.g., of silicon oxide, prior to etching. However, the etching resistance of the silicon oxide layer itself to the hot phosphoric acid may be insufficient, in which case the resistance thereof must be increased prior to etching, e.g., by first annealing it at about 900° C. in an oxygen ambient. Alternatively, resistance to attack by the hot H3PO4 may be obtained by use of an oxide-polysilicon bi-layer. In either case, such requirement for provision of at least one layer for protecting from acid attack disadvantageously adds processing time, complexity, and fabrication cost. Etching of annealed, densified silicon oxide and/or silicon oxynitride-based sidewall spacer layers is similarly difficult.Another approach towards alleviating or eliminating the problems of "short-channel" effects in sub-micron dimensioned MOS transistors, such as the above-mentioned "hot carrier" injection and "punch-through" phenomena, is the formation of "asymmetric" source/drain structures, i.e., structures where the source and drain regions, including their associated lightly-doped, shallow-depth extensions, are not identically formed and constituted. For example, U.S. Pat. No. 5,510,279 issued Apr. 
23, 1996, discloses a method of fabricating an asymmetric lightly doped drain transistor device, wherein the drain region is shielded with a barrier layer when ion implantation is conducted for implanting a highly doped source region. Following this implantation, a large angle implantation of opposite conductivity type dopant ions is performed for establishing a lightly doped "pocket" region adjacent the highly doped source region. The angled implantation which forms the pocket region increases the doping concentration along the device's source side, thereby increasing the threshold voltage and, consequently, diminishing "short-channel" effects.In another approach, disclosed in U.S. Pat. No. 5,811,338 issued Sep. 22, 1998, "DIBL" (i.e., Drain-Induced Barrier Lowering) and "hot electron" short-channel effects in MOS transistors are alleviated by forming a second polarity internal junction region entirely within one of otherwise similar, first polarity source and drain regions. In yet another approach, disclosed in U.S. Pat. No. 5,547,885 issued Aug. 20, 1996, the widths of sidewall spacers on opposite side surfaces of gate insulator/gate electrode layer stacks are different; as a consequence thereof, the heavily doped source and drain regions, along with their respective shallow depth, lightly-doped extensions are of different lengths, resulting in formation of an asymmetric transistor structure wherein the hot carrier effect is suppressed by reducing peak field strength of a drain depletion layer caused by "pinch-off".A need exists for improved semiconductor manufacturing methodology for fabricating MOS and CMOS transistors exhibiting reduced short-channel effects such as are obtainable by formation of asymmetric structures as described above, by a process which employs removable spacer technology yet does not suffer from the above-described drawbacks associated with the difficulty in conveniently and rapidly removing densified sidewall spacers according to conventional etching techniques. Moreover, there exists a need for an improved process for fabricating asymmetrically-configured MOS transistor-based devices which is fully compatible with conventional process flow and provides increased manufacturing throughput and product yield.The present invention fully addresses and solves the above-described problems and drawbacks attendant upon the application of conventional processing methodology for forming submicron-dimensioned, asymmetrically configured MOS and CMOS transistors for use in high-density semiconductor integrated circuit devices, particularly in providing a process utilizing a pair of removable dielectric sidewall spacer layers formed of a dielectric material, one of the pair being selectively subjected to a post-formation treatment for increasing the etchability thereof vis-a-vis that of its as-deposited state, wherein the treated spacer is readily removed by etching prior to a heavy ion implantation for defining heavily-doped source and drain regions, followed by thermal annealing treatment for dopant activation/lattice damage relaxation, which thermal annealing may disadvantageously densify and thus increase the etching resistance of the spacer material.
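The throughput argument can be illustrated with back-of-envelope arithmetic in Python; the 1.5 hour hot-H3PO4 figure is taken from the text above, while the dilute-HF etch time is a purely hypothetical placeholder, since no numeric room-temperature etch time is given:

# Rough throughput comparison for the spacer-removal step alone.
hot_h3po4_removal_min = 90.0   # ~1.5 h at 180° C (figure stated in the text)
dilute_hf_removal_min = 5.0    # assumed room-temperature etch time, for illustration only

speedup = hot_h3po4_removal_min / dilute_hf_removal_min
print(f"spacer-removal step speedup: ~{speedup:.0f}x (illustrative)")
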
As a consequence of the selective removal of only one of the sidewall spacers, the heavy ion implantation results in asymmetric source/drain formation, i.e., the source or drain region formed at the side of the gate insulator/gate electrode layer stack from which the sidewall spacer has been removed is formed with its proximal edge reaching to just beneath the respective proximal edge of the gate insulator layer, whereas the source or drain region formed at the other side of the gate insulator/gate electrode layer stack having the sidewall spacer thereon is spaced-away therefrom by a distance approximately equal to the width of the spacer at its lower end adjacent the substrate surface. Following the heavy ion implantation, the remaining one of the sidewall spacers is removed by etching, and thermal annealing for diffusion/activation of the implanted dopant ions/atoms is performed. A second ion implantation is then performed for forming a shallow-depth, lightly-doped extension extending from the proximal edge of the spaced-away source or drain region to just beneath the respective proximal edge of the gate insulator layer. Relatively thin inner spacers formed of a dielectric material which is substantially less readily etched than the removable outer spacers are optionally provided on the opposing side surfaces of the gate insulator/gate electrode layer stack, which spacers are retained throughout processing for protecting the gate insulator/gate electrode layer stack from attack by corrosive etchant and during subsequent metallization for contact formation.DISCLOSURE OF THE INVENTIONAn advantage of the present invention is an improved method for manufacturing asymmetrically-configured MOS and/or CMOS transistor devices utilizing a removable spacer.Another advantage of the present invention is an improved method for increasing the etchability of dielectric spacer materials utilized in the manufacture of asymmetrically-configured MOS, CMOS, and other types of semiconductor devices.Yet another advantage of the present invention is an improved method for manufacturing asymmetrically-configured MOS and/or CMOS transistor devices utilizing a removable sidewall spacer formed of a readily etchable dielectric material.Still another advantage of the present invention is an improved method of manufacturing submicron-dimensioned asymmetrically-configured MOS transistors for use in high-density semiconductor integrated circuit devices at lower cost, higher manufacturing throughput, and increased product yield and reliability than are obtainable with conventional process methodology.Yet another advantage of the present invention is an improved asymmetrically-configured, submicron-dimensioned MOS transistor having a reduced amount of "short-channel" effects.Additional advantages and other features of the present invention will be set forth in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from the practice of the instant invention. 
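The asymmetric geometry described in the first paragraph above can be restated quantitatively; the following Python sketch uses an arbitrary 50 nm spacer base width purely for illustration, since the patent specifies the relationship rather than particular dimensions:

# Geometry of the asymmetric source/drain structure summarized above.
# The 50 nm spacer base width is an arbitrary illustration value.
spacer_base_width_nm = 50.0

# Side where the treated spacer was removed before the heavy implant: the
# heavy region reaches to just beneath the gate-oxide edge (offset ~0).
offset_treated_side_nm = 0.0
# Side where the spacer was retained during the heavy implant: the heavy
# region is set back by roughly the spacer width at its lower end.
offset_masked_side_nm = spacer_base_width_nm

# The later shallow extension implant must bridge exactly this set-back.
extension_span_nm = offset_masked_side_nm - offset_treated_side_nm
print(f"extension spans ~{extension_span_nm:.0f} nm (illustrative)")
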
The advantages of the present invention may be realized and obtained as particularly pointed out in the appended claims.According to an aspect of the present invention, the foregoing and other advantages are achieved in part by a method of manufacturing a semiconductor device, which method comprises the sequential steps of:(a) providing a device precursor structure comprising a semiconductor substrate of a first conductivity type and a layer stack formed on a portion of a surface of the substrate, the layer stack comprising:i. a thin gate insulating layer in contact with the substrate surface; andii. a gate electrode layer formed on the gate insulating layer, the layer stack comprising first and second opposing side surfaces and a top surface;(b) forming first and second insulative, tapered sidewall spacers on respective first and second opposing side surfaces of said layer stack, each of the sidewall spacers comprising a dielectric material having an as-deposited etch resistance;(c) selectively positioning a masking material over the first sidewall spacer on the first opposing side surface of the layer stack;(d) selectively treating the exposed second sidewall spacer on the second opposing side surface of the layer stack for reducing the etch resistance of the dielectric material from its as-deposited state to a more readily-etchable state;(e) removing the masking material from over the first sidewall spacer;(f) selectively removing the reduced etching resistance second sidewall spacer by an etching process;(g) selectively introducing dopant impurities of a second, opposite conductivity type into exposed portions of the substrate surface adjacent the first sidewall spacer and adjacent the second opposing side surface of the layer stack to form a pair of spaced-apart, heavily-doped regions in the substrate;(h) removing the first sidewall spacer by an etching process;(i) thermally treating the pair of spaced-apart, heavily-doped regions to form a pair of heavily-doped source/drain regions in the substrate each having a junction therewith at a predetermined depth below the substrate surface, a first one of the pair of heavily-doped source/drain regions being laterally spaced away from a respective proximal edge of the gate insulating layer by a distance substantially equal to the width of the lower end of the first sidewall spacer adjacent the substrate surface and a second one of the pair of heavily-doped source/drain regions extending to just beneath a respective proximal edge of the gate insulating layer;(j) selectively introducing second, opposite conductivity type dopant impurities into the exposed portion of the substrate surface intermediate the gate insulating layer and the first, laterally spaced-away, heavily-doped source/drain region to form a lightly- or moderately-doped extension region; and(k) thermally treating the lightly- or moderately-doped extension region to form a shallow-depth, lightly- or moderately-doped source/drain extension in the substrate extending from a proximal edge of the first, laterally spaced-away, heavily-doped source/drain region to just beneath the respective proximal edge of the gate insulating layer.In embodiments according to the present invention, step (a) comprises providing a silicon wafer substrate of n or p first conductivity type, the thin gate insulating layer comprises a silicon oxide layer about 25-50 Å thick, and the gate electrode layer comprises heavily-doped polysilicon; step (b) comprises forming the first and second sidewall spacers 
from a dielectric material having an as-deposited etch resistance and comprising a UV-nitride, each of the tapered sidewall spacers having a width profile varying from relatively wide at the lower end thereof adjacent the substrate surface to relatively narrow at the upper end thereof; step (c) comprises selectively forming a layer of photoresist over the first sidewall spacer or selectively positioning an ion implantation mask over the first sidewall spacer; step (d) comprises selective ion implantation of the second sidewall spacer, comprising selectively implanting impurity ions selected from Si⁺, Ge⁺, and p and n type dopant ions at preselected dosages and energies; step (f) comprises selectively removing the ion-implanted, reduced etching resistance second insulative sidewall spacer by etching with dilute aqueous HF, e.g., 1:100 HF/H2O at about 20-35° C.; step (g) comprises selectively implanting dopant ions of second, opposite conductivity type at dosages of from about 5×10¹⁴ to about 5×10¹⁵ atoms/cm² and energies of from about 20 to about 60 keV; step (h) comprises etching the first sidewall spacer with dilute aqueous HF, e.g., 1:100 HF/H2O at about 20-35° C.; step (i) comprises rapid thermal annealing to diffuse and activate the second conductivity type dopant impurities introduced during step (g) to form the pair of heavily-doped source/drain regions; step (j) comprises selectively implanting dopant ions of second conductivity type at dosages of from about 5×10¹³ to about 5×10¹⁴ atoms/cm² and energies of from about 5 to about 30 keV; and step (k) comprises rapid thermal annealing to diffuse and activate the second conductivity type dopant impurities introduced during step (j) to form the shallow-depth, lightly- or moderately-doped source/drain extension.According to a further embodiment of the present invention, the method comprises forming a relatively narrow sidewall spacer on each of the first and second opposing side surfaces of the layer stack prior to performing step (b), the relatively narrow sidewall spacers comprising an etch resistant material which is retained throughout processing, and selected from silicon oxides, silicon nitrides, and silicon oxynitrides.According to another aspect of the present invention, a method of manufacturing an asymmetrically-configured silicon-based MOS-type transistor is provided, which method comprises the sequential steps of:(a) providing a MOS transistor precursor structure comprising a silicon semiconductor wafer substrate of a first conductivity type and a layer stack formed on a portion of a surface of the wafer, the layer stack comprising:i. a thin gate insulating layer comprising a silicon oxide layer about 25-50 Å thick in contact with the wafer surface; andii. 
a gate electrode layer comprising heavily-doped polysilicon formed on the gate insulating layer, the layer stack comprising first and second opposing side surfaces and a top surface;(b) forming first and second, relatively narrow insulative, tapered sidewall spacers on respective ones of said first and second opposing side surfaces, said first and second relatively narrow sidewall spacers comprising a first, relatively etch-resistant dielectric material selected from silicon oxides, silicon nitrides, and silicon oxynitrides;(c) forming first and second relatively wide, insulative, tapered sidewall spacers on respective ones of the first and second sidewall spacers, the first and second relatively wide sidewall spacers comprising a second dielectric material comprising a UV-nitride having an as-deposited etch resistance;(d) selectively positioning a masking material over the sidewall spacers on the first opposing side surface of the layer stack;(e) selectively implanting the exposed second, relatively wide sidewall spacer on the second opposing sidewall surface of the layer stack with impurities for reducing the etch resistance from its as-deposited state to a more readily-etchable state;(f) removing the masking material from over the sidewall spacers on the first opposing side surface of the layer stack;(g) selectively removing the reduced etching resistance, second relatively wide sidewall spacer layer by etching with dilute aqueous HF;(h) selectively implanting dopant impurities of a second, opposite conductivity type into exposed portions of the substrate surface adjacent the first relatively wide sidewall spacer and adjacent the second relatively narrow sidewall spacer to form a pair of spaced-apart, heavily-doped implants in the wafer;(i) removing the first relatively wide sidewall spacer by an etching process with dilute aqueous HF;(j) performing rapid thermal annealing to diffuse and activate the dopant impurities implanted in step (h), thereby forming a pair of heavily-doped source/drain regions in the wafer substrate, each having a junction therewith at a predetermined depth below the wafer surface, a first one of the pair of heavily-doped source/drain regions being laterally spaced away from a respective proximal edge of the gate insulating layer by a distance substantially equal to the width of the lower end of the relatively wide sidewall spacer adjacent the wafer surface and a second one of the pair of heavily-doped source/drain regions extending to just beneath a respective proximal edge of the gate insulating layer;(k) selectively implanting second, opposite conductivity type dopant impurities into the exposed portion of the wafer surface intermediate the gate insulating layer and the first, laterally spaced-away, heavily-doped source/drain region to form a lightly- or moderately-doped extension region; and(l) performing rapid thermal annealing to diffuse and activate the dopant impurities implanted in step (k), thereby forming a shallow-depth, lightly- or moderately-doped source/drain extension extending from the proximal edge of the first, laterally spaced-away, heavily-doped source/drain region to just beneath the respective proximal edge of the gate insulating layer.According to yet another aspect of the present invention, silicon-based, asymmetrically-configured MOS-type transistor devices formed by the method of the above-enumerated steps (a)-(l) are provided.According to still another aspect of the present invention, an asymmetrically-configured MOS-type transistor 
device comprises:(a) a semiconductor substrate of one conductivity type and having a surface;(b) a layer stack formed on a portion of the surface, the layer stack comprising:i. a thin gate insulating layer in contact with the substrate surface; andii. a gate electrode layer formed on the gate insulating layer; and(c) a pair of source and drain regions of opposite conductivity type formed within the substrate and extending to just beneath opposite edges of the gate insulating layer, wherein:i. a first one of the pair of source and drain regions comprises a first, heavily-doped portion laterally spaced away from the respective proximal edge of the gate insulating layer and having a relatively deep junction depth, and a second, shallow-depth, moderately or lightly-doped extension portion extending from the proximal edge of the first portion to just beneath the respective proximal edge of the gate insulating layer; andii. a second one of the pair of source and drain regions comprises a heavily-doped, relatively deep junction depth portion extending to just beneath the respective proximal edge of the gate insulating layer.In embodiments according to the invention, the semiconductor substrate comprises a monocrystalline silicon wafer of p or n first conductivity type, the thin gate insulating layer comprises a silicon oxide layer about 25-50 Å thick, and the gate electrode layer comprises heavily-doped polysilicon; and the first one of the pair of source and drain regions comprises a source region and the second one of the pair of source and drain regions comprises a drain region, or alternatively, the first one of the pair of source and drain regions comprises a drain region and the second one of the pair of source and drain regions comprises a source region.Additional advantages and aspects of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein only the preferred embodiment of the present invention is shown and described, simply by way of illustration of the best mode contemplated for carrying out the method of the present invention. As will be described, the present invention is capable of other and different embodiments, and its several details are susceptible of modification in various obvious respects, all without departing from the spirit of the present invention. Accordingly, the drawing and description are to be regarded as illustrative in nature, and not as limitative.BRIEF DESCRIPTION OF THE DRAWINGSFIGS. 1(A)-1(J) illustrate, in simplified, cross-sectional form, a sequence of processing steps for forming an asymmetrically-configured MOS-type transistor according to an embodiment of the present invention, wherein like reference numerals are employed throughout for designating like features and/or components of the invention.DESCRIPTION OF THE INVENTIONThe present invention addresses and solves problems arising from manufacturing submicron-dimensioned, asymmetrically-configured MOS and CMOS transistors suitable for use in high-density integration semiconductor devices, wherein, as part of the fabrication methodology, sidewall spacers which act as at least part of an implantation mask during the formation of moderately- to heavily-doped source/drain regions are removed, as by etching, prior to implantation for forming lightly- or moderately-doped source/drain extensions. 
More specifically, the present invention advantageously provides a significant and substantial reduction in the duration and corrosive severity of the requisite anisotropic etching step for selectively removing the implantation masking sidewall spacers, thereby increasing device reliability and manufacturing throughput, while decreasing fabrication cost and product yield problems associated with the conventional technology. In addition, the inventive method is fully compatible with other aspects of existing processing methodology.According to the present invention, a method of manufacturing asymmetrically-configured MOS and CMOS transistors is provided which utilizes a pair of relatively wide, removable sidewall spacer layers formed on opposing side surfaces of a gate insulator/gate electrode layer stack, one of the spacers comprising a dielectric material which has been selectively treated, as by ion implantation, to reduce the etching resistance thereof from its as-deposited state to an easily and rapidly etched state. Following easy removal of the treated sidewall spacer by etching, an ion implantation process is performed wherein the remaining one of the sidewall spacers performs a masking function for forming a moderately- to heavily-doped source/drain region spaced a predetermined distance away from the respective proximal edge of the gate insulator/gate electrode layer stack. By contrast, the heavily-doped source/drain region formed where the sidewall spacer has been selectively removed extends to just beneath the respective proximal edge of the gate insulator/gate electrode layer stack. The remaining sidewall spacer is then removed by etching prior to any post-implantation thermal annealing treatment for dopant activation and lattice damage relaxation, which thermal treatment may disadvantageously result in densification of the sidewall spacer material, with a concomitant increase in the etching resistance thereof. Thermal annealing treatment is performed subsequent to removal of the remaining spacer, followed by a second implantation process for forming a shallow-depth, lightly-doped source/drain extension extending from the proximal edge of the spaced-away, heavily-doped, source/drain region to just beneath the respective proximal edge of the gate insulator/gate electrode layer stack. Relatively thin, etch resistant spacers, which are retained throughout device processing for protecting the gate insulator/gate electrode layer stack from attack by corrosive etchant during removal of the relatively wide spacers (as well as during subsequent metallization for contact formation), may be optionally provided intermediate the opposing sidewall surfaces and the sidewall spacer layers.Referring now to FIGS. 1(A)-1(J), shown therein is a sequence of steps for performing an illustrative, but not limitative, MOS-based embodiment of the present invention, wherein similar reference numerals are used throughout to denote similar features. As will be apparent to one of ordinary skill in the art, the inventive process may be readily adapted for use in the manufacture of CMOS transistors and similar devices. Referring more particularly to FIG. 
1(A), in a first step, a MOS device precursor 1, fabricated according to conventional techniques not described herein in order not to unnecessarily obscure the present invention, is provided, comprising a doped monocrystalline silicon (Si) substrate 10 of first conductivity type (p or n), with a thin gate dielectric layer 11, typically of a silicon oxide about 25-50 Å thick, formed on a portion of the substrate surface destined to overlie the channel region of the transistor. Contiguous and forming a layer stack with the gate dielectric layer 11 is a thicker gate electrode layer 12, typically of heavily-doped polysilicon, for providing electrical contact to the gate or channel region. First, or inner, insulative sidewall spacer layers 13 of a relatively etch resistant dielectric material, preferably a silicon oxide, are optionally formed in conventional manner (as by blanket deposition or thermal growth of a silicon oxide layer over the entire substrate surface, followed by selective anisotropic etching of the horizontally aligned surfaces thereof), on each of the opposing side surfaces of the gate insulator/gate electrode layer stack 11/12. Sidewall spacer layers 13 are relatively narrow and formed with a predetermined tapered width profile, the widths thereof varying from relatively wide at the lower ends in contact with the surface of the substrate 10 to relatively narrow at the upper ends. In addition to silicon oxides, the first, or inner, sidewall spacers 13 may also be comprised of silicon nitride or silicon oxynitride materials. Further, notwithstanding the substantial resistance of the as-deposited, undensified first sidewall spacer layer materials to etching with, e.g., dilute aqueous HF, the etching resistance thereof against dilute aqueous HF may be further increased by subjecting the as-deposited first sidewall spacer layers 13 to a thermal treatment for densification, e.g., rapid thermal annealing (RTA), to yield low etching rates with dilute aqueous HF.Referring now to FIG. 1(B), a layer 14 of a second dielectric material, the as-deposited etching resistance of which against dilute aqueous HF is substantially lower than that of the first dielectric material, e.g., a UV-nitride, is blanket-deposited over the surface of the dielectric gate oxide/gate electrode layer stack 11/12. UV-nitride layers 14 typically are undensified as-deposited and fairly easily etched with dilute aqueous HF, e.g., 1:100 HF/H2O at moderate temperatures of from about 20 to about 35° C. Densified UV-nitride layers, such as are obtained subsequent to thermal annealing treatment at elevated temperatures, are substantially more difficult to etch under essentially equivalent conditions; consequently, according to the inventive method, device processing at elevated temperatures is avoided prior to removal of the second sidewall spacer layers, as will be explained below.FIG. 1(C) shows the result of anisotropic etching of the horizontally oriented surfaces of the as-deposited, undensified UV-nitride layer 14 using dilute aqueous HF according to the conditions described supra. Selective removal of the horizontally oriented portions of the UV-nitride layer overlying the substrate 10 surface and the upper surface of the gate electrode layer 12 results in the formation of relatively wide, approximately quarter circle-shaped second, or outer, sidewall spacer layers 14' in contact with the outer surfaces of the first insulative sidewall spacers 13. 
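The etch selectivity just described between the undensified UV-nitride and the densified inner spacers can be put into rough numbers; the patent gives only the qualitative behavior, so every rate and thickness in the following Python sketch is an assumed placeholder:

# Illustrative selectivity estimate for the 1:100 HF/H2O etch at about 20-35° C.
# Both rates and the thickness below are assumed values, not from the patent.
rate_undensified_uv_nitride = 100.0   # Å/min, assumed
rate_densified_inner_spacer = 2.0     # Å/min, assumed
outer_spacer_thickness = 500.0        # Å, assumed

removal_time_min = outer_spacer_thickness / rate_undensified_uv_nitride
selectivity = rate_undensified_uv_nitride / rate_densified_inner_spacer
print(f"outer spacer clears in ~{removal_time_min:.0f} min at ~{selectivity:.0f}:1 "
      "selectivity over the inner spacer (all values illustrative)")
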
The relatively wide second sidewall spacers 14' have a tapered width profile in cross-section varying from relatively wide at their lower ends in contact with the surface of the substrate 10 to relatively narrower at their upper ends. Inasmuch as the relatively wide second sidewall spacers 14' provide the major portion of the masking function of the sidewall spacers during subsequent dopant ion implantation, the conditions for the selective anisotropic etching of the as-deposited, undensified UV-nitride layer are selected so as to yield a desired width of the second sidewall spacer layers 14' at their lower ends, which width is selected for optimization of the subsequently formed lightly- or moderately-doped source/drain extension region of the asymmetrically configured transistor.Adverting to FIG. 1(D), and according to an essential feature of the present invention, following formation of second sidewall spacers 14' of predetermined width profile, a layer of a masking material, e.g., a photoresist layer PR, is selectively formed, as by conventional photolithographic masking and etching techniques not described herein for brevity, to cover a portion of substrate 10 surface and a portion of the gate insulator/gate electrode layer stack 11/12 and one of the associated sidewall spacer layer pairs 13/14'. Alternatively, a patterned ion implantation mask may be positioned above the substrate for defining areas to be selectively implanted. The exposed sidewall spacer layer pair 13/14' is then selectively treated, as by ion implantation thereinto, for further reducing the resistance of the outer, relatively wide spacer layer 14' against etching, e.g., with dilute aqueous HF at about room temperature. Stated differently, the exposed outer, relatively wide sidewall spacer 14' is effectively selectively treated by ion implantation thereinto to augment the difference in HF etch rate vis-a-vis the inner, relatively narrow sidewall spacer 13. Implantable species 15A which may be employed for increasing the etchability of, e.g., oxide and nitride-based undensified dielectric materials employed as the outer, relatively wide sidewall spacer 14' according to the inventive method, include Si⁺ and Ge⁺ ions, as well as p and n type dopants such as boron, phosphorus, arsenic, and antimony-containing ions. Suitable implantation dosages and energies of ions 15A can be selected for achieving satisfactory or optimum etching rates for a particular application. By way of illustration but not limitation, the etch rate with 1:100 HF/H2O (at 20-35° C.) of undensified, as-deposited UV-nitride outer sidewall spacers 14', formed as described above, is further increased as a result of implantation of Si⁺ or Ge⁺ ions. While the exact mechanism of the observed increase in etch rate is not known with certainty, and not desiring to be bound by any particular theory, it is believed that lattice damage of the undensified spacer material 14' upon ion implantation facilitates entry of etchant into the material along lattice faults and cleavage planes, resulting in increased etchant penetration. In any event, the ease with which the outer, relatively wide sidewall spacer 14' is etched, relative to the inner sidewall spacer 13, is substantially augmented as a consequence of the above-described selective implantation treatment.Referring now to FIG. 
1(E), the photoresist layer PR is removed, as by conventional techniques, and the treated exposed outer, relatively wide sidewall spacer layer 14' is removed by etching with dilute aqueous HF at an augmented removal rate as described above (the order of removal not being critical to the practice of the invention). Dopant-containing ions 15B of second conductivity type opposite that of substrate 10 are then selectively implanted (optionally via a suitably patterned implantation mask, not shown), with layer stack 11/12 and the remaining relatively wide outer sidewall spacer 14' acting as implantation masks, to form moderately to heavily-doped regions 16A and 16B. Region 16A extends to just beneath the respective proximal edge of spacer 14' and thus is laterally spaced away from the respective proximal edge of the gate insulator/gate electrode layer stack 11/12 by a distance which substantially corresponds to the width of the relatively wide spacer 14' at its lower end adjacent the substrate 10 surface, whereas region 16B extends to just beneath the respective proximal edge of layer stack 11/12, thereby constituting an asymmetrically-configured structure. The heavy dopant ion implantation is performed at a dosage and energy selected for optimal transistor performance. For example, by way of illustration but not limitation, for a p-type Si substrate intended to comprise an n-channel transistor, n-type dopant impurities 15B, typically chosen from phosphorus (P), arsenic (As), and antimony (Sb), are implanted at a dosage of from about 5×10¹⁴ to about 5×10¹⁵ atoms/cm² at an energy of from about 40 to about 60 keV. Conversely, for an n-type Si substrate intended to comprise a p-channel transistor, p-type dopant impurities 15B (typically boron) are implanted at similar dosages but at lower energies of from about 20 to about 40 keV.With reference now to FIG. 1(F), and according to another essential feature of the instant invention, in the next step of the process sequence, the remaining undensified outer, relatively wide sidewall spacer 14' is removed by etching, e.g., with dilute aqueous HF in the case of UV-nitride spacers. Inasmuch as the spacer removal step is performed on undensified (i.e., non-heat treated) spacer material, etching is performed at ambient temperatures (e.g., 20-35° C.) and at removal rates, such as described above, which result in significantly less undesired collateral etching, wafer damage, and defect formation. For example, removal by etching of the remaining undensified, as-deposited (i.e., not augmented by ion implantation) UV-nitride spacer 14' can be readily accomplished with 1:100 HF/H2O at 20-35° C. As compared with the about 1.5 hour interval required for etching of silicon nitride spacers with hot H3PO4 at 180° C. according to the prior art, the process of the instant invention provides a substantial and significant reduction in the requisite duration and severity of etching conditions, thereby simultaneously increasing manufacturing throughput and reducing corrosive attack and resultant damage to the wafer workpiece. As illustrated, the densified, relatively etch-resistant, first, or inner, sidewall spacers 13 are substantially unaffected by the aforesaid etching process and remain in place throughout subsequent device fabrication processing.Referring now to FIG. 
1(G), following removal of sidewall spacer 14', the thus-formed transistor precursor is treated, as by rapid thermal annealing (RTA), so as to form asymmetrically-configured source/drain regions 17A and 17B, each having a predetermined junction depth below the substrate 10 surface. In the case of moderately to heavily-doped n-type source/drain regions 17, RTA is conducted at a temperature of from about 1,000 to about 1,100° C., typically about 1,050° C., for from about 10 to about 45 seconds, typically about 30 seconds, to activate and diffuse the implanted dopant ions 15B and reduce/relax lattice damage, stress, and distortion resulting from the implantation process. In the case of moderately to heavily-doped p-type source/drain regions 17A,17B, RTA is performed at a temperature of from about 900 to about 1,000° C., typically about 1,000° C., for from about 10 to about 45 seconds, typically about 10 seconds, for effecting dopant activation, diffusion, and lattice damage relaxation.Referring now to FIG. 1(H), in the next step according to the inventive method, dopant-containing ions 15B of opposite, i.e., second, conductivity type to that of semiconductor substrate 10 are selectively implanted, as by use of a patterned implantation mask (not shown), to form a shallow, lightly or moderately-doped extension region 18 ("extension implant") in the space between the laterally spaced-away, moderately to heavily-doped first source/drain region 17A and just underlying the respective proximal edge of the gate insulator/gate electrode layer stack 11/12. Implantation is performed at lower dosages and energies than previously employed for forming source/drain regions 17A, 17B. In the case of n-type dopant impurities 15B, implantation is performed at a dosage of from about 5×10¹³ to about 5×10¹⁴ atoms/cm² and at an energy of from about 10 to about 30 keV. In the case of p-type dopant impurities 15B, implantation is performed at a similar dosage but at a lower energy of from about 5 to about 10 keV.With reference to FIG. 1(I), the thus-implanted asymmetrically configured MOS-type structure is then subjected to a thermal treatment, typically RTA, for activating/diffusing the implanted dopant impurities 15B in the shallow, lightly or moderately-doped extension region 18 and for relaxation of lattice damage and stress resulting from the implantation process, thereby forming shallow, lightly or moderately-doped source/drain extension 19 with junction 19' depth.As shown in FIG. 1(J), outer, relatively wide insulative sidewall spacers 20 are then re-formed over the inner, relatively narrow sidewall spacers 13, as by conventional techniques not described herein for brevity, along the opposing side surfaces of the gate insulator/gate electrode layer stack 11/12, for protecting the layer stack and source/drain extension 19 during subsequent processing, e.g., for contact formation and metallization processing. The re-formed second sidewall spacers 20 may comprise one or more dielectric material layers selected from polysilicon, silicon oxides, silicon nitrides, silicon oxynitrides, and UV-nitrides.The present invention thus enables formation of reliable, defect-free submicron-dimensioned, asymmetrically-configured MOS transistors at increased rates of manufacturing throughput, by utilizing an augmented etch rate material for one of the outer, relatively wide dielectric sidewall spacers, which material provides a very significant reduction in the time necessary for etching processing. 
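For convenience, the implant and RTA windows quoted in the embodiment above can be collected in one place; the numeric ranges below are transcribed from the text, while the Python table structure itself is merely an illustrative convenience:

# Implant and RTA process windows quoted in the embodiment above.
PROCESS_WINDOWS = {
    "heavy_sd_implant_n":  {"dose_atoms_cm2": (5e14, 5e15), "energy_keV": (40, 60)},
    "heavy_sd_implant_p":  {"dose_atoms_cm2": (5e14, 5e15), "energy_keV": (20, 40)},
    "sd_rta_n":            {"temp_C": (1000, 1100), "time_s": (10, 45)},
    "sd_rta_p":            {"temp_C": (900, 1000),  "time_s": (10, 45)},
    "extension_implant_n": {"dose_atoms_cm2": (5e13, 5e14), "energy_keV": (10, 30)},
    "extension_implant_p": {"dose_atoms_cm2": (5e13, 5e14), "energy_keV": (5, 10)},
}

for step, window in PROCESS_WINDOWS.items():
    print(step, window)
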
In addition, the etching is performed under milder conditions, vis-a-vis the hot phosphoric acid etching according to the prior art, whereby deleterious effects due to long periods of exposure to the hot etchant are significantly reduced or eliminated.The present invention is applicable to the formation of various types of submicron-dimensioned, asymmetrically-configured transistors, including CMOS transistors as well as MOS transistors, and is fully compatible with conventional process flow for automated manufacture of high-density integration semiconductor devices.In the previous description, numerous specific details are set forth, such as specific materials, structures, reactants, processes, etc. in order to provide a better understanding of the present invention. However, the present invention can be practiced without resorting to the details specifically set forth. In other instances, well-known processing materials and techniques have not been described in order not to unnecessarily obscure the present invention.Only the preferred embodiment of the present invention and but a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the present invention is capable of use in various other combinations and environments and is susceptible of changes or modifications within the scope of the inventive concept as expressed herein. |
A transistor device on an SOI wafer includes a metal connect that is in contact with an underside (a bottom surface) of a body of the device. A part of the metal connect is between an active semiconductor region of the device and an underlying buried insulator layer. The metal connect is also in contact with a source of the device, thereby providing some electrical coupling between the source and the body, and as a result reducing or eliminating floating body effects in the device. A method of forming the metal interconnect includes etching away part of the buried insulator layer, for example by lateral etching or isotropic etching, and filling with metal, for example by chemical vapor deposition. |
What is claimed is: 1. A method of forming a semiconductor-on-insulator (SOI) device, comprising:forming a source, a drain, and a body in an active semiconductor region atop an insulator layer of an SOI wafer; and forming a metal connector having a part between the insulator layer and at least part of the active region; wherein the part of the metal connector is in direct contact with a bottom surface of the body. 2. The method of claim 1, wherein the metal connector is also in contact with the source.3. The method of claim 2, wherein the metal connector is in contact with the source along a bottom surface of the source and along a side surface of the source.4. The method of claim 1, wherein the forming the metal connector includes removing a portion of the insulator layer which underlies at least part of the active region, and filling the portion with metal.5. The method of claim 4, wherein the removing the portion includes etching the portion using a lateral or isotropic etch of the insulator layer.6. The method of claim 5, further comprising, prior to the etching the portion, exposing a source-side exposed area of the insulator layer on a source side of the active region, and wherein the etching includes etching through the exposed area.7. The method of claim 6, wherein the exposing includes exposing a drain-side exposed area on a drain side of the active region; further comprising masking the drain-side exposed area prior to the etching the portion, and unmasking the drain-side exposed area after the etching the portion; and wherein the filling includes depositing metal, the depositing metal includes depositing metal to make another metal connector, and the another metal connector is in direct contact with the drain.8. The method of claim 1, wherein the forming the metal connector includes forming the part of the metal connector with a curved interface between the part and the insulator layer.9. A method of forming a semiconductor-on-insulator (SOI) device, comprising:forming a source, a drain, and a body in an active semiconductor region atop an insulator layer of an SOI wafer; forming a hollow underneath the active semiconductor region; and filling the hollow with metal to form a metal connector having a part between the insulator layer and at least part of the active region. 10. The method of claim 9, further comprising, prior to forming the hollow, making an opening to expose a portion of the insulator layer; wherein the hollow is in communication with the opening.11. The method of claim 10, wherein the filling includes filling at least part of the opening with the metal.12. The method of claim 11,wherein the part of the metal connector formed by the filling is in direct contact with a bottom surface of the body; and wherein a portion of the metal connector formed by the filling of the opening is in direct contact with a side surface of the source. 13. The method of claim 9, wherein the forming the hollow includes forming the hollow with a curved interface between the hollow and the insulator layer.14. The method of claim 13, wherein the part of the metal connector formed by the filling is in direct contact with a bottom surface of the body.15. The method of claim 9, wherein the forming the hollow includes removing some of the insulator layer by etching.16. The method of claim 15, wherein the forming the hollow includes maintaining a portion of the insulator layer between the hollow and a substrate of the SOI wafer, such that the hollow is not in contact with the substrate.17. 
17. The method of claim 9, wherein the filling the hollow includes filling the hollow with tungsten.

18. The method of claim 9, further comprising forming a second metal connector in contact with a side surface of the active region.

19. A method of forming a semiconductor-on-insulator (SOI) device, comprising: forming a source, a drain, and a body in an active semiconductor region atop an insulator layer of an SOI wafer; making an opening to expose a portion of the insulator layer; forming a hollow underneath the active semiconductor region, wherein the hollow is in communication with the opening; filling the hollow and at least part of the opening with metal to form a first metal connector on a first side of the active region; and forming a second metal connector on a second side of the active region; wherein the first metal connector is in direct contact with the source and the body; wherein the second metal connector is in direct contact with the drain; wherein a part of the first metal connector is between the insulator layer and at least part of the active region; wherein the part of the first metal connector is in direct contact with a bottom surface of the body; and wherein the forming the hollow includes forming the hollow with a curved interface between the hollow and the insulator layer. |
This application is a division of U.S. application Ser. No. 09/773,037, now U.S. Pat. No. 6,441,435, filed Jan. 31, 2001.

BACKGROUND OF THE INVENTION

1. Technical Field

The present invention relates generally to semiconductor-on-insulator (SOI) devices and methods of making, and more specifically to SOI transistor devices having body contacts.

2. Description of the Related Art

Conventional or bulk semiconductor devices are formed in semiconductor material by implanting a well of either P-type or N-type conductivity silicon in a silicon substrate wafer of the opposite conductivity. Gates and source/drain diffusions are then manufactured using commonly known processes. These form devices known as metal-oxide-semiconductor (MOS) field effect transistors (FETs). When a given chip uses both P-type and N-type transistors, it is known as a complementary metal-oxide-semiconductor (CMOS) chip. Each of these transistors must be electrically isolated from the others in order to avoid shorting the circuits. A relatively large amount of surface area is needed for the electrical isolation of the various transistors, which is undesirable given the current industry goals for size reduction. Additionally, junction capacitance between the source/drain and the bulk substrate and "off" state leakage from the drain to the source both increase power consumption. Junction capacitance also slows the speed at which a device using such transistors can operate. These problems result in difficulties in reducing the size, power consumption, and voltage of CMOS technology devices.

In order to deal with the junction capacitance and "off" state leakage problems as well as to obtain reduced size, semiconductor-on-insulator (SOI) technology has been gaining popularity. An SOI wafer may be formed from a bulk silicon wafer by using conventional oxygen implantation techniques to create a buried oxide layer at a predetermined depth below the surface. The implanted oxygen oxidizes the silicon into insulating silicon dioxide in a Gaussian distribution pattern centered at the predetermined depth to form the buried oxide layer. Field effect transistors formed on SOI substrates also may be able to achieve higher speed operation with higher drive currents, when compared with FETs formed on conventional bulk silicon substrates.

However, one problem with forming field effect transistors on an SOI wafer is the floating body effect. The floating body effect occurs because the buried oxide layer isolates the body of the transistor from the fixed-potential silicon substrate, and therefore the body takes on charge based on recent operation of the transistor. The floating body effect causes the threshold voltage for operating the transistor to fluctuate, which in turn causes the current-to-voltage curve for the transistor to distort or kink. This problem is particularly apparent for passgate devices such as those used in dynamic random access memory (DRAM), wherein it is critical that the threshold voltage remain fixed such that the transistor remains in the "off" position to prevent charge leakage from the storage capacitor.

One way of controlling floating body effects is to make a body contact, an electrical contact to the body that can be tied to an external voltage source. One known method of making a body contact is to extend the body to a relatively large area beyond a gate. An example of such a body contact is shown in U.S. Pat. No. 5,317,181, to Tyson.
However, a body contact arrangement such as that disclosed in Tyson disadvantageously requires a relatively large amount of space on the chip.

An alternative body contact is described in U.S. Pat. No. 5,965,917, to Maszara et al., wherein a metal conductor directly contacts the sides of both a source or drain and a body of a transistor device, thereby providing a body contact that can be used to control floating body effects. However, the arrangement described in Maszara et al. requires the body to extend to the side of an active silicon region of the transistor, fully under the source or drain. Thus it cannot be used where the source and drain extend fully down to a buried insulator layer.

Accordingly, there is a strong need in the art for a body contact that does not include the disadvantages of the prior art devices.

SUMMARY OF THE INVENTION

A transistor device on an SOI wafer includes a metal connect that is in contact with an underside (a bottom surface) of a body of the device. A part of the metal connect is between an active semiconductor region of the device and an underlying buried insulator layer. The metal connect is also in contact with a source of the device, thereby providing some electrical coupling between the source and the body, and as a result reducing or eliminating floating body effects in the device. A method of forming the metal connect includes etching away part of the buried insulator layer, for example by lateral etching or isotropic etching, and filling with metal, for example by chemical vapor deposition.

According to an aspect of the invention, a semiconductor-on-insulator (SOI) device includes a semiconductor substrate; an insulator layer over the semiconductor substrate; an active semiconductor region over the insulator layer, the active semiconductor region including a source, a drain, and a body between the source and the drain; and a metal connector, wherein part of the metal connector is directly in contact with the body and is interposed between the insulator layer and at least part of the body.

According to another aspect of the invention, a semiconductor-on-insulator (SOI) device includes a semiconductor substrate; an insulator layer over the semiconductor substrate; an active semiconductor region over the insulator layer, the active semiconductor region including a source, a drain, and a body between the source and the drain, wherein the source extends from a top surface of the active layer to a bottom surface of the active layer; and a metal connector, wherein part of the metal connector is directly in contact with the source and the body along the bottom surface, and wherein the metal connector is not in contact with the substrate.

According to yet another aspect of the invention, a method of forming a semiconductor-on-insulator (SOI) device includes the steps of forming a source, a drain, and a body in an active semiconductor region atop an insulator layer of an SOI wafer; and forming a metal connector having a part between the insulator layer and at least part of the active region.

To the accomplishment of the foregoing and related ends, the invention comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative embodiments of the invention. These embodiments are indicative, however, of but a few of the various ways in which the principles of the invention may be employed.
Other objects, advantages, and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

In the annexed drawings:

FIG. 1 is a cross-sectional view of a semiconductor device in accordance with the present invention; and

FIGS. 2-10 are cross-sectional views of various steps in a method of fabricating the semiconductor device of FIG. 1.

DETAILED DESCRIPTION

A transistor device on an SOI wafer includes a metal connect which is in direct contact with the undersides (bottom surfaces) of a source and a body of the device. The metal connect wraps around a side surface of the source and extends partially between an active semiconductor region of the device and an underlying buried insulator layer. The metal connect provides some electrical coupling between the source and the body, thereby reducing or eliminating floating body effects in the device.

Referring initially to FIG. 1, a semiconductor device 10 includes an SOI wafer 12 with a transistor 14 formed thereupon. The SOI wafer 12 includes a semiconductor substrate 16 and a surface semiconductor layer 18, with a buried insulator layer 20 therebetween. The semiconductor substrate 16 and the surface semiconductor layer 18 may be made of silicon, and the buried insulator layer 20 may be made of a silicon oxide such as SiO2, although it will be appreciated that other suitable materials may be used instead or in addition.

The transistor 14 includes a gate 22 formed on an active semiconductor region 24 of the surface semiconductor layer 18. The gate 22 includes a gate dielectric 26 and a gate electrode 28. In addition, spacers 30 and 32 are on respective opposite sides of the gate 22. Exemplary materials for the gate dielectric 26 are SiO2 and Si3N4. The gate electrode 28 may be made of polysilicon or another semiconductor, or may be made in whole or in part of metal. An exemplary material for the spacers 30 and 32 is SiN.

The active region 24 includes a body 38, with a source 40 and a drain 42 on respective opposite sides of the body. The source 40 and the drain 42 have respective source and drain extensions 46 and 48. The body includes a surface channel region 50 operatively coupled to the source extension 46 and the drain extension 48. As is conventional, the body 38 is primarily of different conductivity semiconductor material than the source 40 and the drain 42. For instance, the body 38 may be P-conductivity silicon while the source 40 and the drain 42 may be N-conductivity silicon. Alternatively, the body 38 may be N-conductivity silicon while the source 40 and the drain 42 may be P-conductivity silicon. As shown in FIG. 1, the source 40 and the drain 42 may both extend from a top surface of the active region 24 to a bottom surface of the active region.

The body 38, the source 40, and the drain 42 are operatively coupled with the gate 22 to function as a transistor. The source 40 and the drain 42 have respective source and drain electrically-conducting metal-semiconductor compound regions 54 and 56 (also referred to as "silicide regions") to facilitate electrical connection to the source and drain. The gate electrode 28 likewise may include an upper conductive portion 60 to facilitate electrical connection.

The active region 24 is laterally isolated from other structures of the device 10 by insulator-filled trenches (not shown) on opposite sides of the active region.
The insulator-filled trenches may be trenches filled with silicon dioxide (SiO2) using known shallow trench isolation (STI) techniques.

The device 10 includes a source-side metal connect 80 and a drain-side metal connect 82 on respective opposite sides of the active region 24. The metal connects 80 and 82 pass through a dielectric layer 86.

The source-side metal connect 80 is in contact with a side surface 90 of the source 40 all the way down to the insulator layer 20. Thus the source-side metal connect 80 is electrically connected to the source 40, and in particular to the source silicide region 54. Similarly, the drain-side metal connect 82 is in direct contact with a side surface 92 of the drain 42 all the way down to the insulator layer 20. Thus the drain-side metal connect 82 is electrically connected to the drain 42, and in particular to the drain silicide region 56.

The source-side metal connect 80 also has a protruding portion 94 which is between the semiconductor layer 18 and the insulator layer 20. In particular, a part of the protruding portion 94 is between the active region 24 and the underlying portion of the insulator layer 20. This part of the protruding portion 94 is in direct contact with a bottom surface (underside) 98 of the active region 24, in particular with the bottom surfaces of both the source 40 and the body 38. Thus the source-side metal connect 80 is electrically connected to the body 38, and electrically couples the body 38 to the source 40. The electrical connection to the body 38 reduces the tendency of the body to build up a floating body potential, and thus reduces floating body effects in the transistor 14. The protruding portion 94 is not in contact with the semiconductor substrate 16.

The connects 80 and 82 may be made of a conductive metal, such as tungsten. It will be appreciated that alternatively the connects 80 and 82 may be made of one or more of a variety of other suitable conductive materials.

It will be appreciated that many variants on the above-described structure of the metal connects 80 and 82 are possible. For example, one or both of the metal connects 80 and 82 may be in contact with a top surface of the active region 24. As another example, the drain-side connect 82 may be in contact with only a part of the drain side surface 92, and/or may be in contact with a top surface of the drain 42. The protruding portion 94 of the source-side metal connect 80 may be asymmetric about the remainder of the source-side metal connect.

Various steps in the fabrication of the above-described semiconductor device 10 are illustrated in FIGS. 2-10. Referring initially to FIG. 2, starting with the SOI wafer 12, a light doping of the surface semiconductor layer 18 is performed to create a channel-doped surface layer 100. It will be appreciated that the channel doping may be omitted if it is not required for controlling the threshold voltage of the resulting device. Then, also as shown in FIG. 2, the gate 22 is formed on the SOI wafer 12. The gate 22, including the gate dielectric 26 and the gate electrode 28, may be formed through well-known processes such as deposition of material, for example using low pressure chemical vapor deposition (LPCVD), followed by selective removal through well-known processes such as lithographic processes.

Insulator-filled trenches may then be created in the SOI wafer 12 to define and laterally isolate the active region 24 of the surface semiconductor layer 18.
The insulator-filled trenches may be formed using conventional, well-known shallow trench isolation (STI) techniques. An exemplary process for forming an insulating trench includes forming a thin layer of oxide, approximately 150-200 Angstroms thick, on the wafer surface 101 and a top surface of the gate 22, and forming a silicon nitride mask thereon. The mask covers and protects the substrate in the area where the active region 24 is to be formed while leaving exposed the area where the insulator-filled trenches are to be formed.

Thereafter, the unmasked portions of the semiconductor surface layer 18 (e.g., the portions where the silicon nitride mask has been etched away) are etched away to form an open trench extending at least past the upper surface of the buried insulator layer 20. The etching process for a silicon substrate is typically an anisotropic dry etch using hydrogen bromide (HBr), which has selectivity characteristics such that it etches the silicon substrate but not the silicon nitride mask.

The open trench is filled by depositing silicon dioxide (SiO2), formed by a chemical reaction involving SiH4 or TEOS, to form the insulating trenches. After filling the open trench, the surface of the wafer is polished using a chemical mechanical polish to remove any excess silicon dioxide and the remaining silicon nitride mask.

It will be appreciated that the trenching may be performed at another point in the process, either earlier or later, if desired.

Thereafter, as illustrated in FIGS. 3-5, well-known suitable means are employed for formation of the source 40 and the drain 42. Portions of the silicon on opposing sides of the channel region that are not masked by the gate 22 may then be doped to produce the source 40 and the drain 42. Such doping may be performed in a two-step process, with a low-energy doping 102 (FIG. 3) to create the extensions 46 and 48, followed by formation of the spacers 30 and 32 (FIG. 4), and then a high-energy doping 104 (FIG. 5) to create the remainder of the source 40 and the drain 42. Because the ions cannot penetrate the gate 22, the gate effectively operates as a doping mask, protecting the region of the semiconductor layer 18 underneath the gate from doping.

To form the spacers 30 and 32, a conformal dielectric layer (e.g., SiN) may be deposited on the SOI wafer 12 and on the gate 22. Parts of the dielectric layer are then selectively removed to leave respective gate source-side and drain-side spacers 30 and 32. The deposition of the dielectric material and its selective removal may be accomplished by conventional means, for example chemical vapor deposition (CVD), such as LPCVD or plasma enhanced chemical vapor deposition (PECVD), of the dielectric material, followed by anisotropic etching using suitable, well-known etchants, an exemplary etchant being CHF3.

Alternatively, tilted implants may be used to form the source extension 46 and the drain extension 48.

Turning now to FIG. 6, the silicide regions 54 and 56 are then formed. Silicidation may be accomplished as follows. A layer of metal is deposited upon the gate 22, the spacers 30 and 32, and the exposed portions of the surface semiconductor layer 18. The metal layer may be of a metal such as titanium, cobalt, or nickel, which is suitable for forming a conducting compound, such as a silicide, with the semiconductor material.
The metal layer may be deposited, for example, by sputtering.

Then a compound such as a silicide is formed between the metal of the metal layer and the exposed portions of the surface semiconductor layer 18. Suitable methods for formation of such electrically-conducting compounds (e.g., silicidation) are well known, an exemplary method being raising the temperature of the semiconductor device 10 to a suitable level for a suitable length of time (annealing). An exemplary temperature is between about 500 and 700° C., and an exemplary suitable length of time is between 10 seconds and 10 minutes. Rapid thermal annealing (RTA) may also be employed, for example subjecting the semiconductor device 10 to a temperature between 600 and 900° C. for about 5 to 120 seconds. It will be appreciated that other temperatures and heating times may be employed. Finally, excess metal of the metal layer is removed by conventional, well-known means.

As illustrated in FIG. 7, the dielectric layer 86 is deposited and planarized. This may occur first by deposition of a layer of insulator material, for example silicon nitride, by a process such as CVD. Then well-known chemical-mechanical polishing (CMP) processes may be employed to planarize the surface of the layer.

Then, as shown in FIG. 8, openings 114 and 116 are etched into the dielectric layer 86 and the surface semiconductor layer 18, to allow access to the sides 90 and 92 of the active region 24, and to a portion of the insulator layer 20. The etching to form the openings 114 and 116 may include one or more dry etch processes such as plasma etching, ion milling, or reactive ion beam etching, and/or may include other processes suitable for carrying out the invention.

Referring now to FIG. 9, a mask element 120 is created to mask off the opening 116 for the subsequent etching step shown in FIG. 10 and described below. The mask element 120 may be formed by well-known lithographic processes such as photolithography: a layer of resist material such as photoresist may be deposited; then the photoresist may be selectively exposed, with the exposed or unexposed photoresist removed to leave the mask element 120. It will be appreciated that other suitable methods for forming the mask element 120 may be employed.

As shown in FIG. 10, etching is used to form a hollow 124 in the insulator layer 20. The etching is performed through the opening 114, and may include lateral or isotropic etching of the insulator layer 20. An example of a suitable etchant is HF. It will be appreciated that the exposed side surfaces of the opening 114 may have a material deposited on them that is resistant to the etchant.

Following the etching to create the hollow 124, the mask element 120 is removed, for example by use of well-known solvents for stripping photoresist, and the metal connects 80 and 82 are then formed. The connects 80 and 82 may be formed by a metal deposition process, for example by chemical vapor deposition (CVD). The resulting structure is that shown in FIG. 1 and described above.

It will be appreciated that the above-described structure and method are only exemplary, and that many suitable variations may be employed. For example, the semiconductor material may be silicon or another suitable semiconductor material.
It may be possible to substitute oxides for nitrides, and/or vice versa, in the above structure and/or in the above fabrication method.

Although the invention has been shown and described with respect to a certain embodiment or embodiments, it is obvious that equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In particular regard to the various functions performed by the above described elements (components, assemblies, devices, compositions, etc.), the terms (including a reference to a "means") used to describe such elements are intended to correspond, unless otherwise indicated, to any element which performs the specified function of the described element (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary embodiment or embodiments of the invention. In addition, while a particular feature of the invention may have been described above with respect to only one or more of several illustrated embodiments, such feature may be combined with one or more other features of the other embodiments, as may be desired and advantageous for any given or particular application. |
Described apparatuses and methods enable communication between a host device (104) and a memory device (108) to establish relative delays between different data lines (304). If data signals propagate along a bus with the same timing, simultaneous switching output (SSO) and crosstalk can adversely impact channel timing budget parameters. An example system includes an interconnect (106) having multiple data lines that couple the host device to the memory device. In example operations, the host device can transmit to the memory device a command (122) indicative of a phase offset between two or more data lines (304, 306) of the multiple data lines. The memory device can implement the command by transmitting or receiving signals via the interconnect with different relative phase offsets (312) between data lines (314, 316). The host device (e.g., a memory controller) can determine appropriate offsets for a given apparatus. Lengths of the offsets can vary. Further, a system can activate the phase offsets based on frequency. |
CLAIMS

What is claimed is:

1. A method comprising: receiving a command indicative of a phase offset for a first data line of multiple data lines relative to a second data line of the multiple data lines, the multiple data lines associated with a memory array; and transmitting data on the multiple data lines based on the command.

2. The method of claim 1, further comprising: mapping the command to an entry of multiple entries of a register, the entry corresponding to respective phase offset indications for respective data lines of the multiple data lines; and transmitting the data on the multiple data lines based on the respective phase offset indications of the entry.

3. The method of claim 1 or claim 2, further comprising: receiving another command indicative of another phase offset for the first data line of the multiple data lines relative to the second data line of the multiple data lines; and receiving other data on the multiple data lines based on the other command.

4. The method of claim 3, further comprising: latching the other data from data signals, which are propagated over the multiple data lines, using one or more clock signals for individual data lines of the multiple data lines, wherein the one or more clock signals are based on other phase offset indications that correspond to the other command.

5. The method of any one of claims 1-4, further comprising: transmitting second data on the second data line before first data on the first data line across multiple memory read operations based on the phase offset for the first data line as indicated by the command.

6. The method of any one of claims 1-5, further comprising: transmitting a second data bit on the second data line of the multiple data lines based on the command; and delaying transmission of a first data bit on the first data line of the multiple data lines relative to the transmitting of the second data bit based on the command.

7. The method of any one of claims 1-6, further comprising: determining a first clock signal for the first data line of the multiple data lines based on the command; determining a second clock signal for the second data line of the multiple data lines based on the command; transmitting a first data bit on the first data line based on the first clock signal; and transmitting a second data bit on the second data line based on the second clock signal.

8. A method comprising: reading, from a memory device, read data having one or more phase offsets corresponding to multiple data lines of an interconnect; determining a phase offset for a first data line of the multiple data lines relative to a second data line of the multiple data lines based on the reading; and transmitting to the memory device a command indicative of the phase offset.

9. The method of claim 8, further comprising: writing, to the memory device, write data; comparing the read data to the write data; and determining the phase offset based on the comparing.
10. The method of claim 8 or claim 9, further comprising: reading first data of the read data according to an entry of multiple entries of a register defining the one or more phase offsets corresponding to the multiple data lines; reading second data of the read data according to another entry of the multiple entries of the register defining the one or more phase offsets corresponding to the multiple data lines; selecting between at least the entry of the multiple entries and the other entry of the multiple entries based on at least one criterion; and determining the command based on the selecting.

11. A memory device comprising: an interface configured to be coupled to an interconnect including multiple data lines; a register having multiple entries, an entry of the multiple entries configured to store respective phase offset indications for respective data lines of the multiple data lines; a memory array configured to store data; and logic circuitry configured to communicate with the memory array, the register, and the interface, the logic circuitry configured to process memory operations with the respective phase offset indications based on a command, the command mapped to the entry of the multiple entries.

12. The memory device of claim 11, wherein: the logic circuitry comprises multiple delay circuits, respective delay circuits of the multiple delay circuits corresponding to respective data lines of the multiple data lines.

13. The memory device of claim 11 or claim 12, wherein the logic circuitry comprises: clock tree circuitry comprising multiple clock lines corresponding to multiple phase offsets; at least one multiplexor coupled to the multiple clock lines, the at least one multiplexor configured to output a clock signal having a phase offset corresponding to one of the respective phase offset indications based on the command; and at least one latch configured to receive the clock signal from the at least one multiplexor and forward the data according to the clock signal.

14. A host device comprising: an interface configured to be coupled to an interconnect including multiple data lines; and a memory controller coupled to the interface and including control logic, the memory controller configured to perform memory operations according to different phase offsets across the multiple data lines, the control logic configured to: transmit to a memory device a command indicative of a phase offset for a first data line of the multiple data lines relative to a second data line of the multiple data lines.

15. The host device of claim 14, wherein: the command is indicative of a first set of phase offsets for the memory device to receive data; the control logic is configured to transmit data to the memory device in accordance with a second set of phase offsets different from the first set of phase offsets; and the second set of phase offsets is inverse to the first set of phase offsets.

16. The host device of claim 14 or claim 15, wherein: the command maps to an entry of multiple entries of a register, the entry corresponding to respective phase offset indications for respective data lines of the multiple data lines; and the command is indicative of at least one phase offset defined in the entry for the first data line of the multiple data lines relative to the second data line of the multiple data lines. |
PROGRAMMABLE MEMORY TIMING

BACKGROUND

[0001] Computers, smartphones, and other electronic devices operate using processors and memories. A processor executes code based on data to run applications and provide features to a user. The processor obtains the code and the data from a memory that can store information. Thus, like a processor's speed or number of cores, a memory's characteristics can impact the performance of an electronic device. Different types of memory have different characteristics. Memory types include volatile memory and nonvolatile memory, such as random-access memory (RAM) and flash memory, respectively. RAM can include static RAM (SRAM) and dynamic RAM (DRAM).

[0002] Demands on the different types of memory continue to evolve and grow. For example, as processors are engineered to execute code faster, such processors can benefit from accessing memories more quickly. Applications may also operate on ever-larger data sets that use ever-larger memories. Due to battery-powered electronic devices and power-hungry data centers, energy-usage constraints are becoming more prevalent for memory systems. Further, manufacturers may seek smaller memories as the form factors of portable electronic devices continue to shrink. Accommodating these various demands is complicated by the diverse strengths and capabilities of different types of memories.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] Apparatuses and techniques for programmable memory timing are described with reference to the following drawings. The same numbers are used throughout the drawings to reference like features and components:

Fig. 1 illustrates example apparatuses that can implement programmable memory timing;

Fig. 2 illustrates example computing systems that can implement aspects of programmable memory timing;

Fig. 3-1 illustrates examples of an interconnect, a host device, and a memory device that can implement aspects of programmable memory timing, including by communicating a command indicative of a phase offset;

Fig. 3-2 illustrates additional examples of an interconnect, a host device, and a memory device that can implement aspects of programmable memory timing, including by activating one or more phase offset circuits for communicating data signals based on a command;
Fig. 3-3 illustrates examples of data signals propagating over an interconnect between a host device and a memory device to implement aspects of programmable memory timing;

Fig. 4 illustrates an example register that a host device can use to implement aspects of programmable memory timing;

Fig. 5 illustrates an example register that a memory device can use to implement aspects of programmable memory timing;

Fig. 6 illustrates an example clock tree that a host device can use to implement aspects of programmable memory timing;

Fig. 7 illustrates an example clock tree that a memory device can use to implement aspects of programmable memory timing;

Fig. 8 illustrates an example method that can implement aspects of programmable memory timing; and

Fig. 9 illustrates another example method that can implement aspects of programmable memory timing.

DETAILED DESCRIPTION

OVERVIEW

[0004] Computing systems can include a host device and a memory device. In some cases, the host device includes a processor and a memory controller, and the memory device includes a memory array to store data and other information. The memory controller can interact with the memory device to cause the memory device to hold information on behalf of the processor, perhaps many gigabytes of information that is stored in RAM. The memory controller communicates with the memory device via an interconnect that couples the host device to the memory device. For example, the memory controller can send commands to the memory device over the interconnect. Commands can relate to write operations to store data in the memory array and to read operations to retrieve data from the memory array. Commands can also direct the memory device to operate in a particular manner, engage a certain mode, or activate some functionality.

[0005] The interconnect can include multiple data lines that extend between the host device and the memory device. The memory controller and the memory device can, therefore, transmit data at least partly in parallel by transmitting multiple bits over the multiple data lines. The parallel transmission, or at least partially simultaneous transmission, increases the rate of data transfer between the devices, which can increase the performance of the computing system. The parallel transmission over the multiple data lines can, however, create challenges.
Assume, for example, that the multiple data lines are proximate one another on a printed circuit board (PCB) or other substrate that is associated with one or more intrinsic permittivities (εr). These permittivities can affect simultaneous switching output (SSO) noise and crosstalk, which can adversely impact signal propagation along the multiple data lines. For instance, SSO noise and crosstalk can interfere with data being transmitted between the host and memory devices such that data is not accurately received at a destination device. To compound these issues, available real estate for the conductive traces on PCBs and the associated circuits is ever shrinking, which can complicate potential strategies for mitigating SSO noise and crosstalk. Further, the impact of SSO noise and crosstalk can vary by system design because the length, spacing, layout, materials, and other physical characteristics can vary. This document addresses these and other challenges as described herein.

[0006] In some situations, data that is transmitted over multiple data lines can be adjusted to reduce SSO noise and crosstalk between two or more data lines. Generally, adjacent or proximate data lines along the interconnect may have an increased susceptibility to SSO noise and crosstalk with respect to each other. To communicate along the interconnect, a device transmits respective data signals along respective ones of the multiple data lines of the interconnect. The data signals can be transmitted substantially simultaneously from one device toward another device as part of a parallel transmission, such as if eight bits are propagated on an interconnect with eight data lines.

[0007] Consider, for example, an origin device transmitting toward a destination device two pulses as two data signals onto two adjacent data lines of an interconnect. As the pulses travel along the two adjacent data lines, the two pulses can mutually interfere with each other electromagnetically. The interference can be sufficiently great that the destination device is unable to properly interpret the pulses. This is especially relevant as the transmission frequency of the pulses increases, which occurs as communication frequencies are raised to increase data rates. If, however, the origin device transmits the pulses at different times, the mutual interference can be appreciably reduced. This document describes approaches to transmitting pulses and other types of signal modulation at different times by using phase offsets.

[0008] In some implementations, the origin and destination devices can generate phase offsets for data signals by delaying a portion of the data signals relative to another portion using one or more delay circuits. The origin device can transmit multiple data signals using a first set of phase offsets. For instance, the first, third, fifth, and seventh data signals may be delayed to create phase offsets on these four data signals. The second, fourth, sixth, and eighth data signals are thus transmitted relatively earlier. To properly receive or process the data signals, the destination device implements a second set of phase offsets, which is an inverse of the first set. In this example, the destination device therefore delays the second, fourth, sixth, and eighth data signals to align them with the other four data signals that are delayed by the origin device. Although this example has phase offsets that alternate across the eight data signals, other phase offset patterns can be used. Moreover, another phase offset pattern may reduce interference better than this example alternating pattern. Accordingly, optimum results may not be obtained by employing a default phase offset pattern across multiple data signals for all origin or destination devices.
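To make the alternating example above concrete, the following Python sketch models the two delay sets; the quarter-unit-interval delay value and the zero-based line indexing are illustrative assumptions, not values taken from this document.

```python
# A minimal sketch (assumptions noted above) of the alternating pattern:
# the origin device delays half of eight data lines, and the destination
# device delays the complementary half, so every line accumulates the
# same total delay and the data signals realign at the receiver.
NUM_LINES = 8
DELAY = 0.25  # assumed per-line delay, in unit intervals (UI)

# Origin: delay the first, third, fifth, and seventh data signals
# (even indices when lines are counted from zero).
origin_set = [DELAY if i % 2 == 0 else 0.0 for i in range(NUM_LINES)]

# Destination: the inverse set, delaying the second, fourth, sixth,
# and eighth data signals (odd indices).
inverse_set = [0.0 if i % 2 == 0 else DELAY for i in range(NUM_LINES)]

# Adjacent lines switch at different times on the interconnect, which is
# what mitigates SSO noise and crosstalk, yet the cumulative delay is
# uniform, so the receiver sees aligned data.
cumulative = [o + d for o, d in zip(origin_set, inverse_set)]
assert all(c == DELAY for c in cumulative)
```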
[0009] In some cases, a superior, or optimum, pattern of phase offsets varies in different environments or circumstances. The variability may arise from many different factors. These factors can include those related to the physical characteristics of a computing system, like the length, width, or course (e.g., straight, meandering, or number of angles/turns) of one or more traces that form the multiple data lines of the interconnect. Other factors may include transmission frequency, number of traces, PCB layout, other potentially interfering chips or circuitry, and so forth. In short, to better utilize relative phase offsets between two or more data signals to increase communication signaling performance, the phase offsets can be tailored to a given apparatus.

[0010] Continuing with a host device and memory device environment, a memory controller of the host device can be implemented to control a pattern of phase offsets for data signaling along an interconnect between the host device and the memory device. To do so, the memory controller can transmit to the memory device a command indicative of at least one phase offset between two or more data signals propagated over two or more data lines of the interconnect. For example, a command can specify an offset for a first data line of multiple data lines relative to a second data line of the multiple data lines. Thus, the command may establish a pattern of phase offsets across the multiple data lines of the interconnect. The patterns or sets of phase offsets can be inverted between the host and memory devices.

[0011] In certain implementations, the memory device includes at least one register that maps commands to patterns of phase offsets across the multiple data lines of the interconnect. The memory device maps a received command to the corresponding pattern of phase offsets using an entry of multiple entries of the register. Because directionality of communication can impact SSO noise and crosstalk, the memory device can include two registers: one for read operations and one for write operations. During a write operation, the host device may phase offset or delay the write data transmitted on a portion of the data lines, and the memory device can "de-offset" the write data during receipt by delaying the data signals on the inverse or complementary data lines to realign the data signals. During a memory read operation, the memory device may phase offset the read data transmitted on a portion of the data lines, and the host device may de-offset the read data by delaying the inverse data lines to realign the read data received from the memory device on the multiple data lines. To support coordinating phase offset patterns on both sides of the interconnect, including establishing inverse signal delays on opposite sides of the interconnect, the host device may also have at least one register to specify phase offset patterns.
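The register-based mapping described in this paragraph might be sketched as follows; the two-entry depth, the specific bit patterns, and the function name are hypothetical choices for illustration, since the document only requires that a command select a register entry holding per-line phase offset indications, with separate read and write registers.

```python
# Hypothetical register contents: a 1 means the corresponding data line
# is delayed. Read and write directions use separate registers because
# directionality can affect SSO noise and crosstalk.
READ_REGISTER = {
    0: [0, 0, 0, 0, 0, 0, 0, 0],  # entry 0: multiphase operation off
    1: [1, 0, 1, 0, 1, 0, 1, 0],  # entry 1: alternating offsets for read data
}
WRITE_REGISTER = {
    0: [0, 0, 0, 0, 0, 0, 0, 0],
    1: [0, 1, 0, 1, 0, 1, 0, 1],  # inverse pattern, to de-offset write data
}

def offsets_for(command: int, is_read: bool) -> list[int]:
    """Map a received command to per-data-line phase offset indications."""
    register = READ_REGISTER if is_read else WRITE_REGISTER
    return register[command]
```

A matching register on the host side would hold the complementary patterns, so that delays applied on one side of the interconnect are removed on the other.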
[0012] In operation generally, any number of commands may be employed to adjust multiphase communications across the multiple data lines of an interconnect. The commands that are issued by the memory controller to the memory device may change throughout operation of the system to facilitate signal integrity. In other words, the phase offset patterns may be adjusted as environmental conditions, such as temperature or frequency, change. As an example, high-noise operations may warrant increased phase offset distances between two or more of the data lines. Indications of such circumstances may be received from the processor or another device, such as a sensor. In some cases, the memory controller can use one or more commands to activate multiphase communication across the two or more data lines of the multiple data lines of an interconnect as a communication frequency on the interconnect increases, to improve reliability. Similarly, as the communication frequency on the interconnect decreases, the memory controller can issue one or more commands to deactivate the multiphase communication to save power. An appropriate frequency for activation/deactivation may be established as a default or determined through experimentation.

[0013] As described above, an optimum or superior phase offset pattern may vary by system design, by manufactured device, or even over time for a single device. To accommodate this variability, the host device, such as the memory controller thereof, can determine a current phase offset pattern for use via testing. To perform the testing, the memory controller can transmit write data or receive read data using different phase offset patterns. The communications can be performed with different data values and at different transmission frequencies. The memory controller can select a phase offset pattern that meets at least one threshold, that performs superiorly over other phase offset patterns according to at least one criterion, and so forth. For instance, an error rate may be calculated for each command associated with the phase offsets to determine the best available performance and the associated conditions. As another example, the memory controller may cycle through the available phase offset commands, or a portion thereof, with a predetermined data set during specific operating modes and operating conditions to determine an optimal command selection. The memory controller then adopts the selected phase offset pattern and transmits to the memory device a command mapping to the selected phase offset pattern in a register at the memory device.
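The testing just described can be summarized with a short sketch; the `write_with` and `read_with` callables, the candidate command list, and the bit-error criterion are assumptions standing in for memory-controller internals that the document does not specify.

```python
# A hedged sketch of the calibration loop: exercise each candidate
# phase-offset command with known data, count bit errors on readback,
# and adopt the command that performs best.
import random

CANDIDATE_COMMANDS = [0, 1, 2, 3]  # hypothetical commands mapping to patterns
TEST_DATA = [random.randrange(256) for _ in range(1024)]  # predetermined data set

def bit_errors(written: list[int], read_back: list[int]) -> int:
    """Count mismatched bits between the written data and the read-back data."""
    return sum(bin(w ^ r).count("1") for w, r in zip(written, read_back))

def calibrate(write_with, read_with) -> int:
    """Select the phase-offset command that yields the fewest bit errors."""
    best_command, best_errors = CANDIDATE_COMMANDS[0], None
    for command in CANDIDATE_COMMANDS:
        write_with(command, TEST_DATA)  # write the test data using this pattern
        errors = bit_errors(TEST_DATA, read_with(command, len(TEST_DATA)))
        if best_errors is None or errors < best_errors:
            best_command, best_errors = command, errors
    return best_command  # this command is then transmitted to the memory device
```

The same loop could be repeated at different transmission frequencies or under different operating conditions, with a selection retained per condition, as the paragraph notes.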
[0014] In example implementations, phase offsets may be achieved using a clock tree that routes different clock signals to latches holding or exposed to the data signals. The clock signals can trigger the latching or the releasing and forwarding of the data signals at different times to realize transmission and reception according to different phase offsets. As such, data may be transmitted according to a selected clock signal, having a clock phase or clock phase offset, of the clock tree for each respective data line. In other implementations, components having fundamental behaviors (e.g., resistors, capacitors, or inductors) can be used to create signal delays.

[0015] This document therefore describes examples of programmable memory timing. Relative phase offsets between two or more data signals can be implemented across multiple data lines of an interconnect. The interconnect can couple a host device to a memory device. In the described manners, the host device can access the memory device more reliably and at higher speeds than without employing programmable memory timing. A memory device can be installed and used in different systems and under different operating conditions with higher performance due to the phase offset tailoring that can be provided by the host device as described herein. Accordingly, access times and power usage can be reduced while increasing the performance of user applications. These are but a few examples of how the described techniques and devices may be used to improve the signal integrity of data transmitted between an origin device and a destination device, other examples and implementations of which are described throughout the document. This document now turns to an example operating environment, after which example devices, methods, and systems are described.

EXAMPLE OPERATING ENVIRONMENTS

[0016] Fig. 1 illustrates, at 100 generally, example apparatuses 102 that can implement programmable memory timing. The apparatus 102 can be realized as, for example, at least one electronic device. Example electronic-device implementations include an internet-of-things (IoT) device 102-1, a tablet device 102-2, a smartphone 102-3, a notebook computer 102-4 (or desktop computer), a passenger vehicle 102-5, a server computer 102-6, a server cluster 102-7 that may be part of cloud computing infrastructure or a data center, and a portion of such devices (e.g., a printed circuit board (PCB)). Other examples of the apparatus 102 include a wearable device, such as a smartwatch or intelligent glasses; an entertainment device, such as a set-top box, a smart television, or a gaming device; a motherboard or server blade; a consumer appliance; a vehicle or drone, or the electronic components thereof; industrial equipment; a security or other sensor device; and so forth. Each type of electronic device or other apparatus can include one or more components to provide some computing functionality or feature.
[0017] In example implementations, the apparatus 102 can include at least one host device 104, at least one interconnect 106, and at least one memory device 108. The host device 104 can include at least one processor 114, at least one cache memory 116, and at least one memory controller 118. The interconnect 106 can include at least one command and address bus 110 and at least one data bus 112. Each bus may be implemented as a unidirectional bus or a bidirectional bus. The interconnect 106 may also include a clock bus that is part of or separate from the command and address bus 110. The memory device 108 may be realized, for example, with a dynamic random-access memory (DRAM) device or module, including with a three-dimensional (3D) stacked DRAM device, such as a high bandwidth memory (HBM) device or a hybrid memory cube (HMC) device.

[0018] Regarding the host device 104, the processor 114 is communicatively coupled to the cache memory 116, and the cache memory 116 is communicatively coupled to the memory controller 118. The processor 114 is also communicatively coupled, directly or indirectly, to the memory controller 118. The host device 104 may include other components to form, for instance, a system-on-a-chip (SoC). The processor 114 may include or comprise a general-purpose processor, a central processing unit (CPU), a graphics processing unit (GPU), a neural network engine or accelerator, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) integrated circuit (IC), a communications processor (e.g., a modem or baseband processor), an SoC, and so forth. In operation, the memory controller 118 can provide a high-level or logical interface between the processor 114 and at least one memory (e.g., a memory that is external to the host device 104 like the memory device 108). The memory controller 118 can, for example, receive memory requests from the processor 114 and provide the memory requests to the external memory with appropriate formatting, timing, reordering, and so forth. The memory controller 118 can also forward to the processor 114 responses to the memory requests that are received from the external memory.

[0019] Regarding connections that are external to the host device 104, the host device 104 is communicatively coupled to the memory device 108 via the interconnect 106. The depicted interconnect 106, as well as other interconnects (not shown) that communicatively couple together various components, enable commands and data to be transferred between two or more of the various components. Interconnect examples include a bus, a switching fabric, one or more wires that carry voltage or current signals, and so forth.

[0020] In example implementations for programmable memory timing, the memory device 108 includes memory signal-timing circuitry 120. The memory signal-timing circuitry 120 enables the memory device 108 to transmit data or receive data using one or more phase offsets. During data communication signaling, one or more signals propagated over the data bus 112 on various data lines can have at least one phase offset that differs from that of at least one other signal propagating over the data bus 112.
To enable the host device 104, or the memory controller 118 thereof, to customize or otherwise control the phase offsets, the host device 104 can instruct the memory device 108 to use particular phase offsets or a pattern of phase offsets across the data lines of the data bus 112.

[0021] To do so, the host device 104 transmits a command 122 to the memory device 108. Thus, the memory device 108 receives the command 122 from the host device 104 over the interconnect 106. The command 122 comprises or includes at least one indication 124 of a phase offset for a first data line of multiple data lines relative to a second data line of the multiple data lines of the data bus 112. The memory signal-timing circuitry 120 establishes one or more delays for communicating data signals over the data bus 112 based on the at least one indication 124. The memory signal-timing circuitry 120 can include a respective phase offset circuit (e.g., a delay circuit) for each data line of at least a portion of the multiple data lines of the data bus 112. The phase offset circuit can delay a data signal that is being transmitted or received relative to another data signal on the data bus 112. The memory signal-timing circuitry 120 can therefore implement delays to realize phase offsets for data signals the memory device 108 is transmitting over the data lines.

[0022] Similarly, the memory signal-timing circuitry 120 can "remove" one or more relative phase offsets to align data signals that are received from the host device 104 by the memory device 108. To align the data signals, the memory signal-timing circuitry 120 can delay those data signals that are not delayed by the memory controller 118 of the host device. Thus, the host device 104 and the memory device 108 implement reciprocal or inverse delays or phase offsets relative to each other. In other words, if the host device 104 activates delays on one-half of the multiple data lines for a transmission to the memory device 108, then the memory device 108 activates delays on the other half of the multiple data lines for a reception to remove the relative offsets and realign the data signals across the data bus 112. Causing the relative phase offsets to be present during signal propagation over the interconnect 106 can reduce SSO noise or crosstalk between data lines to improve signaling quality, including at higher frequencies. These and other implementations are described further below, starting with reference to Fig. 3-1.
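As a rough illustration of such per-line phase offset circuits, the sketch below models each data line latching on a clock phase selected from a clock tree; the available phases and the function are assumptions for illustration rather than details taken from this document.

```python
# Illustrative model (all values assumed) of per-line phase offsets:
# each data line latches its bit on a clock phase chosen by that line's
# offset indication, so adjacent lines can switch at different times.
CLOCK_PHASES = [0.0, 0.25, 0.5, 0.75]  # assumed clock-tree phases, in UI

def latch_times(offset_indications: list[int], bit_start: float = 0.0) -> list[float]:
    """Compute the latch time for each data line from its selected phase."""
    return [bit_start + CLOCK_PHASES[sel] for sel in offset_indications]

# Alternating indications stagger even and odd lines by a quarter UI on
# transmit; the receiving side applies the inverse indications to realign.
print(latch_times([1, 0, 1, 0, 1, 0, 1, 0]))
# -> [0.25, 0.0, 0.25, 0.0, 0.25, 0.0, 0.25, 0.0]
```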
[0023] The depicted components of the apparatus 102 represent an example computing architecture with a hierarchical memory system. A hierarchical memory system may include memories at different levels, with each level having a memory with a different speed or capacity. As shown, the cache memory 116 may be logically coupled between the processor 114 and the memory device 108. Although not shown, the hierarchical memory system may include other memories or hierarchical levels. For example, the apparatus 102 may include a cache memory that is coupled between the host device 104 and the memory device 108, may include storage memory that is coupled "below" the memory device 108, and so forth.

[0024] Although various implementations of the apparatus 102 are depicted in Fig. 1 and described herein, an apparatus 102 can be implemented in alternative manners. For example, the host device 104 may include multiple cache memories, including multiple levels of cache memory, or may have no cache memory. In some cases, the host device 104 may omit the processor 114 or the cache memory 116. Also, another memory may have a respective "internal" or "local" cache memory (not shown). Further, the host device 104 may be coupled to multiple memory devices 108. Generally, the illustrated and described components may be implemented in alternative ways, including in distributed or shared memory systems. A given apparatus 102 may also include more, fewer, or different components.

[0025] The host device 104 and the various memories may be realized in multiple manners. In some cases, the host device 104 and the memory device 108 can both be disposed on, or physically supported by, a same printed circuit board (PCB) (e.g., a rigid or flexible motherboard). The host device 104 and the memory device 108 may additionally be integrated on a same IC or fabricated on separate ICs but packaged together. A memory may also be communicatively coupled to multiple host devices 104 via one or more interconnects 106 and may be able to respond to memory requests from two or more of the host devices. Each host device 104 may include a respective memory controller 118, or multiple host devices 104 may share a common memory controller 118. An example architecture with at least one host device 104 and multiple processors that are communicatively coupled to a memory device 108 is described next.

[0026] Fig. 2 illustrates an example computing system 200 that can implement aspects of programmable memory timing. In some implementations, the computing system 200 includes at least one memory device 202, at least one interconnect 204, and at least one processor 206. The memory device 202 can include, or be associated with, at least one memory array 208, at least one interface 218, and control circuitry 210 that is communicatively coupled to the memory array 208. The memory device 202 can correspond to the cache memory 116, the memory device 108, storage memory (not shown), and so forth. Thus, the memory array 208 can include an array of memory cells, including but not limited to memory cells of Dynamic Random-Access Memory (DRAM), Synchronous DRAM (SDRAM), three-dimensional (3D) stacked DRAM, Double Data Rate (DDR) memory, or Low-Power DDR (LPDDR) SDRAM. The memory array 208 and the control circuitry 210 may be components on a single semiconductor die or on separate semiconductor dies. The memory array 208 or the control circuitry 210 may also be distributed across multiple dies.

[0027] The control circuitry 210, which may include logic circuitry, can include any of a number of components that can be used by the memory device 202 to perform various operations (e.g., communicate with other devices, manage performance, and perform memory read or write
operations). For example, the control circuitry 210 can include one or more registers 212, at least one instance of array control logic 214, and clock circuitry 216. The registers 212 may be implemented, for example, as one or more registers that can store information to be used by the control circuitry 210 or another part of the memory device 202. The array control logic 214 may be implemented as circuitry that can provide command decoding, address decoding, input/output functions, amplification circuitry, power supply management, power control modes, and other functions. The clock circuitry 216 may be implemented as circuitry that can provide synchronization of various components of the memory device 202 with one or more external clock signals that may be provided over the interconnect 204, such as a command/address clock (e.g., CK_t or CK_c) or a data clock (e.g., WCK_t or WCK_c), and/or with at least one clock signal that is generated internally.
[0028] The interface 218 can couple the control circuitry 210 or the memory array 208 directly or indirectly to the interconnect 204. As shown in Fig. 2, the registers 212, the array control logic 214, and the clock circuitry 216 can be part of a single component (e.g., the control circuitry 210). In other implementations, one or more of the registers 212, the array control logic 214, or the clock circuitry 216 may be implemented as separate components, which can be provided on a single semiconductor die or disposed across multiple semiconductor dies. These components of the control circuitry 210 may be individually or jointly coupled to the interconnect 204 via the interface 218.
[0029] The interconnect 204 may be implemented with any one or more of a variety of interconnects that communicatively couple together various components and enable commands, addresses, and/or other information and data to be transferred between two or more of the various components (e.g., between the memory device 202 and the one or more processors 206). For example, the interconnect 204 may be realized as the interconnect 106 described with reference to Fig. 1 or may be implemented in a manner similar to the interconnect 106. Although the interconnect 204 is represented with a single arrow in Fig. 2, the interconnect 204 may include a bus, a switching fabric, one or more wires or traces that carry voltage or current signals, at least one switch, one or more buffers, and so forth. Further, the interconnect 204 may be separated into at least a command-and-address (CA) bus and a data bus (as depicted in Fig. 1).
[0030] In some aspects, the memory device 202 may be realized as a “separate” physical component relative to that of the host device 104 of Fig. 1 or any of the processors 206. Examples of physical components for the memory device 202, which may be separate from the host device 104, include: a printed circuit board (PCB), which can be rigid or flexible; a memory card; a memory stick; a memory module, including a single in-line memory module (SIMM) or a dual in-line memory module (DIMM); and so forth. Alternatively, the memory device 202 may be packaged
or integrated with other physical components, including the host device 104 or a processor 206, such as by being combined on a common PCB or together in a single device package.
[0031] The apparatuses and methods that are described herein may be appropriate for memory that is designed for lower power operations or that is targeted for energy-efficient applications. Thus, the described principles may be incorporated into a low-power memory device or a memory controller that communicates with such a low-power memory device. An example of a memory standard that relates to low-power applications is the Low-Power Double Data Rate (LPDDR) standard for synchronous DRAM (SDRAM) as promulgated by the Joint Electron Device Engineering Council (JEDEC) Solid State Technology Association. Some terminology in this document may draw from one or more of these standards or versions thereof, like the LPDDR5 standard, for clarity. The described principles, however, are also applicable to memories that comport with other standards, including other LPDDR standards (e.g., earlier versions or future versions like LPDDR6), and to memories that do not adhere to a public standard.
[0032] As shown in Fig. 2, the one or more processors 206 may include a computer processor 206-1, a baseband processor 206-2, and/or an application processor 206-3. These processors may be coupled to the memory device 202 through the interconnect 204. The processors 206 may each be a CPU, a GPU, an SoC, an ASIC, an FPGA, or the like. In some cases, a single processor comprises multiple processing cores or other resources, each dedicated to different functions, such as modem management, applications, graphics, central processing, or the like. In some implementations, the baseband processor 206-2 may include or be coupled to a modem (not shown in Fig. 2) and may be referred to as a modem processor. The modem and/or the baseband processor 206-2 may be coupled wirelessly to a network via, for example, cellular, Wi-Fi®, Bluetooth®, near field, or another technology or protocol for wireless communication.
[0033] In some implementations, the processors 206 may be connected directly to the memory device 202 (e.g., via the interconnect 204). In other implementations, one or more of the processors may be indirectly connected to the memory device 202 (e.g., over a network connection or through one or more other devices), which indirect connection may also include the interconnect 204. Further, each processor 206 may be realized similarly to the processor 114 of Fig. 1. Accordingly, a respective processor 206 can include or be associated with a respective memory controller, like the memory controller 118 depicted in Fig. 1. Alternatively, two or more processors 206 may access the memory device 202 using a shared or system memory controller 118.
[0034] In example implementations for programmable memory timing, the control circuitry 210 can include the memory signal-timing circuitry 120. Accordingly, the control circuitry 210 can include one or more phase offset (e.g., delay) circuits to create or indirectly remove (e.g., by
delaying complementary/inverse data signals) at least one phase offset for a data signal across multiple data lines. The memory signal-timing circuitry 120 can process a command 122 (e.g., of Fig. 1) having at least one indication 124 of one or more phase offsets for receiving data with a write operation or for transmitting data with a read operation. The indication 124 can be mapped to at least one entry of a register included in the registers 212. These techniques are described further in the following section.
EXAMPLE TECHNIQUES AND HARDWARE
[0035] Fig. 3-1 illustrates examples of an interconnect 106, a host device 104, and a memory device 108 that can implement aspects of programmable memory timing, including by communicating a command 122 that is indicative of a phase offset. The host device 104, which may be realized as an SoC, can include a memory controller 118 and a host interface 302. The host interface 302 is coupled to (e.g., in electrical communication with) an interconnect 106. The interconnect 106 includes multiple data lines 304 that provide electrical communication between the host interface 302 and the interface 218, or memory interface 218. The multiple data lines 304 include at least a first data line 306 and a second data line 308. It should be appreciated that any of the multiple data lines 304 may be referred to as a first data line 306 or a second data line 308. For example, in addition to lines that propagate data bits, the multiple data lines 304 may include a read strobe, a write clock, or a write mask. Thus, any of these lines may be dubbed a first data line 306 or a second data line 308. The data bus 112 (of Fig. 1) may include the multiple data lines 304.
[0036] In example implementations, the memory controller 118 may include host logic circuitry 301 or control logic 301. The control logic 301 of the memory controller 118, which may include logic circuitry, may be coupled to (e.g., in electrical communication with) the host interface 302. As shown, the host interface 302 can send the command 122 over the command and address bus 110. The command 122 may, however, be sent over any communication bus. The command 122 may include or otherwise comprise an indication 124. The indication 124 indicates at least one phase offset for the first data line 306 relative to the second data line 308. The indication 124 may, for instance, map to a pattern of phase offsets with a respective phase offset for a respective data line of the multiple data lines.
[0037] Thus, the memory device 108 may communicate with the host device 104 over the command and address bus 110. The memory interface 218 may receive the command 122 over the command and address bus 110. The memory device 108 can implement the indicated phase offset(s) of the command 122 for read operations or write operations, including for both. For a write operation, the memory device 108 may receive data (e.g., write data) from the memory
controller 118 over the multiple data lines 304 via the memory interface 218. The memory interface 218 may send the data received over the multiple data lines 304, including over the first data line 306 and the second data line 308, to the memory signal-timing circuitry 120. For a read operation, the memory signal-timing circuitry 120 can forward data (e.g., read data) to the memory interface 218. The memory interface 218 transmits the read data to the host interface 302 over the multiple data lines 304 of the interconnect 106. The memory device 108 and the memory controller 118 can transmit data and/or receive data in accordance with selected or indicated phase offsets as described herein.
[0038] Fig. 3-2 illustrates examples of an interconnect 106, a host device 104, and a memory device 108 that can implement aspects of programmable memory timing, including by activating one or more phase offset circuits for communicating data signals based on a command. As shown, the first data signal 314 traverses, in either direction, the first data line 306 between the host interface 302 and the memory interface 218 over the interconnect 106. The second data signal 316 traverses, in either direction, the second data line 308 between the host interface 302 and the memory interface 218 over the interconnect 106. Upon receipt or transmission with the host interface 302, the first data signal 314 and the second data signal 316 may be affected by (e.g., delayed by) host phase-offset circuitry 380 and 390, respectively, of the host device 104. The host phase-offset circuitry 380 and 390 may be controlled by at least one of two registers, such as a first register 340 and a second register 350, based on the command 122. In some cases, one of the two registers 340 and 350 is for read operations, and the other is for write operations. Accordingly, each of the first and second registers 340 and 350 may control the host phase-offset circuitries 380 and 390 based on a respective block or column in a given entry, as is described below with reference to Fig. 4.
[0039] Analogously, upon receipt or transmission with the memory interface 218, the first data signal 314 and the second data signal 316 may be affected by (e.g., delayed by) memory phase-offset circuitry 360 and 370, respectively, of the memory device 108. The memory phase-offset circuitry 360 and 370 may be respectively controlled by at least one of two registers, such as a first register 320 and a second register 330, based on the command 122. In some cases, one of the two registers 320 and 330 is for read operations, and the other is for write operations. Accordingly, each of the first and second registers 320 and 330 may control the memory phase-offset circuitries 360 and 370 based on a respective block or column of a given entry, as is described below with reference to Fig. 5.
[0040] As one example, during a read operation, the memory device 108 may delay the first data signal 314 with the memory phase-offset circuitry 360 relative to the second data signal 316 to reduce crosstalk and noise between the first data signal 314 and the second data signal 316
during transit over the interconnect 106. Upon receipt, the host device 104 may delay the second data signal 316 relative to the first data signal 314 using the host phase-offset circuitry 390 to realign the first data signal 314 and the second data signal 316 for further processing. A write operation may be performed in a reciprocal manner by delaying one data signal at the host device 104 and the other data signal (of an example two data signals) at the memory device 108 to create mismatched phases during propagation to reduce interference and to reestablish matching phases at the memory device 108 to perform the write operation with the data.
[0041] Fig. 3-3 illustrates examples of data signals propagating over the interconnect 106 between the host device 104 and the memory device 108 to implement aspects of programmable memory timing. Fig. 3-3 depicts one of many possible implementations of a phase offset 312 in combination with phase-offset circuitries 360, 370, 380, and 390 (of Fig. 3-2). The phase offset 312 may be any amount represented in degrees or another measure of phase. As an example, and with respect to the first data line 306, the phase offset 312 may be defined based on a cycle or period of the first data signal 314. The first data signal 314 may be a square wave having a rise time and a fall time as shown. The first data signal 314 may be a differential signal.
[0042] As shown, the phase offset 312 corresponds to a delay of the second data signal 316 relative to the first data signal 314, assuming a read operation with data propagating toward the host interface 302. As such, the first data line 306 is said to have a phase offset 312 from, or relative to, the second data line 308. Generally, the phase offset 312 may be positive or negative (forward or backward) with respect to time. In some implementations, however, phase offsets are implemented as delays. As noted above with reference to the LPDDR standard, an LPDDR SDRAM may include DQ pins in accordance with one or more versions of the standard. The DQ pins depicted in Fig. 3-3 comport with LPDDR5. Nonetheless, other implementations may have more pins, fewer pins, and/or different pins, as well as more bits, fewer bits, and/or different bits.
[0043] Fig. 4 illustrates an example register that a host device 104 can use to implement aspects of programmable memory timing. Fig. 5 illustrates an example register that a memory device 108 can use to implement aspects of programmable memory timing. In Fig. 4, at least one example register that can implement aspects of programmable memory timing and a portion or aspect of the control logic 301 of the memory controller 118 are shown. A command 122 (e.g., an operational command 122) may be sent from the host interface 302 to a memory device 108 (e.g., of Fig. 5). The command 122 may comprise an operational code having multiple bits. The
command 122 may be any number of bits or bytes and is shown as an 8-bit command with op codes OP0 to OP7.
[0044] The command 122 can be mapped to a first register 340 (e.g., a register corresponding to a lower data byte), a second register 350 (e.g., a register corresponding to an upper byte), or any combination thereof. Any number of registers 340 and 350 may be used. The registers 340 and 350 may include multiple entries 423 and 433 that are selected by the corresponding command 122. As one of many possible examples, a command 122 equal to “00010001” would correspond to entry 422 “10101010101” from the first register 340 and entry 432 “10101010101” for the second register 350. The registers as described throughout may be of any length or number, including those commensurate with current or future LPDDR standards.
[0045] As shown, the entries 422 and 432 are respectively part of multiple entries 423 and 433 of respective registers 340 and 350. These entries 422 and 432, as specified by the command 122, can be transmitted over control buses 424 and 434, respectively, and interpreted by a host multiplexor 440. The registers 340 and 350 may also include designations for the amount of delay, which may include more than one bit, and designations for each data line of the multiple data lines 304. The registers 340 and 350 may further include phase offset designations for the write clock, the write mask, the read strobe, and other data lines.
[0046] The host multiplexor 440 may send or receive signals over internal host communication lines 442 and 444 to the processor 114 and the host interface 302 depending on whether the control logic 301 of the memory controller 118 is performing a read or a write command. As an example, during a read command, data is received from the host interface 302 over the internal host communication lines 442. The phase offsets 312 associated with the multiple data lines 304 are removed in the host multiplexor 440 based on the command 122, and the read data is sent to the processor 114 over the internal host communication lines 444. As another example, during a write command, data is sent from the processor 114 to the host multiplexor 440 over the internal host communication lines 444. The host multiplexor 440 may implement phase offsets 312 based on the command 122. The host multiplexor 440 then transmits the write data to the host interface 302 over the internal host communication lines 442, and the host interface 302 transmits the write data across the interconnect 106.
[0047] Referring to Fig. 5, a similar implementation for the memory device 108 is illustrated by way of example. The command 122, as sent by the control logic 301, is received from the host device 104. The command 122 may be received over the interconnect 106 through the memory interface 218; through a private, separate, or out-of-band channel; or any combination thereof. The command 122 similarly corresponds to entries 522 and 532 in a first register 320 (e.g., corresponding to a lower byte of data) and a second register 330 (e.g., corresponding to an upper
byte of data), respectively. Any number of registers 320 and 330 may be used, such as additional registers for read versus write operations.
[0048] The first and second registers 320 and 330 may include multiple entries 523 and 533 that are selected by the corresponding command 122. As one of many possible examples, a command 122 equal to “00010001” corresponds to an entry 522 “10101010101” from the first register 320 and to an entry 532 “10101010101” for the second register 330. The entries 522 and 532 are part of the multiple entries 523 and 533. These entries 522 and 532, as specified by the command 122, are transmitted over control buses 524 and 534, respectively, and interpreted by a memory multiplexor 540. The registers 320 and 330 may also include designations for the amount of delay, which may include more than one bit, and designations for each data line of the multiple data lines 304. The registers 320 and 330 may further include phase offset designations for the write clock, the write mask, the read strobe, and other data lines.
[0049] In some cases, the same command 122 may correspond to different entries for the registers 340, 350, 320, and 330. For instance, the same command may correspond to phase offsets 312 that are inverse, opposite, or reciprocal for the memory device 108 relative to those for the host device 104. As an example, a command 122 may comprise or include “00010001” that can correspond to the entry 422 “10101010101” for the host side and the entry 522 “01010101010” for the memory side. As such, the multiple data lines 304 associated with the lower-byte registers 340 and 320 are phase offset by the host multiplexor 440 and realigned by the memory multiplexor 540 based on inverse or complementary bits of the lower-byte registers 340 and 320 for a write operation. The inverse bits may denote the use of an inverse clock signal as shown in Figs. 6-7.
[0050] Fig. 6 illustrates an example clock tree that a host device 104 can use to implement aspects of programmable memory timing. Fig. 7 illustrates another example clock tree that a memory device 108 can use to implement aspects of programmable memory timing. Thus, Fig. 6 illustrates an example host multiplexor 440. The host multiplexor 440 and the memory multiplexor 540 (of Figs. 5 and 7) may have the same or similar arrangements. The host multiplexor 440 includes a host clock tree 618 having clock tree circuitry. The host clock tree 618 provides a variety of clock signals according to a host clock signal 600 or an inverse host clock signal 608. Host line delays 602, 604, 606, or other delay implements may be additive and may delay the host clock signal 600 and the inverse host clock signal 608.
[0051] As shown in one example, the host line delay 602 is associated with the phase offset 312, while the host line delay 602 together with the host line delay 604 is associated with an additional phase offset 614. Further, the host line delays 602, 604, and 606 together are associated with another additional phase offset 616 according to the host clock signal 600. The inverse host clock signal 608 may have the same phase as the host clock signal 600; they may even be the same
signal with one produced via a multiplexor or another switching apparatus. The inverse host clock signal 608 along with the host line delay 606 can produce a phase offset complementary to the phase offset 312; the inverse host clock signal 608 along with the host line delays 604 and 606 can produce a phase offset complementary to the additional phase offset 614; and the inverse host clock signal 608 along with the host line delays 602, 604, and 606 can produce a phase offset complementary to the other additional phase offset 616.
[0052] The signals formed are then transmitted along multiple clock lines 610 having clock signals 617 to host clock multiplexors 620 and 630. Although two are shown for clarity, more host clock multiplexors 620 and 630 may be used. In one example, the host clock multiplexors 620 and 630 may be used for each of the multiple data lines 304 to select a clock signal from the host clock tree 618. Control bits from the control buses 424 and 434 may be used to select the respective clock signals from the host clock tree 618. Two control bits are explicitly shown to select the respective clock signals; however, any number of control bits may be used with corresponding designations in the registers 340 and 350. One of the control bits may correspond with the delay bit of the registers 340 and 350 to provide extended access to additional clocks.
[0053] Output from the host clock multiplexors 620 and 630 is fed to host latches 640 and 650, respectively, for offsetting the phase of the first data line 306 and the second data line 308. Although the first data line 306 and the second data line 308 are shown as examples, any of the multiple data lines 304 may be similarly offset. The host latches 640 and 650 may be output to the internal host communication lines 442 and 444 for write and read operations, respectively. Fig. 6 is shown in a simplified form in the interest of clarity. Any number of host clock multiplexors 620 and 630 may be used along with any number of host latches 640 and 650, depending on a width of the data bus. Depending on the operation, the host latches 640 and 650 may be situated to output to any of the internal host communication lines 442 and 444 based on whether a read or a write operation is being performed.
[0054] As an example operation, the host device 104 may issue a command 122 that delays a first data line 306 with respect to a second data line 308. The control bus 434 may select the host clock signal 600 without a relative delay from the host line delays 602, 604, and 606 based on the command 122. The control bus 424, on the other hand, may select the host clock signal 600 with the host line delay 602 having a phase offset 312. As such, the latch 640 transmits data on the first data line 306 via the internal host communication lines 442 to the host interface 302 and across the interconnect 106 after the transmission on the second data line 308 is initiated.
[0055] Continuing with Fig. 7, an example memory multiplexor 540 is illustrated. The host multiplexor 440 and the memory multiplexor 540 may have the same or similar arrangements. The arrangements may be reciprocal to compensate for the phase offsets 312, 614, and 616
produced by the other device. The memory multiplexor 540 includes a memory clock tree 718 having clock tree circuitry. The memory clock tree 718 provides a variety of clock signals according to a memory clock signal 700, which provides opposite phase offsets 312, 614, and 616, and an inverse memory clock signal 708, which provides the same phase offsets 312, 614, and 616. Memory line delays 702, 704, 706, or other delay implements may be additive and may delay the memory clock signal 700 and the inverse memory clock signal 708.
[0056] As shown in one example, the memory line delay 702 is associated with the phase offset 312, while the memory line delay 702 together with the memory line delay 704 is associated with an additional phase offset 614. Further, the memory line delays 702, 704, and 706 together are associated with another additional phase offset 616 according to the memory clock signal 700. The inverse memory clock signal 708 may have the same phase as the memory clock signal 700, because one may be derived from the other using a multiplexor or another switching apparatus. The inverse memory clock signal 708 along with the memory line delay 706 can produce a phase offset complementary to the phase offset 312; the inverse memory clock signal 708 along with the memory line delays 704 and 706 can produce a phase offset complementary to the additional phase offset 614; and the inverse memory clock signal 708 along with the memory line delays 702, 704, and 706 can produce a phase offset complementary to the other additional phase offset 616.
[0057] The signals formed are then transmitted along multiple clock lines 710 having clock signals 717 to memory clock multiplexors 720 and 730. Although two are shown, more memory clock multiplexors 720 and 730 may be used. In one example of many, memory clock multiplexors 720 and 730 may be used for each of the multiple data lines 304 to select a clock signal from the memory clock tree 718. Control bits from the memory control buses 524 and 534 may be used to select the respective clock signals from the memory clock tree 718. Although two control bits are used to select the respective clock signals, any number of control bits may be used with corresponding designations in the registers 320 and 330 (e.g., of Fig. 5). One of the control bits may correspond with the delay bit of the registers 320 and 330 to provide extended access to additional clocks.
[0058] Output from the memory clock multiplexors 720 and 730 is fed to memory latches 740 and 750 for respectively offsetting the phase of the first data line 306 and the second data line 308. Although the first data line 306 and the second data line 308 are shown as examples, any of the multiple data lines 304 may be similarly offset. The memory latches 740 and 750 may be output to respective internal memory communication lines 542 and 544 for write and read operations, respectively. Fig. 7 is shown in a simplified form in the interest of clarity; thus, any number of memory clock multiplexors 720 and 730 may be used along with any number of memory latches 740 and 750.
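As a rough software analogy for the clock trees of Figs. 6 and 7: each additive line delay advances the selected tap by one phase step, and selecting the inverse clock yields the complementary offset used for realignment. This C sketch is a simplified model, not the hardware itself; the 45° step size, the tap numbering, and the function name are assumptions.

```c
#include <stdio.h>

#define PHASE_STEP_DEG 45.0 /* assumed size of one delay stage */

/* Tap n of the clock tree accumulates n additive line delays (e.g.,
 * delays 702, 704, 706); selecting the inverse clock flips the sign,
 * modeling the complementary offset produced via signal 708. */
static double clock_tree_phase_deg(unsigned tap, int use_inverse_clock)
{
    double phase = (double)tap * PHASE_STEP_DEG;
    return use_inverse_clock ? -phase : phase;
}

int main(void)
{
    /* Two tap-select control bits per line, as suggested by the two
     * control bits shown for the control buses 524 and 534. */
    unsigned dq0_tap = 0u, dq1_tap = 1u;

    printf("DQ0: %+.1f deg, DQ1: %+.1f deg, DQ1 complement: %+.1f deg\n",
           clock_tree_phase_deg(dq0_tap, 0),
           clock_tree_phase_deg(dq1_tap, 0),
           clock_tree_phase_deg(dq1_tap, 1));
    return 0;
}
```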
[0059] Depending on the operation, the memory latches 740 and 750 may be situated to output to any of the respective internal memory communication lines 542 and 544 based on the applicable read or write operation being performed. As used herein, the term inverse may include any number of opposing or reciprocal output signals. As an example, if the phase offset 312 is 45° offset from the base clock signal, then the inverse, opposing, or reciprocal output signal may be -45° offset from the same base clock signal. Additionally or alternatively, if the phase offset 312 is 45° offset from the base clock signal, then the inverse, opposing, or reciprocal output signal may correspond to a 45° phase offset applied to the non-delayed data lines. As such, the example circuits shown in Figs. 4-7 can be implemented as phase offset circuits for respective data lines of the multiple data lines 304.
[0060] As an example, the host device 104 may issue a command 122 that delays a first data line 306 with respect to a second data line 308 on the host side. The control bus 524 may select the memory clock signal 700 without a relative delay from the memory line delays 702, 704, and 706 based on the command 122. On the other hand, the control bus 534 may select the memory clock signal 700 with the memory line delay 702 having a phase offset 312 opposite the host phase offset 312 to realign the signaling on the first data line 306 and the second data line 308. As such, the memory latch 740 can forward data received from the data line 306 via the internal memory communication line 544 and the memory interface 218 prior to the memory latch 750 receiving data from the data line 308.
[0061] It should be appreciated that the first data line 306 and the second data line 308 are merely depicted as examples and may be different traces on a PCB that are contiguous between the host device 104 and the memory device 108. That is, the first data line 306 may convey data between the host device 104 and the memory device 108, taking any number of stops, buffers, or detours along the way.
EXAMPLE METHODS
[0062] This section describes example methods for programmable memory timing with reference to the flow chart(s) and flow diagram(s) of Figs. 8 and 9. These descriptions may also refer to components, entities, and other aspects depicted in Figs. 1-7, which reference is made only by way of example. The described methods are not necessarily limited to performance by one entity or multiple entities operating on one device.
[0063] With reference to Fig. 8, at block 802, a command 122 is received as discussed throughout this document. The command 122 may be received by any implement. For example, the command 122 may be received by a host device 104, a memory device 108, or particular circuitry included therein. The reception may be inter-chip or intra-chip. The host device 104
may, for instance, transmit the command 122 to the memory device 108. In some cases, the command 122 may include bits mappable to a register 340 or 350 and 320 or 330, corresponding respectively to entries 422, 432, 522, and 532.
[0064] At block 804, the command 122 can be mapped to an entry of multiple entries 423, 433, 523, and 533 as shown in Figs. 4 and 5. As an example, the command 122 may comprise or include “00010001” that can correspond to the entry 422 “10101010101” and the entry 432 “10101010101” of the multiple entries 423 and 433 of the host device 104. The command 122 may also or instead comprise or include “00010001” that can correspond to the entry 522 “10101010101” and the entry 532 “10101010101” of the multiple entries 523 and 533 of the memory device 108. The mapping may occur through circuitry that applies direct logic as a lookup table (e.g., if X then Y). The mapping may additionally or alternatively implement mathematical or binary operations (e.g., if X then X). The entries 422 and 432 may be selected based on the command 122 by a multiplexor or another implement. Thus, the command 122 may serve as the control inputs of the multiplexor, and the entries 423, 433, 523, and 533 may serve as selectable outputs of the multiplexor. In operation, such multiplexors may be similar to the clock multiplexors 620, 630, 720, and 730.
[0065] In some cases, the command 122 may be related to a particular operation or operating situation of the memory device 108 or the host device 104. As an example, the command 122 may be associated with a read command or a write command. The command 122 may be sent in connection with the read command or the write command on the same or different data lines 304. The command 122 may include a first portion (e.g., OP0, OP1, OP2, and OP3) that relates to the lower byte (e.g., DQ0, DQ1, DQ2, DQ3, DQ4, DQ5, DQ6, and DQ7) and a second portion (e.g., OP4, OP5, OP6, and OP7) that relates to the upper byte (e.g., DQ8, DQ9, DQ10, DQ11, DQ12, DQ13, DQ14, and DQ15). The mapping may include applying the phase offset indications to the corresponding data line of the multiple data lines 304. As an example, the mapping may use a host clock tree 618 and the host clock multiplexors 620 and 630 to offset the first data line 306 from the second data line 308.
[0066] At block 806, respective phase offset circuits can be activated. An activated phase offset circuit may create a phase offset, such as by delaying propagation of a signal, on a per-data-line basis. The activation of respective phase offset circuits may be based on the command 122. The entry of a register 340, 350, 320, or 330 to which the command 122 is mapped may control which phase offset circuits are activated. This is shown for an example clock-based activation mechanism in Figs. 6 and 7 with respect to control lines 424, 434, 524, and 534 and latches 640, 650, 740, and 750, in conjunction with associated multiplexors 620, 630, 720, and 730. Other activation mechanisms, however, can alternatively be employed.
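Blocks 802-806 can be summarized as a direct lookup followed by per-line activation. The sketch below is a hypothetical illustration: the structure name, the 256-entry sizing (one entry per 8-bit command value), and the 16-bit indication width are assumptions, since the patent leaves the register length and entry count open.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_ENTRIES 256 /* one entry per 8-bit command value (assumed) */

struct timing_register {
    /* entry[i]: one phase-offset indication bit per data line, akin to
     * the "10101010101"-style patterns of entries 423, 433, 523, and 533. */
    uint16_t entry[NUM_ENTRIES];
};

/* Block 804: direct-logic lookup -- the command bits index the entry. */
static uint16_t map_command(const struct timing_register *reg, uint8_t cmd)
{
    return reg->entry[cmd];
}

/* Block 806: activate a phase offset circuit for each data line whose
 * indication bit is set in the selected entry. */
static void activate_offsets(uint16_t indications)
{
    for (int line = 0; line < 16; line++) {
        if (indications & (1u << line))
            printf("DQ%d: phase-offset circuit enabled\n", line);
    }
}

int main(void)
{
    static struct timing_register lower; /* zero-initialized lower-byte register */
    lower.entry[0x11] = 0x0555; /* command "00010001" selects pattern "10101010101" */

    activate_offsets(map_command(&lower, 0x11));
    return 0;
}
```

Using the command bits directly as the index mirrors the "if X then Y" direct-logic mapping described at block 804; a multiplexor with the command as its control inputs is the hardware equivalent.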
[0067] At block 808, data, shown in one example as the data signal 316, can be transmitted across the multiple data lines 304. The data may be of any form, structure, or type and may traverse the interconnect 106 as binary voltages that designate data bits. As an example, the first data line 306 propagates a first data signal 314 having a first data bit 315 with a high or low voltage. The second data line 308 propagates a second data signal 316 having a second data bit 317. As shown, the second data bit 317 is delayed with respect to the first data bit 315. In accordance with relative phase offsets, the phase offset 312 between the first data bit 315 and the second data bit 317 is interchangeable; in other words, the first data bit 315 may be delayed with respect to the second data bit 317 or vice versa. With reference to Fig. 3-1, the multiple data lines 304 may intersect with an SoC including the host device 104 and with the memory device 108 before conductively connecting with the host interface 302 and the memory interface 218. As such, the host device 104 or the memory device 108 may transmit data on the multiple data lines 304 based on the command 122, depending on the direction of travel as shown in block 808.
[0068] At block 810, another command can be received. As an example, the other command may be “00100010.” The other command (or a second command) may be similar to the command 122. The other command may be based on changes to the operating environment of the host device 104 or the memory device 108. The other command may have the same number of bits as the command 122. It should be appreciated that any device, including the host device 104 or the memory device 108, may receive the command 122 or the other command to update a pattern of phase offsets as defined in a different entry of a register 340, 350, 320, or 330.
[0069] At block 812, additional data can be received. As an example, the first data signal 314 and the second data signal 316 may include bits that are categorized as additional data. The additional data may have phase offsets 312 that differ from those of the original or previous data of the first data signal 314 and the second data signal 316, as defined by the other command received at block 810. As such, the host device 104 or the memory device 108 may use the phase offset circuitry to realign or “de-offset” data on the multiple data lines 304 to receive the additional data based on the other command.
[0070] Turning to Fig. 9, a method 900 is shown. This document has discussed, in one or more examples, a command 122 that defines the relative phase offsets or phase offset pattern of the multiple data lines 304. The command 122 may be determined by a learning or training algorithm associated with the accuracy or error of data traversing the multiple data lines 304. As one example, operating conditions of the apparatus 102 may alter the signal integrity of data traversing the multiple data lines 304. For instance, a power mode or communication frequency associated with the apparatus 102 or another environmental or operating factor may cause signal
integrity to change. The command 122 may be initially selected or updated based on these circumstances to maximize, or at least increase, signal integrity.
[0071] At block 902, test data can be written to the memory device 108 by the host device 104. The host device 104 may write the test data in accordance with a set of phase offsets that correspond to a command 122. At block 904, read data can be read by the host device 104 from the memory device 108. The data that is written or read may be of any type or structure. The write data may include, for instance, a string of bits generated by a random or pseudo-random function. The bits may traverse the multiple data lines 304 and be affected by SSO noise or crosstalk. The data may be read and reread by the host device 104 (e.g., sent and resent by the memory device 108) according to a second command and a third command. As an example, the second command may be an operating command having a value of “00000000.” The third command may be an operating command having a value of “00010001.” The testing may be conducted during initialization or manufacturing or during operation.
[0072] At block 906, the written data can be compared to the read data communicated over the multiple data lines 304 or a separate channel 310 to compute the accuracy or error associated with reading and writing under a given command 122. At least one criterion associated with the test commands may be used to determine the command 122 that indicates the phase offset pattern to be used during operation of the apparatus 102. As an example, a criterion may be an accuracy or error associated with the read data or other test data.
[0073] At block 908, a phase offset can be determined. For example, the control logic 301 of the memory controller 118 may determine a phase offset based on write data, read data, and at least one criterion, with the data propagated in accordance with multiple commands 122. In some cases, a list of commands 410 and associated environmental or operating conditions may be stored on the apparatus 102 for use by the host device 104. The command 122 may be selected based on the test command having the lowest error or highest accuracy for a given operating condition. As such, a phase offset for particular data lines of the multiple data lines 304 may be determined and stored in the apparatus 102 as the command 122.
[0074] After initialization or as a phase offset update at block 910, the selected command 122 can be transmitted, intra-chip or inter-chip, to define a pattern of the phase offsets 312 for communication between the host device 104 and the memory device 108. The command 122 may be revised, updated, or issued according to testing and analysis responsive to the accuracy and error determined through the comparison.
[0075] For the flow charts and flow diagrams described above, the orders in which operations are shown and/or described are not intended to be construed as a limitation. Any number or combination of the described process operations can be combined or rearranged in any
order to implement a given method or an alternative method. Operations may also be omitted from or added to the described methods. Further, described operations can be implemented in fully or partially overlapping manners.
[0076] Aspects of these methods may be implemented in, for example, hardware (e.g., fixed-logic circuitry or a processor in conjunction with a memory), firmware, software, or some combination thereof. The methods may be realized using one or more of the apparatuses or components shown in Figs. 1-7, the components of which may be further divided, combined, rearranged, and so on. The devices and components of these figures generally represent hardware, such as electronic devices, packaged modules, IC chips, or circuits; firmware or the actions thereof; software; or a combination thereof. With reference to Figs. 1-7, in some cases, a host device 104 or a memory device 108 may individually perform the operations of these methods. In other cases, a host device 104 and a memory device 108 may jointly perform the operations. Figs. 1-7 illustrate example circuitry that may perform at least some of these operations. Thus, these figures illustrate some of the many possible systems or apparatuses capable of implementing the described methods.
[0077] Examples of multiple implementations are described below.
[0078] Example 1: A method comprising: receiving a command indicative of a phase offset for a first data line of multiple data lines relative to a second data line of the multiple data lines, the multiple data lines associated with a memory array; and transmitting data on the multiple data lines based on the command.
[0079] Example 2: The method of example 1, further comprising: mapping the command to an entry of multiple entries of a register, the entry corresponding to respective phase offset indications for respective data lines of the multiple data lines.
[0080] Example 3: The method of example 1 or example 2, further comprising: selecting the entry based on the command; and transmitting the data on the multiple data lines based on the respective phase offset indications of the entry.
[0081] Example 4: The method of any one of the preceding examples, further comprising: activating respective phase offset circuits for the respective data lines of the multiple data lines based on the respective phase offset indications corresponding to the entry.
[0082] Example 5: The method of any one of the preceding examples, further comprising: receiving the command from a host device, wherein: the command is associated with a memory read operation and comprises eight (8) bits; and the command includes a first portion relating to an upper byte of the data and a second portion relating to a lower byte of the data.
[0083] Example 6: The method of any one of the preceding examples, further comprising: receiving another command indicative of another phase offset for the first data line of the multiple
data lines relative to the second data line of the multiple data lines; and receiving other data on the multiple data lines based on the other command.
[0084] Example 7: The method of any one of the preceding examples, further comprising: latching the other data from data signals, which are propagated over the multiple data lines, using one or more clock signals for individual data lines of the multiple data lines, wherein the one or more clock signals are based on other phase offset indications that correspond to the other command.
[0085] Example 8: The method of any one of the preceding examples, further comprising: receiving the other command from a host device via an interconnect that includes the multiple data lines, wherein: the other command is associated with a memory write operation and comprises eight (8) bits; and the other command includes a first portion relating to an upper byte of the other data and a second portion relating to a lower byte of the other data.
[0086] Example 9: The method of any one of the preceding examples, further comprising: transmitting second data on the second data line before first data on the first data line across multiple memory read operations based on the phase offset for the first data line as indicated by the command.
[0087] Example 10: The method of any one of the preceding examples, further comprising: transmitting a second data bit on the second data line of the multiple data lines based on the command; and delaying transmission of a first data bit on the first data line of the multiple data lines relative to the transmitting of the second data bit based on the command.
[0088] Example 11: The method of any one of the preceding examples, further comprising: starting transmission of the first data bit after starting the transmitting of the second data bit based on the command.
[0089] Example 12: The method of any one of the preceding examples, further comprising: determining a first clock signal for the first data line of the multiple data lines based on the command; and determining a second clock signal for the second data line of the multiple data lines based on the command.
[0090] Example 13: The method of any one of the preceding examples, further comprising: transmitting a first data bit on the first data line based on the first clock signal; and transmitting a second data bit on the second data line based on the second clock signal.
[0091] Example 14: A method comprising: reading, from a memory device, read data having one or more phase offsets corresponding to multiple data lines of an interconnect; determining a phase offset for a first data line of the multiple data lines relative to a second data line of the multiple data lines based on the reading; and transmitting to the memory device a command indicative of the phase offset.
[0092] Example 15: The method of example 14, further comprising: writing, to the memory device, write data; comparing the read data to the write data; and determining the phase offset based on the comparing.
[0093] Example 16: The method of example 14 or example 15, further comprising: reading first data of the read data according to an entry of multiple entries of a register defining the one or more phase offsets corresponding to the multiple data lines; reading second data of the read data according to another entry of the multiple entries of the register defining the one or more phase offsets corresponding to the multiple data lines; selecting from the entry of the multiple entries and the other entry of the multiple entries; and determining the command based on the selecting.
[0094] Example 17: The method of any one of examples 14-16, further comprising: transmitting to the memory device a first command indicative of the entry of the multiple entries prior to the reading of the first data; and transmitting to the memory device a second command indicative of the other entry of the multiple entries prior to the reading of the second data.
[0095] Example 18: The method of any one of examples 14-17, further comprising: selecting between at least the entry of the multiple entries and the other entry of the multiple entries based on at least one criterion.
[0096] Example 19: The method of any one of examples 14-18, wherein the at least one criterion relates to an accuracy of at least one of the first data or the second data relative to test data written to the memory device.
[0097] Example 20: The method of any one of examples 14-19, wherein the at least one criterion characterizes an amount of cross-talk between two or more data lines of the multiple data lines.
[0098] Example 21: A memory device comprising: an interface configured to be coupled to an interconnect including multiple data lines; a register having multiple entries, an entry of the multiple entries configured to store respective phase offset indications for respective data lines of the multiple data lines; a memory array configured to store data; and logic circuitry configured to communicate with the memory array, the register, and the interface, the logic circuitry configured to process memory operations with the respective phase offset indications based on a command, the command mapped to the entry of the multiple entries.
[0099] Example 22: The memory device of example 21, wherein: the logic circuitry comprises multiple delay circuits, respective delay circuits of the multiple delay circuits corresponding to respective data lines of the multiple data lines.
[0100] Example 23: The memory device of example 21 or example 22, wherein the logic circuitry comprises: clock tree circuitry comprising multiple clock lines corresponding to multiple phase offsets; and at least one multiplexor coupled to the multiple clock lines, the at least one
multiplexor configured to output a clock signal having a phase offset corresponding to one of the respective phase offset indications based on the command.
[0101] Example 24: The memory device of any one of examples 21-23, wherein the logic circuitry comprises at least one latch configured to receive the clock signal from the at least one multiplexor and forward the data according to the clock signal.
[0102] Example 25: The memory device of any one of examples 21-24, wherein: the memory array is configured to send the data to the at least one latch; and the interface is configured to receive the data from the at least one latch.
[0103] Example 26: The memory device of any one of examples 21-25, wherein: the interface is configured to send the data to the at least one latch; and the memory array is configured to receive the data from the at least one latch.
[0104] Example 27: The memory device of any one of examples 21-26, wherein the command is indicative of a phase offset defined in the entry for a first data line of the multiple data lines relative to a second data line of the multiple data lines.
[0105] Example 28: A host device comprising: an interface configured to be coupled to an interconnect including multiple data lines; and a memory controller coupled to the interface and including control logic, the memory controller configured to perform memory operations according to different phase offsets across the multiple data lines, the control logic configured to: transmit to a memory device a command indicative of a phase offset for a first data line of the multiple data lines relative to a second data line of the multiple data lines.
[0106] Example 29: The host device of example 28, wherein: the command is indicative of a first set of phase offsets for the memory device to receive data; and the control logic is configured to transmit data to the memory device in accordance with a second set of phase offsets different from the first set of phase offsets.
[0107] Example 30: The host device of example 28 or example 29, wherein the second set of phase offsets is inverse to the first set of phase offsets.
[0108] Example 31: The host device of any one of examples 28-30, wherein the control logic is configured to: transmit a second command to the memory device to deactivate multiphase communication across two or more data lines of the multiple data lines.
[0109] Example 32: The host device of any one of examples 28-31, wherein the control logic is configured to: transmit a third command to the memory device to activate the multiphase communication across the two or more data lines of the multiple data lines.
[0110] Example 33: The host device of any one of examples 28-32, wherein the control logic is configured to: determine the phase offset for the first data line of the multiple data lines relative to the second data line of the multiple data lines.
[0111] Example 34: The host device of any one of examples 28-33, wherein the control logic is configured to: perform memory operations with multiple relative phase offsets between the first data line and the second data line to determine the phase offset.
[0112] Example 35: The host device of any one of examples 28-34, wherein the control logic comprises: clock tree circuitry including multiple clock lines having respective clock phases; and at least one multiplexor configured to receive the multiple clock lines and to output a clock signal having a clock phase offset based on the command.
[0113] Example 36: The host device of any one of examples 28-35, wherein the control logic comprises a latch configured to receive the clock signal and to output data according to the clock signal.
[0114] Example 37: The host device of any one of examples 28-36, wherein the command maps to an entry of multiple entries of a register, the entry corresponding to respective phase offset indications for respective data lines of the multiple data lines.
[0115] Example 38: The host device of any one of examples 28-37, wherein the command is indicative of at least one phase offset defined in the entry for the first data line of the multiple data lines relative to the second data line of the multiple data lines.
[0116] Example 39: The host device of any one of examples 28-38, wherein: the command is associated with a memory read operation and comprises eight (8) bits; and the command includes a first portion relating to a lower byte of data and a second portion relating to an upper byte of the data.
[0117] Unless context dictates otherwise, use herein of the word “or” may be considered use of an “inclusive or,” or a term that permits inclusion or application of one or more items that are linked by the word “or” (e.g., a phrase “A or B” may be interpreted as permitting just “A,” as permitting just “B,” or as permitting both “A” and “B”). Also, as used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. For instance, “at least one of a, b, or c” can cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c, or any other ordering of a, b, and c). Further, items represented in the accompanying figures and terms discussed herein may be indicative of one or more items or terms, and thus reference may be made interchangeably to single or plural forms of the items and terms in this written description.
CONCLUSION
[0118] Although implementations for programmable memory timing have been described in language specific to certain features and/or methods, the subject of the appended claims is not
necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations for programmable memory timing. |
A solder is deposited on a heat sink. The solder is first reflowed at a first temperature that is below about 120°C. The solder is then heat aged at a temperature that causes the first-reflowed solder to have an increased second reflow temperature. The heat aging process results in less compressive stress in a die that uses the solder as a thermal interface material. The solder can have a composition that reflows and adheres to the die and the heat sink without the use of organic fluxes.
CLAIMS What is claimed is: 1. A process comprising: forming a solder on a heat sink surface, wherein the solder is selected from an InSn solder; a eutectic InSn solder; an InSn laminate; and an In layer on the heat sink and an Au flash layer above and on the In layer; bonding the solder to the heat sink surface at a first processing temperature to achieve a solder having a first remelting temperature; and heat aging the solder at a second processing temperature, wherein heat aging the solder achieves a solder having a second remelting temperature that is greater than the solder first remelting temperature. 2. The process of claim 1 , wherein bonding includes bonding the heat sink surface that has a thickness range from about 1 mm to about 2.4 mm. 3. The process of claim 1, wherein bonding includes bonding the heat sink surface that is a copper-surface heat spreader. 4. The process of claim 1, wherein bonding includes bonding the heat sink surface that is a heat spreader selected from copper, copper-clad aluminum-silicon- carbide (AlSiC), and copper-clad graphite. 5. The process of claim 1, wherein the In-Sn laminate is used in amounts of about 2.88 [mu]m In and about 3.12 [mu]m Sn, wherein bonding at the first processing temperature is carried out in a range from about 115[deg.] C to about 125[deg.] C, and wherein heat aging at the second processing temperature is carried out in a range from about 150[deg.] C to about 215[deg.] C. 6. The process of claim 1, wherein the In-Sn laminate is used in amounts of about 2.88 [mu]m In and about 3.12 [mu]m Sn, wherein heat aging at the second processing temperature is carried out in a range from about 150[deg.] C to about 215[deg.] C and a time range from about 10 minutes to about 2 hours. 7. The process of claim 1 , wherein the eutectic InSn solder is used at a concentration of 50.9In49.1Sn, wherein bonding at the first processing temperature is carried out in a range from about 117[deg.] C to about 123[deg.] C, and wherein heat aging at the second processing temperature is carried out in a range from about 205[deg.] C to about 215[deg.] C. 8. The process of claim 1, wherein the eutectic InSn solder is used at a concentration of 50.9In49.1Sn, and wherein heat aging at the second processing temperature is carried out in a range from about 205[deg.] C to about 215[deg.] C and a time range from about 10 minutes to about 2 hours. 9. The process of claim 1, wherein the eutectic InSn solder is used at a concentration of about 52In48Sn, wherein bonding at the first processing temperature is carried out in a range from about 117[deg.] C to about 123[deg.] C, and wherein heat aging at the second processing temperature is carried out in a range from about 205[deg.] C to about 215[deg.] C. 10. The process of claim 1, wherein the eutectic InSn solder is used at a concentration of about 52In48Sn, and wherein heat aging at the second processing temperature is carried out in a range from about 205[deg.] C to about 215[deg.] C and a time range from about 10 minutes to about 2 hours. 11. The process of claim 1 , wherein the eutectic InSn solder is used at a concentration of about 52In48Sn, and wherein heat aging at the second processing temperature is carried out in a range from about 180[deg.] C to about 210[deg.] C and a time range from about 60 minutes to about 100 minutes. 12. 
The process of claim 1, wherein the In layer and the Au flash layer are used, wherein the In layer has a thickness in a range from about 5 [mu]m to about 13 [mu]m, wherein the Au flash layer has a thickness in a range from about 0.08 [mu]m to about 0.52 [mu]m, and wherein heat aging at the second processing temperature is carried out in a range from about 50[deg.] C to about 100[deg.] C above the first processing temperature. 13. The process of claim 1, wherein the In layer and the Au flash layer are used, and wherein heat aging at the second processing temperature is carried out in a range from about 50[deg.] C to about 100[deg.] C above the first processing temperature and a time range from about 10 minutes to about 20 minutes. 14. The process of claim 1, wherein at least one of bonding at the first processing temperature and heat aging at the second processing temperature is done in a non-oxidizing gas environment. 15. The process of claim 1, wherein at least one of bonding at the first processing temperature and heat aging at the second processing temperature is done in a non-oxidizing gas environment, and with an overpressure in a range from about 7.25 kPa (50 psi) to about 14.5 kPa (100 psi). 16. A process comprising: forming a solder on a heat sink surface, wherein the solder is an In layer on the heat sink and an Au flash layer above and on the In layer; bonding the solder to the heat sink surface at a first processing temperature to achieve a solder first remelting temperature and a remelted solder, and wherein the remelted solder and the heat sink surface are substantially free of organic flux and organic flux residue. 17. The process of claim 16, wherein the In layer has a thickness in a range from about 5 [mu]m to about 13 [mu]m, wherein the Au flash layer has a thickness in a range from about 0.08 [mu]m to about 0.52 [mu]m, the process further including: heat aging at a second processing temperature range from about 50[deg.] C to about 100[deg.] C above the first processing temperature. 18. The process of claim 17, wherein at least one of bonding at the first processing temperature and heat aging at the second processing temperature is done in a non-oxidizing gas environment. 19. The process of claim 17, wherein at least one of bonding at the first processing temperature and heat aging at the second processing temperature is done in a non-oxidizing gas environment and with an overpressure in a range from about 7.25 kPa (50 psi) to about 14.5 kPa (100 psi). 20. The process of claim 16, further including: heat aging at a second processing temperature range from about 50[deg.] C to about 100[deg.] C above the first processing temperature and a time range from about 10 minutes to about 2 hours. 21. An article comprising: a copper heat sink surface; a die disposed below the copper heat sink surface; and a thermal interface material (TIM) solder disposed between the copper heat sink surface and the die, wherein the TIM solder is selected from an InAuCu intermetallic, an InSnCu intermetallic, and an InSnAuCu intermetallic, and wherein between the die and the copper heat sink surface, the TIM solder is substantially free of organic flux and organic flux residue. 22. The article of claim 21, wherein the TIM solder has a thickness in a range from about 5.05 [mu]m to about 13.6 [mu]m. 23. The article of claim 21, wherein the solder TIM layer has a void presence of less than or equal to about 1%. 24. 
The article of claim 21, wherein the die has been thinned to a thickness in a range from about 10 [mu]m to about 300 [mu]m, and wherein the die includes a backside metallurgy layer including a refractory metal. 25. A system comprising: a copper heat sink surface; a die disposed below the copper heat sink surface; and a thermal interface material (TIM) solder disposed between the copper heat sink surface and the die, wherein the TIM solder is an InAu material, and wherein between the die and the copper heat sink surface, the TIM solder is substantially free of organic flux and organic flux residue; and dynamic random-access memory coupled to the die. 26. The system of claim 25, wherein the solder TIM has a thickness in a range from about 2 [mu]m to about 50 [mu]m. 27. The system of claim 25, wherein the system is disposed in one of a computer, a wireless communicator, a hand-held device, an automobile, a locomotive, an aircraft, a watercraft, and a spacecraft. 28. The system of claim 25, wherein the die is selected from a data storage device, a digital signal processor, a micro controller, an application specific integrated circuit, and a microprocessor. |
SOLDER DEPOSITION AND THERMAL PROCESSING OF THIN-DIE THERMAL INTERFACE MATERIALTECHNICAL FIELDEmbodiments relate generally to integrated circuit fabrication. More particularly, embodiments relate to heat management technology with microelectronic devices.TECHNICAL BACKGROUNDHeat spreaders are used to remove heat from structures such as an integrated circuit (IC). An IC die is often fabricated into a microelectronic device such as a processor. The increasing power consumption of processors results in tighter thermal budgets for a thermal solution design when the processor is employed in the field. Accordingly, a thermal interface solution is often needed to allow the die to reject heat more efficiently. Various techniques have been employed to transfer heat away from a die. These techniques include passive and active configurations. One passive configuration involves a conductive material in thermal contact with the backside of a packaged die. This conductive material is often a heat pipe, heat sink, a slug, a heat spreader, or an integrated heat spreader (IHS). Adhesion of the IHS to the die is accomplished with a thermal interface material (TIM) such as a solder. The TIM adheres to the backside of the die and to the die-side of the IHS. Proper reflow of the TIM often requires temperatures in excess of 280[deg.] C. Because heating of the TIM also results in heating of the IHS and the IC die, subsequent cooling transfers significant compressive stresses to the die; a rough quantitative sketch of this effect follows the introductory remarks below. As die thicknesses grow smaller, the use of fluxes to protect the TIM composition during reflow can hinder the adhesion of the TIM to the IHS and to the die backside. If not conducted properly, a flux-assisted TIM reflow can cause significant voids between the die and the IHS. BRIEF DESCRIPTION OF THE DRAWINGSIn order to depict the manner in which the embodiments are obtained, a more particular description of embodiments briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments that are not necessarily drawn to scale and are not therefore to be considered to be limiting of its scope, the embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which: FIG. 1 is a cross-section elevation diagram of a photomicrograph that exhibits a solder thermal interface material between a die and a heat spreader according to an embodiment; FIG. 2 is a flow chart that describes a process flow according to an embodiment; FIG. 3 is a heat-versus-time processing graphic according to an embodiment; FIG. 4 is a cut-away elevation that depicts a computing system according to an embodiment; and FIG. 5 is a schematic of a computing system according to an embodiment.DETAILED DESCRIPTIONEmbodiments in this disclosure relate to a solder thermal interface material (TIM) that is disposed between a die and a heat spreader. Embodiments also relate to solder TIM metallurgies that are useful for heat solutions with microelectronic devices that are integrated into a die. The following description includes terms, such as upper, lower, first, second, etc. that are used for descriptive purposes only and are not to be construed as limiting. The embodiments of an apparatus or article described herein can be manufactured, used, or shipped in a number of positions and orientations.
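The compressive-stress concern noted in the technical background can be put in rough quantitative terms. The sketch below applies a simple biaxial CTE-mismatch model, sigma ~ E / (1 - nu) * delta_alpha * delta_T; the model choice and every material constant in it are illustrative assumptions for discussion, not values taken from this disclosure.

```python
# First-order estimate of the thermal stress transferred to a die when a
# solder TIM is reflowed and then cooled to a use temperature. Illustrative
# sketch only: the biaxial-stress model and all constants below are assumed
# for demonstration, not drawn from the disclosure.

def cte_mismatch_stress(reflow_temp_c, use_temp_c=25.0,
                        cte_die_ppm=2.6,        # assumed CTE of silicon (ppm/K)
                        cte_spreader_ppm=17.0,  # assumed CTE of copper (ppm/K)
                        modulus_gpa=130.0,      # assumed effective modulus (GPa)
                        poisson=0.28):          # assumed Poisson ratio
    """Approximate biaxial stress (MPa): E / (1 - nu) * delta_alpha * delta_T."""
    delta_t = reflow_temp_c - use_temp_c
    delta_alpha = (cte_spreader_ppm - cte_die_ppm) * 1e-6
    return modulus_gpa * 1e3 / (1.0 - poisson) * delta_alpha * delta_t

# A conventional reflow in excess of 280 C versus a first reflow near 120 C:
for temp in (280.0, 120.0):
    print(f"reflow at {temp:.0f} C -> ~{cte_mismatch_stress(temp):.0f} MPa")
```

Under these assumed constants, dropping the first reflow from above 280[deg.] C to about 120[deg.] C cuts the temperature excursion, and therefore the first-order stress estimate, by a factor of roughly 2.7, which is the motivation for the low-temperature bond followed by heat aging described below.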
The terms "die" and "chip" generally refer to the physical object that is the basic workpiece that is transformed by various process operations into the desired integrated circuit device. A die is usually singulated from a wafer, and wafers may be made of semiconducting, non-semiconducting, or combinations of semiconducting and non-semiconducting materials. A board is typically a resin-impregnated fiberglass structure that acts as a mounting substrate for the die. Reference will now be made to the drawings wherein like structures will be provided with like suffix reference designations. In order to show the structures of various embodiments more clearly, the drawings included herein are diagrammatic representations of integrated circuit structures. Thus, the actual appearance of the fabricated structures, for example in a photomicrograph, may appear different while still incorporating the essential structures of the illustrated embodiments. Moreover, the drawings may show only the structures necessary to understand the illustrated embodiments. Additional structures known in the art have not been included to maintain the clarity of the drawings. FIG. 1 is a cross-section elevation diagram of a photomicrograph 100 that exhibits a solder thermal interface material (TIM) 112 between a die 110 and a heat spreader 118 according to an embodiment. The computer image photomicrograph 100 is depicted in exaggerated dimensions for illustrative purposes. The die 110 includes an active surface 114 and a backside surface 116. The active surface 114 exhibits bond pads, one of which is illustrated with reference numeral 111. In an embodiment, the TIM 112 is better represented by a bracket 112 (FIG. 1, left side) as it is a composite of a laminate that has been reflowed. Such embodiments are depicted herein. In an embodiment, the die 110 is a semiconductive material such as monocrystalline silicon that has been processed into integrated circuits (ICs). In an embodiment, the die 110 is a thinned die that has been size-reduced by a process such as backside grinding or the like. In an embodiment, the die 110 has a thickness in a range from about 100 [mu]m to about 300 [mu]m. In an embodiment, the die 110 has a thickness in a range from about 125 [mu]m to about 200 [mu]m. In an embodiment where the die 110 has a thickness in a range from about 125 [mu]m to about 200 [mu]m, the heat spreader 118 has a thickness in a range from about 1 mm to about 3 mm. In an embodiment where the die 110 has a thickness in a range from about 125 [mu]m to about 300 [mu]m, the heat spreader 118 has a thickness of about 2.4 mm. In an embodiment, the die 110 has a backside metallization (BSM) 128 including a titanium first layer 120, a nickel-vanadium second layer 122 and a gold third layer 124. The BSM can be referred to as the BSM 128 in an embodiment. In an embodiment, a conventional BSM 128 is used. Other BSMs that can be used may have various numbers of layers and types of materials. As depicted in FIG. 1, the die 110 has been bonded to the heat spreader 118 with the solder TIM 112 according to an embodiment. In an embodiment, the heat spreader 118 has been prepared with a cladding 126 that includes a metal such as a copper layer 126. In an embodiment, FIG. 1 refers to a portion of an article 100 in which the solder TIM 112 has a voids fraction that is less than about 1% by volume. In an embodiment, the article 100 includes a solder TIM 112 that has a voids fraction that is less than about 0.5%.
In an embodiment, the article 100 includes a solder TIM 112 that has a voids fraction that is less than about 0.1%. The voids fraction can be analyzed by any known method, such as the Archimedes method, which compares a measured density to the known density of a given eutectic solder. The voids fraction can also be determined by use of a scanning acoustic microscope (SAM). In an embodiment, the copper surface cladding 126 interfaces with the TIM 112. In an embodiment, the copper surface cladding 126 has only the area dimension of the die 110. In an embodiment, the copper surface cladding 126 in FIG. 1 has the surface area dimension of the heat spreader 118. In an embodiment, the heat spreader 118 is an aluminum-silicon-carbide (AlSiC) material with a copper surface cladding 126 on the die side of the heat spreader 118. In an embodiment, the heat spreader 118 is a graphite material with a copper surface cladding 126 on the die side of the heat spreader 118. In an embodiment, the heat spreader 118 is copper and no cladding 126 is present such that the solder TIM 112 embodiment makes direct contact with the copper heat spreader 118. In an embodiment, a eutectic solder is used that blends with the copper of the heat spreader 118, or with the copper of the copper surface cladding 126 to make an intermetallic material with a high melting point, but which initially reflows at a low melting point. This intermetallic material has the strength and adhesion that is required for a thinned die in the embodiment ranges and thinner, but it imparts a significantly lower compressive stress, if at all, to the die 110. In various embodiments, a plated metal cladding layer 126, suitable for creating the intermetallic compound with the solderable TIM 112, is used on the die side of the heat spreader 118 to facilitate wetting. In various embodiments, the metal cladding layer 126 is copper. A copper heat spreader 118 or a copper-clad heat spreader is used. The heat spreader 118 can be constructed of a wide range of heat-spreader materials that have suitable thermal dissipation. Additionally, for an embodiment where the heat spreader material 118 includes copper, a plating process for the heat spreader 118 is eliminated to further decrease cost and time in the production cycle. In an embodiment, a bi-metallic solder composition is used as the solder TIM 112. The solder TIM 112 is produced without the use of a flux. The solder TIM 112 is placed against the copper of the heat spreader 118. In an embodiment, the solder TIM 112 is an alloy when layered. Accordingly, the solder TIM 112 is a layered structure that will reflow and in situ alloy with itself and the copper of the heat spreader 118. In an embodiment, the in situ alloy forms an intermetallic material. The solder TIM 112, whether an alloy or a layered structure, is bonded between the die 110 and the heat spreader 118 at a first processing temperature of about 120[deg.] C and for a time period from about 1 minute to about 5 minutes in a non-oxidizing environment. Thereafter, the solder TIM 112 is heat aged in a non-oxidizing environment. Heat aging is carried out at the same temperature according to an embodiment. In an embodiment, heat aging is carried out at a second processing temperature of about 180[deg.] C to about 210[deg.] C in a non-oxidizing environment. In an embodiment, the non-oxidizing environment is a nitrogen (N2) environment. This second processing is carried out for about 5 minutes to about 2 hours at a processing temperature of about 180[deg.]
C to about 210[deg.] C. In an embodiment, the non-oxidizing environment can be effected at an overpressure in a range from about 7.25 kPa (50 psi) to about 14.5 kPa (100 psi). The solder TIM 112 that results is observed to be substantially void free. Besides the Archimedes method, another analytical technique that can be used to detect the presence of voids is scanning acoustic microscopy (SAM). Because of the processing conditions carried out in the disclosed embodiments, the article that results is substantially free of an organic flux or an organic flux residue. By "substantially free" it is meant that under clean-room conditions that are used during die bonding, analytical evaluation of the article at the level of the TIM 112 will result in no detectable flux or flux residue, absent a false positive. "No detectable flux" means that if there were any organic present, it would be below detection, and if not below detection, it would be tracked to a contaminant and not to a residue of a process that was used. In an embodiment, the solder TIM 112 is an indium-tin (InSn) solder composition such as 52In48Sn as rated by weight comparisons. In an embodiment, the solder TIM 112 is the InSn eutectic solder composition containing 50.9In49.1Sn as rated by weight comparisons. In an embodiment, the solder TIM 112 was produced without the use of a flux. The solder TIM 112 is formed from an indium layer and a tin layer in a laminate. In an embodiment, the indium layer was about 2.88 [mu]m thick and the tin layer was about 3.12 [mu]m thick. The layered structure was bonded between the die 110 and the heat spreader 118 at a first processing temperature of about 120[deg.] C. Thereafter, the solder TIM 112 was heat aged at a second processing temperature of about 180[deg.] C to about 210[deg.] C in a non-oxidizing environment such as in N2, for about 5 minutes to about 15 minutes. The solder TIM 112 was observed to be substantially void free. Because no flux was used, the article that results is substantially free of a flux or a flux residue. In an embodiment, the solder TIM 112 was produced without the use of a flux. The solder TIM 112 is formed from an indium layer and a tin layer in a laminate. In an embodiment, the indium layer was from about 5 [mu]m to about 10 [mu]m thick and the tin layer was from about 0.2 [mu]m to about 0.5 [mu]m thick. The layered structure was bonded between a 125 [mu]m-thick die 110 and a 2.4 mm-thick heat spreader 118 at a first processing temperature of about 120[deg.] C. Thereafter, the solder TIM 112 was heat aged at a second processing temperature of about 180[deg.] C to about 210[deg.] C in an N2 environment for about 5 minutes to about 15 minutes. The solder TIM 112 was observed to be substantially void free. Because no flux was used, the article that results is substantially free of a flux or a flux residue. In an embodiment, the solder TIM 112 was produced without the use of a flux. A eutectic InSn solder was plated on the heat spreader 118. The solder TIM 112 was about 7 [mu]m thick. The solder TIM 112 was bonded between the die 110 and the heat spreader 118 at a first processing temperature of about 120[deg.] C. Thereafter, the solder TIM 112 was heat aged at a second processing temperature of about 180[deg.] C to about 210[deg.] C in an N2 environment for about 5 minutes to about 15 minutes. The solder TIM 112 was observed to be substantially void free. Accordingly, the article that results is also substantially free of a flux or a flux residue.
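The voids-fraction figures quoted in these embodiments can be tied to a concrete measurement. The following sketch illustrates how the Archimedes method mentioned above might be reduced to a calculation: an apparent density is computed from weighings in air and in water and compared against a fully dense reference. The function names, sample masses, and the reference density are assumptions chosen for illustration, not measured values from this disclosure.

```python
# Illustrative estimate of a solder TIM voids fraction from an Archimedes-type
# density measurement: weigh the sample in air and in water, compute apparent
# density from the buoyancy difference, and take the shortfall against the
# known fully dense density as the void volume fraction. All values assumed.

WATER_DENSITY = 0.9982  # g/cm^3 at ~20 C

def archimedes_density(mass_air_g, mass_water_g):
    """Apparent density from the buoyancy difference (Archimedes method)."""
    volume_cm3 = (mass_air_g - mass_water_g) / WATER_DENSITY
    return mass_air_g / volume_cm3

def voids_fraction(measured_density, theoretical_density):
    """Void volume fraction: 1 - rho_measured / rho_theoretical."""
    return max(0.0, 1.0 - measured_density / theoretical_density)

# Example: a 52In48Sn eutectic has a fully dense reference density of roughly
# 7.3 g/cm^3 (assumed value for illustration).
rho = archimedes_density(mass_air_g=1.460, mass_water_g=1.260)
print(f"measured density: {rho:.3f} g/cm^3")
print(f"voids fraction:   {voids_fraction(rho, 7.3) * 100:.2f} %")
```

With these assumed weighings, the measured density falls only slightly below the fully dense reference, corresponding to a voids fraction well under the approximately 1% figure discussed above.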
In an embodiment, the solder TIM 112 was produced without the use of a flux. A eutectic InSn solder was formed on the heat spreader 118. The solder TIM 112 was about 7 [mu]m thick. The solder TIM 112 was bonded between the die 110 and the heat spreader 118 at a first processing temperature of about 120[deg.] C. Thereafter, the solder TIM 112 was heat aged at a second processing temperature of about 210[deg.] C in an N2 environment for about 16 minutes. The solder TIM 112 was observed to be completely void free. Accordingly, the article that results is also substantially free of an organic flux or an organic flux residue. In an embodiment, a solder TIM 112 was prepared that included a pure indium layer of about 6 [mu]m to about 12 [mu]m thickness that was plated with a gold flash layer that was from about 0.1 [mu]m to about 0.5 [mu]m. The indium portion of the solder TIM 112 was placed on the heat sink 118 (or the copper cladding 126 if the heat sink 118 is not copper). During processing, the gold flash layer and the indium layer dissolved into each other and reflow was carried out. The solder TIM 112 was bonded between the die 110 and the heat spreader 118 at a first processing temperature of about 120[deg.] C. Thereafter, the solder TIM 112 was heat aged at a second processing temperature of about 180[deg.] C to about 210[deg.] C in an N2 environment for about 5 minutes to about 15 minutes. The solder TIM 112 was observed to be completely void free. Accordingly, the article that results is also substantially free of an organic flux or an organic flux residue. In an embodiment, the solder TIM 112 was produced without the use of a flux. A eutectic InSn solder was plated on the heat spreader 118. The solder TIM 112 was about 6 [mu]m thick, with a gold flash layer that was about 0.05 [mu]m thick. The solder TIM 112 was bonded between the die 110 and the heat spreader 118 at a first processing temperature of about 120[deg.] C. Thereafter, the solder TIM 112 was heat aged at a second processing temperature of about 175[deg.] C in an N2 environment for about 2 minutes. The solder TIM 112 was observed to be completely void free. Accordingly, the article that results is also substantially free of an organic flux or an organic flux residue. In an embodiment, the solder TIM 112 was produced without the use of a flux. A eutectic InSn solder was plated on the heat spreader 118. The solder TIM 112 was about 12 [mu]m thick, with a gold flash layer that was about 0.2 [mu]m thick. The solder TIM 112 was bonded between the die 110 and the heat spreader 118 at a first processing temperature of about 120[deg.] C. Thereafter, the solder TIM 112 was heat aged at a second processing temperature of about 210[deg.] C in an N2 environment for about 16 minutes. The solder TIM 112 was observed to be completely void free. Accordingly, the article that results is also substantially free of an organic flux or an organic flux residue. FIG. 2 is a flow chart 200 that describes a process flow according to an embodiment. The various processes are depicted in schematic form and several incidental processes are not illustrated for simplicity. At 210, the process includes locating a die and a heat sink within a tool. At 220, the process includes bonding the solder to the heat sink surface at a first processing temperature to achieve a solder TIM with a first remelting temperature.
By way of non-limiting example, a 52In48Sn solder embodiment is placed upon a die and a heat spreader is pressed against the solder. The first processing temperature is in the range from about 115[deg.] C to about 125[deg.] C. At 222, the process includes purging the tool. Purging the tool allows for a substantially oxidation-free atmosphere. A substantially oxidation-free atmosphere can eliminate oxide formations on the solder precursors, such that a selected amount of a reflowed solder TIM is formed without the use of an organic flux. In an embodiment, the process includes purging 222 and bonding 220. At 230, the process includes heat aging the solder TIM at a second processing temperature that is higher than the first processing temperature. By way of continuing the non-limiting example, the heat aging of a solder TIM is done in a range from about 175[deg.] C to about 215[deg.] C and for a time from about five minutes to about two hours. The solder TIM thereafter has a solder second remelting temperature that is higher than the solder first remelting temperature. By this process embodiment, a high melting-point solder is manufactured that first reflows in the temperature range from about 115[deg.] C to about 125[deg.] C. This low first temperature allows for a significantly lower compressive stress in the die. In an embodiment, the process can repeat at least one of purging 222 and reducing the gas pressure 224 as a pre-process for heat aging. In this way, the need for a solder flux is also reduced. FIG. 3 is a heat-versus-time processing graphic according to an embodiment. The process includes first bonding the solder TIM to the heat sink at a first processing temperature 360. A first ramp-up rate 358 is depicted at an arbitrary slope. The first processing temperature is in the range from about 115[deg.] C to about 125[deg.] C. This low first temperature allows for a significantly lower compressive stress in the die. After first bonding, the process includes a second heat aging of the solder TIM at a second processing temperature 370. A second ramp-up rate 368 is depicted at an arbitrary slope, and the cool-down in the graph is depicted arbitrarily. The second processing temperature is in the range from about 175[deg.] C to about 215[deg.] C. FIG. 4 is a cut-away elevation that depicts a computing system 400 according to an embodiment. One or more of the foregoing embodiments of the reflowed solder TIM structures may be utilized in a computing system, such as the computing system 400 of FIG. 4. Hereinafter any flux-free or flux residue free solder TIM embodiment alone or in combination with any other embodiment is referred to as an embodiment(s) configuration. The computing system 400 includes at least one processor, which is enclosed in a package 410, a data storage system 412, at least one input device such as a keyboard 414, and at least one output device such as a monitor 416, for example. The computing system 400 includes a processor that processes data signals, and may include, for example, a microprocessor, available from Intel Corporation. In addition to the keyboard 414, the computing system 400 can include another user input device such as a mouse 418, for example. The computing system 400 can include a structure, after processing as depicted in FIG. 
1, including the die 110, the solder TIM 112, and the heat spreader 118. For purposes of this disclosure, a computing system 400 embodying components in accordance with the claimed subject matter may include any system that utilizes a microelectronic device system, which may include, for example, at least one of the reflowed solder TIM structure embodiments that is coupled to data storage such as dynamic random access memory (DRAM), polymer memory, flash memory, and phase-change memory. In this embodiment, the embodiment(s) is coupled to any combination of these functionalities by being coupled to a processor. In an embodiment, however, an embodiment(s) configuration set forth in this disclosure is coupled to any of these functionalities. For an example embodiment, data storage includes an embedded DRAM cache on a die. Additionally in an embodiment, the embodiment(s) configuration that is coupled to the processor (not pictured) is part of the system with an embodiment(s) configuration that is coupled to the data storage of the DRAM cache. Additionally in an embodiment, an embodiment(s) configuration is coupled to the data storage 412. In an embodiment, the computing system 400 can also include a die that contains a digital signal processor (DSP), a micro controller, an application specific integrated circuit (ASIC), or a microprocessor. In this embodiment, the embodiment(s) configuration is coupled to any combination of these functionalities by being coupled to a processor. For an example embodiment, a DSP (not pictured) is part of a chipset that may include a stand-alone processor and the DSP as separate parts of the chipset on the board 420. In this embodiment, an embodiment(s) configuration is coupled to the DSP, and a separate embodiment(s) configuration may be present that is coupled to the processor in the package 410. Additionally in an embodiment, an embodiment(s) configuration is coupled to a DSP that is mounted on the same board 420 as the package 410. It can now be appreciated that the embodiment(s) configuration can be combined as set forth with respect to the computing system 400, in combination with an embodiment(s) configuration as set forth by the various embodiments of the flux-free or flux residue free solder TIM within this disclosure and their equivalents. It can be appreciated that embodiments set forth in this disclosure can be applied to devices and apparatuses other than a traditional computer. For example, a die can be packaged with an embodiment(s) configuration, and placed in a portable device such as a wireless communicator or a hand-held device such as a personal data assistant and the like. Another example is a die that can be packaged with an embodiment(s) configuration and placed in a vehicle such as an automobile, a locomotive, a watercraft, an aircraft, or a spacecraft. FIG. 5 is a schematic of a computing system according to an embodiment. The electronic system 500 as depicted can embody the computing system 400 depicted in FIG. 4, but the electronic system is depicted more generically and includes the flux-free or flux residue free solder TIM embodiment for at least one component. The electronic system 500 incorporates at least one electronic assembly 510, such as an IC die illustrated in FIG. 1. In an embodiment, the electronic system 500 is a computer system that includes a system bus 520 to electrically couple the various components of the electronic system 500.
The system bus 520 is a single bus or any combination of busses according to various embodiments. The electronic system 500 includes a voltage source 530 that provides power to the integrated circuit 510. In some embodiments, the voltage source 530 supplies current to the integrated circuit 510 through the system bus 520. The integrated circuit 510 is electrically coupled to the system bus 520 and includes any circuit, or combination of circuits according to an embodiment. In an embodiment, the integrated circuit 510 includes a processor 512 that can be of any type. As used herein, the processor 512 means any type of circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor, or another processor. Other types of circuits that can be included in the integrated circuit 510 are a custom circuit or an ASIC, such as a communications circuit 514 for use in wireless devices such as cellular telephones, pagers, portable computers, two-way radios, and similar electronic systems. In an embodiment, the processor 512 includes on-die memory 516 such as SRAM. In an embodiment, the processor 512 includes on-die memory 516 such as eDRAM. In an embodiment, the electronic system 500 also includes an external memory 540 that in turn may include one or more memory elements suitable to the particular application, such as a main memory 542 in the form of RAM, one or more hard drives 544, and/or one or more drives that handle removable media 546, such as diskettes, compact disks (CDs), digital video disks (DVDs), flash memory keys, and other removable media known in the art. In an embodiment, the electronic system 500 also includes a display device 550 and an audio output 560. In an embodiment, the electronic system 500 includes a controller 570, such as a keyboard, mouse, trackball, game controller, microphone, voice-recognition device, or any other device that inputs information into the electronic system 500. As shown herein, integrated circuit 510 can be implemented in a number of different embodiments, including an electronic package, an electronic system, a computer system, one or more methods of fabricating an integrated circuit, and one or more methods of fabricating an electronic assembly that includes the integrated circuit and the flux-free or flux residue free solder TIM as set forth herein in the various embodiments and their art-recognized equivalents. The elements, materials, geometries, dimensions, and sequence of operations can all be varied to suit particular packaging requirements. The Abstract is provided to comply with 37 C.F.R. [section]1.72(b) requiring an abstract that will allow the reader to quickly ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment.
Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate preferred embodiment. It will be readily understood by those skilled in the art that various other changes in the details, material, and arrangements of the parts and method stages which have been described and illustrated in order to explain the nature of this invention may be made without departing from the principles and scope of the invention as expressed in the subjoined claims. |
In one embodiment, a processor includes: a plurality of cores, at least some having an advanced programmable interrupt controller (APIC) identifier associated therewith; a plurality of power management agents associated with the plurality of cores; and a power controller to receive an indication of an interrupt and a first APIC identifier and send a wake signal and the first APIC identifier to the plurality of power management agents to determine which of the plurality of cores is associated with the first APIC identifier. Other embodiments are described and claimed. |
What is claimed is: 1. A processor comprising: a plurality of cores, at least some of the plurality of cores having an advanced programmable interrupt controller (APIC) identifier associated therewith; a plurality of power management agents associated with the plurality of cores; and a power controller to receive an indication of an interrupt and a first APIC identifier and send a wake signal and the first APIC identifier to the plurality of power management agents to determine which of the plurality of cores is associated with the first APIC identifier. 2. The processor of claim 1, wherein the power controller is to send the wake signal and the first APIC identifier when the processor is in a package low power state. 3. The processor of claim 2, wherein responsive to the determination, the power controller is to cause a fabric coupled to the plurality of cores and the determined core associated with the first APIC identifier to wake up, while others of the plurality of cores are to remain in a low power state. 4. The processor of claim 3, wherein a power management agent associated with the determined core is to send a wake request to the power controller, responsive to a match between the first APIC identifier and an APIC identifier of the determined core stored in a storage associated with the power management agent. 5. The processor of claim 4, wherein the power management agent is to send the wake request via a power management sideband interconnect, while the fabric coupled to the plurality of cores is not in an active state. 6. The processor of claim 4, wherein the power management agent is to send a core identifier with the wake request, the core identifier to identify the destined core and different than the APIC identifier associated with the destined core. 7. The processor of claim 3, wherein the power controller is to cause the determined core and the fabric to exit a low power state concurrently. 8. The processor of claim 2, further comprising a caching agent, wherein the caching agent is to send the interrupt directly to the core via the fabric, after the core and the fabric have entered an active state. 9. The processor of claim 1, further comprising a plurality of adapter units associated with the plurality of cores, wherein the plurality of adapter units comprises the plurality of power management agents. 10. The processor of claim 9, wherein the plurality of adapter units are to be maintained in an active state when the associated plurality of cores are in a first low power state. 11. A method comprising: receiving a wake signal and an interrupt destination identifier in a power control unit of a processor while the processor is in a low power state, responsive to receipt of an interrupt in the processor; sending the wake signal and the interrupt destination identifier to a plurality of power management agents of the processor via a power management sideband interconnect; receiving an indication of a core associated with the interrupt destination identifier; and causing the core associated with the interrupt destination identifier and a fabric that couples the plurality of cores to exit the low power state concurrently. 12. The method of claim 11, further comprising broadcasting the wake signal and the interrupt destination identifier to the plurality of power management agents. 13. The method of claim 11, further comprising receiving the wake signal in the power control unit from an interface circuit of the processor. 14. 
The method of claim 13, further comprising receiving the interrupt in the core associated with the interrupt destination identifier from an uncore logic coupled to the interface circuit. 15. The method of claim 11, further comprising maintaining others of the plurality of cores in the low power state while causing the core and the fabric to exit the low power state. 16. The method of claim 11, wherein receiving the indication of the core associated with the interrupt destination identifier comprises receiving a message from a power management agent associated with the core, the message to indicate that the core is associated with the interrupt destination identifier and to request the power control unit to wake the core from the low power state. 17. A computer-readable storage medium including computer-readable instructions, when executed, to implement a method as claimed in any one of claims 11 to 16. 18. An apparatus comprising means to perform a method as claimed in any one of claims 11 to 16. 19. A system comprising: a processor having a power controller, a core to execute instructions and a core perimeter logic coupled to the core, the core perimeter logic including a power management agent to access an interrupt destination identifier associated with the core, wherein the power management agent is to send a message to the power controller to request the power controller to cause the core to wake up, responsive to detection of a match between a first interrupt destination identifier of a broadcast message sent to a plurality of cores and the interrupt destination identifier associated with the core; and a dynamic random access memory (DRAM) coupled to the processor. 20. The system of claim 19, wherein the processor further comprises a fabric to couple the plurality of cores and a sideband interconnect to couple the power controller to a plurality of core perimeter logics, wherein the power controller is to send the broadcast message via the sideband interconnect responsive to receipt of an interrupt in the processor while the processor is in a low power state. 21. The system of claim 20, wherein the power controller is to cause the core and the fabric to exit the low power state concurrently. 22. The system of claim 20, wherein the power controller is to receive a wake signal and the interrupt destination identifier from an interface circuit of the processor, responsive to receipt of the interrupt in the processor while the processor is in a package low power state. 23. A processor comprising: a plurality of core means, at least some of the plurality of core means having an advanced programmable interrupt controller (APIC) identifier associated therewith; a plurality of power management means associated with the plurality of core means; and a power control means for receiving an indication of an interrupt and a first APIC identifier and sending a wake signal and the first APIC identifier to the plurality of power management means to determine which of the plurality of core means is associated with the first APIC identifier. 24. The processor of claim 23, wherein the power control means is for sending the wake signal and the first APIC identifier when the processor is in a package low power state. 25. 
The processor of claim 24, wherein responsive to the determination, the power control means is for causing a fabric coupled to the plurality of core means and the determined core means associated with the first APIC identifier to wake up, while others of the plurality of core means are to remain in a low power state. 26. The processor of claim 25, wherein a power management means associated with the determined core means is for sending a wake request to the power control means, responsive to a match between the first APIC identifier and an APIC identifier of the determined core means stored in a storage associated with the power management means. |
PROCESSOR HAVING CONCURRENT COREAND FABRIC EXIT FROM A LOW POWER STATETechnical Field[0001] Embodiments relate to power management of a system, and more particularly to power management of a multicore processor.Background[0002] Advances in semiconductor processing and logic design have permitted an increase in the amount of logic that may be present on integrated circuit devices. As a result, computer system configurations have evolved from a single or multiple integrated circuits in a system to multiple hardware threads, multiple cores, multiple devices, and/or complete systems on individual integrated circuits. Additionally, as the density of integrated circuits has grown, the power requirements for computing systems (from embedded systems to servers) have also escalated. Furthermore, software inefficiencies, and its requirements of hardware, have also caused an increase in computing device energy consumption. In fact, some studies indicate that computing devices consume a sizeable percentage of the entire electricity supply for a country, such as the United States of America. As a result, there is a vital need for energy efficiency and conservation associated with integrated circuits. These needs will increase as servers, desktop computers, notebooks, Ultrabooks™, tablets, mobile phones, processors, embedded systems, etc. become even more prevalent (from inclusion in the typical computer, automobiles, and televisions to biotechnology).Brief Description of the Drawings[0003] FIG. 1 is a block diagram of a portion of a system in accordance with an embodiment of the present invention.[0004] FIG. 2 is a block diagram of a processor in accordance with an embodiment of the present invention.[0005] FIG. 3 is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention.[0006] FIG. 4 is an embodiment of a processor including multiple cores. [0007] FIG. 5 is a block diagram of a micro-architecture of a processor core in accordance with one embodiment of the present invention.[0008] FIG. 6 is a block diagram of a micro-architecture of a processor core in accordance with another embodiment.[0009] FIG. 7 is a block diagram of a micro-architecture of a processor core in accordance with yet another embodiment.[0010] FIG. 8 is a block diagram of a micro-architecture of a processor core in accordance with a still further embodiment.[0011] FIG. 9 is a block diagram of a processor in accordance with another embodiment of the present invention.[0012] FIG. 10 is a block diagram of a representative SoC in accordance with an embodiment of the present invention.[0013] FIG. 11 is a block diagram of another example SoC in accordance with an embodiment of the present invention.[0014] FIG. 12 is a block diagram of an example system with which embodiments can be used.[0015] FIG. 13 is a block diagram of another example system with which embodiments may be used.[0016] FIG. 14 is a block diagram of a representative computer system.[0017] FIG. 15 is a block diagram of a system in accordance with an embodiment of the present invention.[0018] FIG. 16 is a block diagram of a processor in accordance with an embodiment of the present invention.[0019] FIG. 17 is a flow diagram of a method in accordance with an embodiment of the present invention.[0020] FIG. 18 is a flow diagram of a method in accordance with another embodiment of the present invention. [0021] FIG. 19 is a flow diagram of a method in accordance with yet another embodiment of the present invention.[0022] FIG. 
20 is a timing diagram illustrating operations within a processor responsive to receipt of an interrupt in accordance with an embodiment of the present invention.[0023] FIG. 21 is a timing diagram of further details of issuance of a wake request in accordance with an embodiment of the present invention.Detailed Description[0024] In various embodiments, a multicore processor is provided with an interrupt mechanism to allow a concurrent waking up of a targeted core and a fabric or other interconnect structure to enable a received interrupt to be provided to the targeted core with reduced latency, where the targeted core and potentially a remainder of the processor are in a given low power state, such as a package low power state, when the interrupt is received.[0025] To this end, embodiments provide techniques to eliminate core low power state exit dependencies that are serialized behind low power state activities of a fabric domain. In this way, embodiments provide for greatly reduced latency when exiting from certain low power states, such as package deep low power states. As such, embodiments may be used in systems that more readily leverage deep package low power states, as with the reduced exit latency, a processor can be placed into such a package deep low power state and exit in a sufficient amount of time to be able to handle one or more received interrupts within latency tolerances.[0026] As such, embodiments may further provide a greater ability to control decisions as to the type of low power state to enter based on such reduced exit latencies. As examples, for a particular package deep low power state, the latency can be reduced, for example, by about 10-15 microseconds. As such, embodiments enable a processor to enter into a deeper low power state, where such deeper low power state may consume only a small portion (e.g., approximately 10%) of the power consumption of another low power state in which a processor would otherwise be placed.[0027] Although the following embodiments are described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or processors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to any particular type of computer systems. That is, disclosed embodiments can be used in many different system types, ranging from server computers (e.g., tower, rack, blade, micro-server and so forth), communications systems, storage systems, desktop computers of any configuration, laptop, notebook, and tablet computers (including 2:1 tablets, phablets and so forth), and may also be used in other devices, such as handheld devices, systems on chip (SoCs), and embedded applications. Some examples of handheld devices include cellular phones such as smartphones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications may typically include a microcontroller, a digital signal processor (DSP), network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, wearable devices, or any other system that can perform the functions and operations taught below. More so, embodiments may be implemented in mobile terminals having standard voice functionality such as mobile phones, smartphones and phablets, and/or in non-mobile terminals without a standard wireless voice function communication capability, such as many wearables, tablets, notebooks, desktops, micro-servers, servers and so forth. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the embodiments of methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a 'green technology' future, such as for power conservation and energy efficiency in products that encompass a large portion of the US economy.
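Before turning to the figures, the wake flow summarized in paragraphs [0024]-[0026] above can be modeled behaviorally. The sketch below is a minimal illustration, with invented class and method names rather than any real hardware interface: the power controller broadcasts a wake signal with the target APIC identifier to per-core power management agents, the agent whose stored identifier matches requests a wake, and the controller then wakes the matched core and the fabric concurrently while the remaining cores stay in their low power state.

```python
# Minimal behavioral sketch of the broadcast-and-match wake flow. All names
# are illustrative assumptions; no real hardware interface is implied.
from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    apic_id: int
    awake: bool = False

class PowerManagementAgent:
    """Per-core agent that remains active while its core sleeps."""
    def __init__(self, core):
        self.core = core

    def on_wake_broadcast(self, apic_id):
        # Compare the broadcast identifier against the APIC identifier held
        # for the sleeping core; report the core id only on a match.
        return self.core.core_id if self.core.apic_id == apic_id else None

class PowerController:
    def __init__(self, cores):
        self.cores = {c.core_id: c for c in cores}
        self.agents = [PowerManagementAgent(c) for c in cores]
        self.fabric_awake = False

    def handle_interrupt(self, apic_id):
        # Broadcast the wake signal and APIC identifier to every agent,
        # standing in for a power management sideband interconnect.
        for agent in self.agents:
            matched = agent.on_wake_broadcast(apic_id)
            if matched is not None:
                # Wake the matched core and the fabric concurrently; the
                # other cores remain in their low power state.
                self.cores[matched].awake = True
                self.fabric_awake = True
                return matched
        return None

pcu = PowerController([Core(0, 0x10), Core(1, 0x12), Core(2, 0x14)])
print("woke core:", pcu.handle_interrupt(0x12))  # -> woke core: 1
print("still asleep:", [c.core_id for c in pcu.cores.values() if not c.awake])
```

The point of the design, per the embodiments described here, is that the target core is identified without first waking the fabric, so the core and fabric low power state exits can overlap rather than serialize.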
[0028] Referring now to FIG. 1, shown is a block diagram of a portion of a system in accordance with an embodiment of the present invention. As shown in FIG. 1, system 100 may include various components, including a processor 110 which as shown is a multicore processor. Processor 110 may be coupled to a power supply 150 via an external voltage regulator 160, which may perform a first voltage conversion to provide a primary regulated voltage to processor 110. [0029] As seen, processor 110 may be a single die processor including multiple cores 120a-120n. In addition, each core may be associated with an integrated voltage regulator (IVR) 125a-125n, which receives the primary regulated voltage and generates an operating voltage to be provided to one or more agents of the processor associated with the IVR. Accordingly, an IVR implementation may be provided to allow for fine-grained control of voltage and thus power and performance of each individual core. As such, each core can operate at an independent voltage and frequency, enabling great flexibility and affording wide opportunities for balancing power consumption with performance. In some embodiments, the use of multiple IVRs enables the grouping of components into separate power planes, such that power is regulated and supplied by the IVR to only those components in the group. During power management, a given power plane of one IVR may be powered down or off when the processor is placed into a certain low power state, while another power plane of another IVR remains active, or fully powered.[0030] Still referring to FIG. 1, additional components may be present within the processor including an input/output interface 132, another interface 134, and an integrated memory controller 136. As seen, each of these components may be powered by another integrated voltage regulator 125x. In one embodiment, interface 132 may enable operation for an Intel® Quick Path Interconnect (QPI) interconnect, which provides for point-to-point (PtP) links in a cache coherent protocol that includes multiple layers including a physical layer, a link layer and a protocol layer. In turn, interface 134 may communicate via a Peripheral Component Interconnect Express (PCIe™) protocol.[0031] Also shown is a power control unit (PCU) 138, which may include hardware, software and/or firmware to perform power management operations with regard to processor 110. As seen, PCU 138 provides control information to external voltage regulator 160 via a digital interface to cause the voltage regulator to generate the appropriate regulated voltage.
PCU 138 also provides control information to IVRs 125 via another digital interface to control the operating voltage generated (or to cause a corresponding IVR to be disabled in a low power mode). In various embodiments, PCU 138 may include a variety of power management logic units to perform hardware-based power management. Such power management may be wholly processor controlled (e.g., by various processor hardware, and which may be triggered by workload and/or power, thermal or other processor constraints) and/or the power management may be performed responsive to external sources (such as a platform or management power management source or system software).[0032] While not shown for ease of illustration, understand that additional components may be present within processor 110 such as uncore logic, and other components such as internal memories, e.g., one or more levels of a cache memory hierarchy and so forth. Furthermore, while shown in the implementation of FIG. 1 with an integrated voltage regulator, embodiments are not so limited.[0033] Note that the power management techniques described herein may be independent of and complementary to an operating system (OS)-based power management (OSPM) mechanism. According to one example OSPM technique, a processor can operate at various performance states or levels, so-called P-states, namely from P0 to PN. In general, the P1 performance state may correspond to the highest guaranteed performance state that can be requested by an OS. In addition to this P1 state, the OS can further request a higher performance state, namely a P0 state. This P0 state may thus be an opportunistic or turbo mode state in which, when power and/or thermal budget is available, processor hardware can configure the processor or at least portions thereof to operate at a higher than guaranteed frequency. In many implementations a processor can include multiple so-called bin frequencies above the P1 guaranteed maximum frequency, up to a maximum peak frequency of the particular processor, as fused or otherwise written into the processor during manufacture. In addition, according to one OSPM mechanism, a processor can operate at various power states or levels. With regard to power states, an OSPM mechanism may specify different power consumption states, generally referred to as C-states, C0, C1 to Cn states. When a core is active, it runs at a C0 state, and when the core is idle it may be placed in a core low power state, also called a core non-zero C-state (e.g., C1-C6 states), with each C-state being at a lower power consumption level (such that C6 is a deeper low power state than C1, and so forth).[0034] Understand that many different types of power management techniques may be used individually or in combination in different embodiments. As representative examples, a power controller may control the processor to be power managed by some form of dynamic voltage frequency scaling (DVFS) in which an operating voltage and/or operating frequency of one or more cores or other processor logic may be dynamically controlled to reduce power consumption in certain situations. In an example, DVFS may be performed using Enhanced Intel SpeedStep™ technology available from Intel Corporation, Santa Clara, CA, to provide optimal performance at a lowest power consumption level. In another example, DVFS may be performed using Intel TurboBoost™ technology to enable one or more cores or other compute engines to operate at a higher than guaranteed operating frequency based on conditions (e.g., workload and availability).
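To make the DVFS idea concrete, the toy sketch below selects an operating point from recent utilization. The operating points, thresholds, and function name are invented for illustration and do not represent actual SpeedStep or TurboBoost behavior.

```python
# Toy illustration of dynamic voltage and frequency scaling: an operating
# point (frequency, voltage) is chosen from recent utilization, trading
# performance against power. All values below are assumed for illustration.

# (frequency_mhz, voltage_mv) pairs, lowest to highest performance state
OPERATING_POINTS = [(800, 700), (1600, 850), (2400, 1000), (3200, 1150)]

def select_operating_point(utilization):
    """Map a utilization in [0, 1] to one of the P-state-like operating
    points: higher load selects a faster, higher-voltage point."""
    index = min(int(utilization * len(OPERATING_POINTS)),
                len(OPERATING_POINTS) - 1)
    return OPERATING_POINTS[index]

for load in (0.05, 0.40, 0.95):
    freq, volt = select_operating_point(load)
    print(f"utilization {load:.0%}: run at {freq} MHz / {volt} mV")
```

A real governor would also weigh thermal and power budget before granting the highest operating points, which is the role the turbo-mode discussion above assigns to the processor hardware.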
[0035] Another power management technique that may be used in certain examples is dynamic swapping of workloads between different compute engines. For example, the processor may include asymmetric cores or other processing engines that operate at different power consumption levels, such that in a power constrained situation, one or more workloads can be dynamically switched to execute on a lower power core or other compute engine. Another exemplary power management technique is hardware duty cycling (HDC), which may cause cores and/or other compute engines to be periodically enabled and disabled according to a duty cycle, such that one or more cores may be made inactive during an inactive period of the duty cycle and made active during an active period of the duty cycle. Although described with these particular examples, understand that many other power management techniques may be used in particular embodiments.[0036] Embodiments can be implemented in processors for various markets including server processors, desktop processors, mobile processors and so forth. Referring now to FIG. 2, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown in FIG. 2, processor 200 may be a multicore processor including a plurality of cores 210a-210n. In one embodiment, each such core may be of an independent power domain and can be configured to enter and exit active states and/or maximum performance states based on workload. Each core 210 may be associated with a corresponding core perimeter logic 212a-212n. In general, core perimeter logic 212 may include one or more independent power/frequency domains that provide an interface between core circuitry and a remainder of the processor. Notably, one or more independent storage units of each core perimeter logic 212 may be adapted to store at least certain context information of the associated core to enable fast entry into and exit from particular low power states and to further enable certain processor operations (such as interrupt handling and snoop responses) to occur while a corresponding core is in a low power state. In addition, such perimeter logic 212 may provide interrupt information while core 210 is in a low power state, to enable faster low power state exits when a given core is targeted by an interrupt.[0037] The various cores may be coupled via an interconnect 215 to a system agent or uncore 220 that includes various components. As seen, the uncore 220 may include a shared cache 230 which may be a last level cache. In addition, the uncore may include an integrated memory controller 240 to communicate with a system memory (not shown in FIG. 2), e.g., via a memory bus. Uncore 220 also includes various interfaces 250 and a power control unit 255, which may include logic to perform the power management techniques described herein. In some cases, in addition to direct connections between given cores 210 and uncore 220, core perimeter logics 212 also may be directly coupled to at least portions of uncore 220.[0038] In addition, by interfaces 250a-250n, connection can be made to various off-chip components such as peripheral devices, mass storage and so forth. While shown with this particular implementation in the embodiment of FIG. 
2, the scope of the present invention is not limited in this regard.[0039] Referring now to FIG. 3, shown is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention. As shown in the embodiment of FIG. 3, processor 300 includes multiple domains. Specifically, a core domain 310 can include a plurality of cores 3100-310n, a graphics domain 320 can include one or more graphics engines, and a system agent domain 350 may further be present. In some embodiments, system agent domain 350 may execute at an independent frequency from the core domain and may remain powered on at all times to handle power control events and power management such that domains 310 and 320 can be controlled to dynamically enter into and exit high power and low power states. Each of domains 310 and 320 may operate at different voltage and/or power. Note that while only shown with three domains, understand the scope of the present invention is not limited in this regard and additional domains can be present in other embodiments. For example, multiple core domains may be present each including at least one core.[0040] In general, each core 310 may further include low level caches in addition to various execution units and additional processing elements. In turn, the various cores may be coupled to each other and to a shared cache memory formed of a plurality of units of a last level cache (LLC) 3400-340n. In various embodiments, LLC 340 may be shared amongst the cores and the graphics engine, as well as various media processing circuitry. As seen, a ring interconnect 330 thus couples the cores together, and provides interconnection between the cores, graphics domain 320 and system agent circuitry 350. In one embodiment, interconnect 330 can be part of the core domain. However in other embodiments the ring interconnect can be of its own domain. As further shown, a plurality of core perimeter logics 3120-312n each may be associated with a given core and may provide for efficient storage and retrieval of context information, e.g., as used during low power entry and exit situations. In the illustration of FIG. 3, core perimeter logic 312 is shown coupled between a corresponding core 310 and ring interconnect 330, and may further be used to provide information for use in identifying a target core for an interrupt, while the core is in a low power state. However understand that direct connection between core 310 and ring interconnect 330 may be present, along with corresponding direct connection between core perimeter logic 312 and ring interconnect 330, in some embodiments.[0041] As further seen, system agent domain 350 may include display controller 352 which may provide control of and an interface to an associated display. As further seen, system agent domain 350 may include a power control unit 355 which can include logic to perform the power management techniques described herein.[0042] As further seen in FIG. 3, processor 300 can further include an integrated memory controller (IMC) 370 that can provide for an interface to a system memory, such as a dynamic random access memory (DRAM). Multiple interfaces 3800-380n may be present to enable interconnection between the processor and other circuitry. For example, in one embodiment at least one direct media interface (DMI) interface may be provided as well as one or more PCIe™ interfaces.
Still further, to provide for communications between other agents such as additional processors or other circuitry, one or more QPI interfaces may also be provided. Although shown at this high level in the embodiment of FIG. 3, understand the scope of the present invention is not limited in this regard.[0043] Referring to FIG. 4, an embodiment of a processor including multiple cores is illustrated. Processor 400 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SoC), or other device to execute code. Processor 400, in one embodiment, includes at least two cores— cores 401 and 402, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 400 may include any number of processing elements that may be symmetric or asymmetric.[0044] In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.[0045] A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.[0046] Physical processor 400, as illustrated in FIG. 4, includes two cores, cores 401 and 402. Here, cores 401 and 402 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 401 includes an out-of-order processor core, while core 402 includes an in-order processor core. However, cores 401 and 402 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated ISA, a co-designed core, or other known core. 
Yet to further the discussion, the functional units illustrated in core 401 are described in further detail below, as the units in core 402 operate in a similar manner.[0047] As depicted, core 401 includes two hardware threads 401a and 401b, which may also be referred to as hardware thread slots 401a and 401b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor 400 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 401a, a second thread is associated with architecture state registers 401b, a third thread may be associated with architecture state registers 402a, and a fourth thread may be associated with architecture state registers 402b. Here, each of the architecture state registers (401a, 401b, 402a, and 402b) may be referred to as processing elements, thread slots, or thread units, as described above. As illustrated, architecture state registers 401a are replicated in architecture state registers 401b, so individual architecture states/contexts are capable of being stored for logical processor 401a and logical processor 401b. In core 401, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 430 may also be replicated for threads 401a and 401b. Some resources, such as re-order buffers in reorder/retirement unit 435, I-TLB 420, load/store buffers, and queues may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB 415, execution unit(s) 440, and portions of out-of-order unit 435 are potentially fully shared.[0048] Processor 400 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 4, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 401 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 420 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 420 to store address translation entries for instructions.[0049] Core 401 further includes decode module 425 coupled to fetch unit 420 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 401a, 401b, respectively. Usually core 401 is associated with a first ISA, which defines/specifies instructions executable on processor 400. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 425 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, decoders 425, in one embodiment, include logic designed or adapted to recognize specific instructions, such as a transactional instruction.
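The split between replicated and shared resources described for core 401 can be summarized as a data structure. The C sketch below is a conceptual model only; field names and sizes are invented for illustration, and real hardware implements this partitioning in silicon rather than in memory.

```c
#include <stdint.h>

/* Per-thread architectural state: replicated once per hardware thread
 * slot (e.g., 401a and 401b above), so each logical processor keeps
 * its own independent context. Layout is purely illustrative. */
struct thread_state {
    uint64_t gprs[16];      /* general purpose architectural registers */
    uint64_t ip;            /* instruction pointer                     */
    uint64_t flags;
};

/* Core-level resources: a single copy serves both thread slots,
 * either carved up per thread (partitioned) or fully shared. */
struct core {
    struct thread_state thread[2];  /* replicated per thread slot      */
    uint8_t  rob_entries_per_thread[2]; /* reorder buffer: partitioned */
    uint32_t l1d_cache_lines;       /* low-level data cache: shared    */
    uint32_t dtlb_entries;          /* data TLB: shared                */
};
```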
As a result of the recognition by decoders 425, the architecture or core 401 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions; some of which may be new or old instructions.[0050] In one example, allocator and renamer block 430 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 401a and 401b are potentially capable of out-of-order execution, where allocator and renamer block 430 also reserves other resources, such as reorder buffers to track instruction results. Unit 430 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 400. Reorder/retirement unit 435 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order.[0051] Scheduler and execution unit(s) block 440, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.[0052] Lower level data cache and data translation buffer (D-TLB) 450 are coupled to execution unit(s) 440. The data cache is to store recently used/operated on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.[0053] Here, cores 401 and 402 share access to higher-level or further-out cache 410, which is to cache recently fetched elements. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache 410 is a last-level data cache— last cache in the memory hierarchy on processor 400— such as a second or third level data cache. However, higher level cache 410 is not so limited, as it may be associated with or include an instruction cache. A trace cache— a type of instruction cache— instead may be coupled after decoder 425 to store recently decoded traces.[0054] In the depicted configuration, processor 400 also includes bus interface module 405 and a power controller 460, which may perform power management in accordance with an embodiment of the present invention. In this scenario, bus interface 405 is to communicate with devices external to processor 400, such as system memory and other components.[0055] A memory controller 470 may interface with other devices such as one or many memories. In an example, bus interface 405 includes a ring interconnect with a memory controller for interfacing with a memory and a graphics controller for interfacing with a graphics processor.
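The D-TLB role described in paragraph [0052], caching recent virtual-to-physical translations so that most accesses avoid a page-table walk, can be illustrated with a short lookup routine. This is a software model assuming 4 KiB pages; the structure layout and the dtlb_lookup() helper are invented for the sketch.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12            /* 4 KiB pages, one common choice */

/* One D-TLB entry caching a recent virtual-to-physical translation. */
struct tlb_entry { uint64_t vpn; uint64_t pfn; bool valid; };

/* Illustrative lookup: on a hit, splice the cached page frame number
 * onto the page offset; on a miss, a real core would walk the page
 * table structure mentioned above to refill the TLB. */
static bool dtlb_lookup(const struct tlb_entry *tlb, int nentries,
                        uint64_t vaddr, uint64_t *paddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    for (int i = 0; i < nentries; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *paddr = (tlb[i].pfn << PAGE_SHIFT) |
                     (vaddr & ((1ULL << PAGE_SHIFT) - 1));
            return true;         /* TLB hit */
        }
    }
    return false;                /* TLB miss: walk the page tables */
}
```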
In an SoC environment, even more devices, such as a network interface, coprocessors, memory, graphics processor, and any other known computer devices/interface may be integrated on a single die or integrated circuit to provide small form factor with high functionality and low power consumption.[0056] Referring now to FIG. 5, shown is a block diagram of a micro-architecture of a processor core in accordance with one embodiment of the present invention. As shown in FIG. 5, processor core 500 may be a multi-stage pipelined out-of-order processor. Core 500 may operate at various voltages based on a received operating voltage, which may be received from an integrated voltage regulator or external voltage regulator.[0057] As seen in FIG. 5, core 500 includes front end units 510, which may be used to fetch instructions to be executed and prepare them for use later in the processor pipeline. For example, front end units 510 may include a fetch unit 501, an instruction cache 503, and an instruction decoder 505. In some implementations, front end units 510 may further include a trace cache, along with microcode storage as well as a micro-operation storage. Fetch unit 501 may fetch macro-instructions, e.g., from memory or instruction cache 503, and feed them to instruction decoder 505 to decode them into primitives, i.e., micro-operations for execution by the processor.[0058] Coupled between front end units 510 and execution units 520 is an out-of-order (OOO) engine 515 that may be used to receive the micro-instructions and prepare them for execution. More specifically, OOO engine 515 may include various buffers to re-order microinstruction flow and allocate various resources needed for execution, as well as to provide renaming of logical registers onto storage locations within various register files such as register file 530 and extended register file 535. Register file 530 may include separate register files for integer and floating point operations. For purposes of configuration, control, and additional operations, a set of machine specific registers (MSRs) 538 may also be present and accessible to various logic within core 500 (and external to the core). For example, power limit information may be stored in one or more MSRs and be dynamically updated as described herein.[0059] Various resources may be present in execution units 520, including, for example, various integer, floating point, and single instruction multiple data (SIMD) logic units, among other specialized hardware. For example, such execution units may include one or more arithmetic logic units (ALUs) 522 and one or more vector execution units 524, among other such execution units.[0060] Results from the execution units may be provided to retirement logic, namely a reorder buffer (ROB) 540. More specifically, ROB 540 may include various arrays and logic to receive information associated with instructions that are executed. This information is then examined by ROB 540 to determine whether the instructions can be validly retired and result data committed to the architectural state of the processor, or whether one or more exceptions occurred that prevent a proper retirement of the instructions. Of course, ROB 540 may handle other operations associated with retirement.[0061] As shown in FIG. 5, ROB 540 is coupled to a cache 550 which, in one embodiment, may be a low level cache (e.g., an L1 cache) although the scope of the present invention is not limited in this regard. Also, execution units 520 can be directly coupled to cache 550.
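Reading a machine specific register, such as the power-limit information held in MSRs 538, can be sketched in C with inline assembly. Heavy hedging applies: RDMSR is a privileged x86 instruction (ring 0 only), and the MSR index shown (0x610, commonly documented as a package power-limit register on recent Intel parts) is illustrative and generation-specific.

```c
#include <stdint.h>

/* Ring-0, x86-only sketch. RDMSR reads the MSR selected by ECX into
 * EDX:EAX; this wrapper reassembles the two halves into one 64-bit
 * value. Executing this from user space faults with #GP. */
static inline uint64_t rdmsr(uint32_t msr)
{
    uint32_t lo, hi;
    __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

/* Example (kernel context only); the index 0x610 is an assumption:
 *   uint64_t pkg_power_limit = rdmsr(0x610);
 */
```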
From cache 550, data communication may occur with higher level caches, system memory and so forth. While shown with this high level in the embodiment of FIG. 5, understand the scope of the present invention is not limited in this regard. For example, while the implementation of FIG. 5 is with regard to an out-of-order machine such as of an Intel® x86 instruction set architecture (ISA), the scope of the present invention is not limited in this regard. That is, other embodiments may be implemented in an in-order processor, a reduced instruction set computing (RISC) processor such as an ARM-based processor, or a processor of another type of ISA that can emulate instructions and operations of a different ISA via an emulation engine and associated logic circuitry.[0062] Referring now to FIG. 6, shown is a block diagram of a micro-architecture of a processor core in accordance with another embodiment. In the embodiment of FIG. 6, core 600 may be a low power core of a different micro-architecture, such as an Intel® Atom™-based processor having a relatively limited pipeline depth designed to reduce power consumption. As seen, core 600 includes an instruction cache 610 coupled to provide instructions to an instruction decoder 615. A branch predictor 605 may be coupled to instruction cache 610. Note that instruction cache 610 may further be coupled to another level of a cache memory, such as an L2 cache (not shown for ease of illustration in FIG. 6). In turn, instruction decoder 615 provides decoded instructions to an issue queue 620 for storage and delivery to a given execution pipeline. A microcode ROM 618 is coupled to instruction decoder 615.[0063] A floating point pipeline 630 includes a floating point register file 632 which may include a plurality of architectural registers of a given bit width such as 128, 256 or 512 bits. Pipeline 630 includes a floating point scheduler 634 to schedule instructions for execution on one of multiple execution units of the pipeline. In the embodiment shown, such execution units include an ALU 635, a shuffle unit 636, and a floating point adder 638. In turn, results generated in these execution units may be provided back to buffers and/or registers of register file 632. Of course understand while shown with these few example execution units, additional or different floating point execution units may be present in another embodiment.[0064] An integer pipeline 640 also may be provided. In the embodiment shown, pipeline 640 includes an integer register file 642 which may include a plurality of architectural registers of a given bit width such as 128 or 256 bits. Pipeline 640 includes an integer scheduler 644 to schedule instructions for execution on one of multiple execution units of the pipeline. In the embodiment shown, such execution units include an ALU 645, a shifter unit 646, and a jump execution unit 648. In turn, results generated in these execution units may be provided back to buffers and/or registers of register file 642. Of course understand while shown with these few example execution units, additional or different integer execution units may be present in another embodiment.[0065] A memory execution scheduler 650 may schedule memory operations for execution in an address generation unit 652, which is also coupled to a TLB 654.
As seen, these structures may couple to a data cache 660, which may be an L0 and/or L1 data cache that in turn couples to additional levels of a cache memory hierarchy, including an L2 cache memory.[0066] To provide support for out-of-order execution, an allocator/renamer 670 may be provided, in addition to a reorder buffer 680, which is configured to reorder instructions executed out of order for retirement in order. Although shown with this particular pipeline architecture in the illustration of FIG. 6, understand that many variations and alternatives are possible.[0067] Note that in a processor having asymmetric cores, such as in accordance with the micro-architectures of FIGs. 5 and 6, workloads may be dynamically swapped between the cores for power management reasons, as these cores, although having different pipeline designs and depths, may be of the same or related ISA. Such dynamic core swapping may be performed in a manner transparent to a user application (and possibly kernel also).[0068] Referring to FIG. 7, shown is a block diagram of a micro-architecture of a processor core in accordance with yet another embodiment. As illustrated in FIG. 7, a core 700 may include a multi-staged in-order pipeline to execute at very low power consumption levels. As one such example, processor 700 may have a micro-architecture in accordance with an ARM Cortex A53 design available from ARM Holdings, LTD., Sunnyvale, CA. In an implementation, an 8-stage pipeline may be provided that is configured to execute both 32-bit and 64-bit code. Core 700 includes a fetch unit 710 that is configured to fetch instructions and provide them to a decode unit 715, which may decode the instructions, e.g., macro-instructions of a given ISA such as an ARMv8 ISA. Note further that a queue 730 may couple to decode unit 715 to store decoded instructions. Decoded instructions are provided to an issue logic 725, where the decoded instructions may be issued to a given one of multiple execution units.[0069] With further reference to FIG. 7, issue logic 725 may issue instructions to one of multiple execution units. In the embodiment shown, these execution units include an integer unit 735, a multiply unit 740, a floating point/vector unit 750, a dual issue unit 760, and a load/store unit 770. The results of these different execution units may be provided to a writeback unit 780. Understand that while a single writeback unit is shown for ease of illustration, in some implementations separate writeback units may be associated with each of the execution units. Furthermore, understand that while each of the units and logic shown in FIG. 7 is represented at a high level, a particular implementation may include more or different structures. A processor designed using one or more cores having a pipeline as in FIG. 7 may be implemented in many different end products, extending from mobile devices to server systems.[0070] Referring to FIG. 8, shown is a block diagram of a micro-architecture of a processor core in accordance with a still further embodiment. As illustrated in FIG. 8, a core 800 may include a multi-stage multi-issue out-of-order pipeline to execute at very high performance levels (which may occur at higher power consumption levels than core 700 of FIG. 7). As one such example, processor 800 may have a microarchitecture in accordance with an ARM Cortex A57 design. In an implementation, a 15 (or greater)-stage pipeline may be provided that is configured to execute both 32-bit and 64-bit code.
In addition, the pipeline may provide for 3 (or greater)-wide and 3 (or greater)-issue operation. Core 800 includes a fetch unit 810 that is configured to fetch instructions and provide them to a decoder/renamer/dispatcher 815, which may decode the instructions, e.g., macro-instructions of an ARMv8 instruction set architecture, rename register references within the instructions, and dispatch the instructions (eventually) to a selected execution unit. Decoded instructions may be stored in a queue 825. Note that while a single queue structure is shown for ease of illustration in FIG. 8, understand that separate queues may be provided for each of the multiple different types of execution units.[0071] Also shown in FIG. 8 is an issue logic 830 from which decoded instructions stored in queue 825 may be issued to a selected execution unit. Issue logic 830 also may be implemented in a particular embodiment with a separate issue logic for each of the multiple different types of execution units to which issue logic 830 couples.[0072] Decoded instructions may be issued to a given one of multiple execution units. In the embodiment shown, these execution units include one or more integer units 835, a multiply unit 840, a floating point/vector unit 850, a branch unit 860, and a load/store unit 870. In an embodiment, floating point/vector unit 850 may be configured to handle SIMD or vector data of 128 or 256 bits. Still further, floating point/vector execution unit 850 may perform IEEE-754 double precision floating-point operations. The results of these different execution units may be provided to a writeback unit 880. Note that in some implementations separate writeback units may be associated with each of the execution units. Furthermore, understand that while each of the units and logic shown in FIG. 8 is represented at a high level, a particular implementation may include more or different structures.[0073] Note that in a processor having asymmetric cores, such as in accordance with the micro-architectures of FIGs. 7 and 8, workloads may be dynamically swapped for power management reasons, as these cores, although having different pipeline designs and depths, may be of the same or related ISA. Such dynamic core swapping may be performed in a manner transparent to a user application (and possibly kernel also).[0074] A processor designed using one or more cores having pipelines as in any one or more of FIGs. 5-8 may be implemented in many different end products, extending from mobile devices to server systems. Referring now to FIG. 9, shown is a block diagram of a processor in accordance with another embodiment of the present invention. In the embodiment of FIG. 9, processor 900 may be a SoC including multiple domains, each of which may be controlled to operate at an independent operating voltage and operating frequency. As a specific illustrative example, processor 900 may be an Intel® Architecture Core™-based processor such as an i3, i5, i7 or another such processor available from Intel Corporation. However, other low power processors such as available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, CA, an ARM-based design from ARM Holdings, Ltd. or licensee thereof or a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, CA, or their licensees or adopters may instead be present in other embodiments such as an Apple A7 processor, a Qualcomm Snapdragon processor, or Texas Instruments OMAP processor.
Such SoC may be used in a low power system such as a smartphone, tablet computer, phablet computer, Ultrabook™ computer or other portable computing device.[0075] In the high level view shown in FIG. 9, processor 900 includes a plurality of core units 9100-910n. Each core unit may include one or more processor cores, one or more cache memories and other circuitry. Each core unit 910 may support one or more instruction sets (e.g., an x86 instruction set (with some extensions that have been added with newer versions); a MIPS instruction set; an ARM instruction set (with optional additional extensions such as NEON)) or other instruction set or combinations thereof. Note that some of the core units may be heterogeneous resources (e.g., of a different design). In addition, each such core may be coupled to a cache memory (not shown) which in an embodiment may be a shared level (L2) cache memory. A non-volatile storage 930 may be used to store various program and other data. For example, this storage may be used to store at least portions of microcode, boot information such as a BIOS, other system software or so forth.[0076] Each core unit 910 may also include an interface such as a bus interface unit to enable interconnection to additional circuitry of the processor. In an embodiment, each core unit 910 couples to a coherent fabric that may act as a primary cache coherent on-die interconnect that in turn couples to a memory controller 935. As also described herein, each core unit 910 may include a mailbox interface to enable interaction with a corresponding core perimeter logic (not specifically shown in FIG. 9), to enable enhanced communications and provide for efficient entry into and exit from low power states, among other functions. In turn, memory controller 935 controls communications with a memory such as a DRAM (not shown for ease of illustration in FIG. 9).[0077] In addition to core units, additional processing engines are present within the processor, including at least one graphics unit 920 which may include one or more graphics processing units (GPUs) to perform graphics processing as well as to possibly execute general purpose operations on the graphics processor (so-called GPGPU operation). In addition, at least one image signal processor 925 may be present. Signal processor 925 may be configured to process incoming image data received from one or more capture devices, either internal to the SoC or off-chip.[0078] Other accelerators also may be present. In the illustration of FIG. 9, a video coder 950 may perform coding operations including encoding and decoding for video information, e.g., providing hardware acceleration support for high definition video content. A display controller 955 further may be provided to accelerate display operations including providing support for internal and external displays of a system. In addition, a security processor 945 may be present to perform security operations such as secure boot operations, various cryptography operations and so forth.[0079] Each of the units may have its power consumption controlled via a power manager 940, which may include control logic to perform the various power management techniques described herein.[0080] In some embodiments, SoC 900 may further include a non-coherent fabric coupled to the coherent fabric to which various peripheral devices may couple. One or more interfaces 960a-960d enable communication with one or more off-chip devices.
Such communications may be via a variety of communication protocols such as PCIe™, GPIO, USB, I2C, UART, MIPI, SDIO, DDR, SPI, HDMI, among other types of communication protocols. Although shown at this high level in the embodiment of FIG. 9, understand the scope of the present invention is not limited in this regard.[0081] Referring now to FIG. 10, shown is a block diagram of a representative SoC. In the embodiment shown, SoC 1000 may be a multi-core SoC configured for low power operation to be optimized for incorporation into a smartphone or other low power device such as a tablet computer or other portable computing device. As an example, SoC 1000 may be implemented using asymmetric or different types of cores, such as combinations of higher power and/or low power cores, e.g., out-of-order cores and in-order cores. In different embodiments, these cores may be based on an Intel® Architecture™ core design or an ARM architecture design. In yet other embodiments, a mix of Intel and ARM cores may be implemented in a given SoC.[0082] As seen in FIG. 10, SoC 1000 includes a first core domain 1010 having a plurality of first cores 10120-10123. In an example, these cores may be low power cores such as in-order cores that may interface with corresponding core perimeter logic via a mailbox interface as described herein. In one embodiment these first cores may be implemented as ARM Cortex A53 cores. In turn, these cores couple to a cache memory 1015 of core domain 1010. In addition, SoC 1000 includes a second core domain 1020. In the illustration of FIG. 10, second core domain 1020 has a plurality of second cores 10220-10223. In an example, these cores may be higher power-consuming cores than first cores 1012. In an embodiment, the second cores may be out-of-order cores, which may be implemented as ARM Cortex A57 cores. In turn, these cores couple to a cache memory 1025 of core domain 1020. Note that while the example shown in FIG. 10 includes 4 cores in each domain, understand that more or fewer cores may be present in a given domain in other examples.[0083] With further reference to FIG. 10, a graphics domain 1030 also is provided, which may include one or more graphics processing units (GPUs) configured to independently execute graphics workloads, e.g., provided by one or more cores of core domains 1010 and 1020. As an example, GPU domain 1030 may be used to provide display support for a variety of screen sizes, in addition to providing graphics and display rendering operations.[0084] As seen, the various domains couple to a coherent interconnect 1040, which in an embodiment may be a cache coherent interconnect fabric that in turn couples to an integrated memory controller 1050. Coherent interconnect 1040 may include a shared cache memory, such as an L3 cache, in some examples. In an embodiment, memory controller 1050 may be a direct memory controller to provide for multiple channels of communication with an off-chip memory, such as multiple channels of a DRAM (not shown for ease of illustration in FIG. 10).[0085] In different examples, the number of the core domains may vary. For example, for a low power SoC suitable for incorporation into a mobile computing device, a limited number of core domains such as shown in FIG. 10 may be present. Still further, in such low power SoCs, core domain 1020 including higher power cores may have fewer numbers of such cores. For example, in one implementation two cores 1022 may be provided to enable operation at reduced power consumption levels.
In addition, the different core domains may also be coupled to an interrupt controller to enable dynamic swapping of workloads between the different domains.[0086] In yet other embodiments, a greater number of core domains, as well as additional optional IP logic may be present, in that an SoC can be scaled to higher performance (and power) levels for incorporation into other computing devices, such as desktops, servers, high performance computing systems, base stations and so forth. As one such example, 4 core domains each having a given number of out-of-order cores may be provided. Still further, in addition to optional GPU support (which as an example may take the form of a GPGPU), one or more accelerators to provide optimized hardware support for particular functions (e.g., web serving, network processing, switching or so forth) also may be provided. In addition, an input/output interface may be present to couple such accelerators to off-chip components.[0087] Referring now to FIG. 11, shown is a block diagram of another example SoC. In the embodiment of FIG. 11, SoC 1100 may include various circuitry to enable high performance for multimedia applications, communications and other functions. As such, SoC 1100 is suitable for incorporation into a wide variety of portable and other devices, such as smartphones, tablet computers, smart TVs and so forth. In the example shown, SoC 1100 includes a central processor unit (CPU) domain 1110. In an embodiment, a plurality of individual processor cores may be present in CPU domain 1110. As one example, CPU domain 1110 may be a quad core processor having 4 multithreaded cores. Such processors may be homogeneous or heterogeneous processors, e.g., a mix of low power and high power processor cores.[0088] In turn, a GPU domain 1120 is provided to perform advanced graphics processing in one or more GPUs to handle graphics and compute APIs. A DSP unit 1130 may provide one or more low power DSPs for handling low-power multimedia applications such as music playback, audio/video and so forth, in addition to advanced calculations that may occur during execution of multimedia instructions. In turn, a communication unit 1140 may include various components to provide connectivity via various wireless protocols, such as cellular communications (including 3G/4G LTE), wireless local area protocols such as Bluetooth™, IEEE 802.11, and so forth.[0089] Still further, a multimedia processor 1150 may be used to perform capture and playback of high definition video and audio content, including processing of user gestures. A sensor unit 1160 may include a plurality of sensors and/or a sensor controller to interface to various off-chip sensors present in a given platform. An image signal processor 1170 may be provided with one or more separate ISPs to perform image processing with regard to captured content from one or more cameras of a platform, including still and video cameras.[0090] A display processor 1180 may provide support for connection to a high definition display of a given pixel density, including the ability to wirelessly communicate content for playback on such display. Still further, a location unit 1190 may include a GPS receiver with support for multiple GPS constellations to provide applications highly accurate positioning information obtained using such a GPS receiver. Understand that while shown with this particular set of components in the example of FIG. 11, many variations and alternatives are possible.[0091] Referring now to FIG.
12, shown is a block diagram of an example system with which embodiments can be used. As seen, system 1200 may be a smartphone or other wireless communicator. A baseband processor 1205 is configured to perform various signal processing with regard to communication signals to be transmitted from or received by the system. In turn, baseband processor 1205 is coupled to an application processor 1210, which may be a main CPU of the system to execute an OS and other system software, in addition to user applications such as many well-known social media and multimedia apps. Application processor 1210 may further be configured to perform a variety of other computing operations for the device.[0092] In turn, application processor 1210 can couple to a user interface/display 1220, e.g., a touch screen display. In addition, application processor 1210 may couple to a memory system including a non-volatile memory, namely a flash memory 1230 and a system memory, namely a dynamic random access memory (DRAM) 1235. As further seen, application processor 1210 further couples to a capture device 1240 such as one or more image capture devices that can record video and/or still images.[0093] Still referring to FIG. 12, a universal integrated circuit card (UICC) 1240 comprising a subscriber identity module and possibly a secure storage and cryptoprocessor is also coupled to application processor 1210. System 1200 may further include a security processor 1250 that may couple to application processor 1210. A plurality of sensors 1225 may couple to application processor 1210 to enable input of a variety of sensed information such as accelerometer and other environmental information. An audio output device 1295 may provide an interface to output sound, e.g., in the form of voice communications, played or streaming audio data and so forth.[0094] As further illustrated, a near field communication (NFC) contactless interface 1260 is provided that communicates in an NFC near field via an NFC antenna 1265. While separate antennae are shown in FIG. 12, understand that in some implementations one antenna or a different set of antennae may be provided to enable various wireless functionality.[0095] A power management integrated circuit (PMIC) 1215 couples to application processor 1210 to perform platform level power management. To this end, PMIC 1215 may issue power management requests to application processor 1210 to enter certain low power states as desired. Furthermore, based on platform constraints, PMIC 1215 may also control the power level of other components of system 1200.[0096] To enable communications to be transmitted and received, various circuitry may be coupled between baseband processor 1205 and an antenna 1290. Specifically, a radio frequency (RF) transceiver 1270 and a wireless local area network (WLAN) transceiver 1275 may be present. In general, RF transceiver 1270 may be used to receive and transmit wireless data and calls according to a given wireless communication protocol, such as a 3G or 4G wireless communication protocol, e.g., in accordance with code division multiple access (CDMA), global system for mobile communication (GSM), long term evolution (LTE) or another protocol. In addition, a GPS sensor 1280 may be present. Other wireless communications such as receipt or transmission of radio signals, e.g., AM/FM and other signals may also be provided. In addition, via WLAN transceiver 1275, local wireless communications can also be realized.[0097] Referring now to FIG.
13, shown is a block diagram of another example system with which embodiments may be used. In the illustration of FIG. 13, system 1300 may be a mobile low-power system such as a tablet computer, 2:1 tablet, phablet or other convertible or standalone tablet system. As illustrated, a SoC 1310 is present and may be configured to operate as an application processor for the device.[0098] A variety of devices may couple to SoC 1310. In the illustration shown, a memory subsystem includes a flash memory 1340 and a DRAM 1345 coupled to SoC 1310. In addition, a touch panel 1320 is coupled to the SoC 1310 to provide display capability and user input via touch, including provision of a virtual keyboard on a display of touch panel 1320. To provide wired network connectivity, SoC 1310 couples to an Ethernet interface 1330. A peripheral hub 1325 is coupled to SoC 1310 to enable interfacing with various peripheral devices, such as may be coupled to system 1300 by any of various ports or other connectors.[0099] In addition to internal power management circuitry and functionality within SoC 1310, a PMIC 1380 is coupled to SoC 1310 to provide platform-based power management, e.g., based on whether the system is powered by a battery 1390 or AC power via an AC adapter 1395. In addition to this power source-based power management, PMIC 1380 may further perform platform power management activities based on environmental and usage conditions. Still further, PMIC 1380 may communicate control and status information to SoC 1310 to cause various power management actions within SoC 1310.[0100] Still referring to FIG. 13, to provide for wireless capabilities, a WLAN unit 1350 is coupled to SoC 1310 and in turn to an antenna 1355. In various implementations, WLAN unit 1350 may provide for communication according to one or more wireless protocols.[0101] As further illustrated, a plurality of sensors 1360 may couple to SoC 1310. These sensors may include various accelerometer, environmental and other sensors, including user gesture sensors. Finally, an audio codec 1365 is coupled to SoC 1310 to provide an interface to an audio output device 1370. Of course understand that while shown with this particular implementation in FIG. 13, many variations and alternatives are possible.[0102] Referring now to FIG. 14, shown is a block diagram of a representative computer system such as a notebook, Ultrabook™ or other small form factor system. A processor 1410, in one embodiment, includes a microprocessor, multi-core processor, multithreaded processor, an ultra low voltage processor, an embedded processor, or other known processing element. In the illustrated implementation, processor 1410 acts as a main processing unit and central hub for communication with many of the various components of the system 1400. As one example, processor 1410 is implemented as a SoC.[0103] Processor 1410, in one embodiment, communicates with a system memory 1415. As an illustrative example, the system memory 1415 is implemented via multiple memory devices or modules to provide for a given amount of system memory.[0104] To provide for persistent storage of information such as data, applications, one or more operating systems and so forth, a mass storage 1420 may also couple to processor 1410.
In various embodiments, to enable a thinner and lighter system design as well as to improve system responsiveness, this mass storage may be implemented via a SSD or the mass storage may primarily be implemented using a hard disk drive (HDD) with a smaller amount of SSD storage to act as a SSD cache to enable non-volatile storage of context state and other such information during power down events so that a fast power up can occur on re-initiation of system activities. Also shown in FIG. 14, a flash device 1422 may be coupled to processor 1410, e.g., via a serial peripheral interface (SPI). This flash device may provide for nonvolatile storage of system software, including a basic input/output software (BIOS) as well as other firmware of the system.[0105] Various input/output (I/O) devices may be present within system 1400. Specifically shown in the embodiment of FIG. 14 is a display 1424 which may be a high definition LCD or LED panel that further provides for a touch screen 1425. In one embodiment, display 1424 may be coupled to processor 1410 via a display interconnect that can be implemented as a high performance graphics interconnect. Touch screen 1425 may be coupled to processor 1410 via another interconnect, which in an embodiment can be an I2C interconnect. As further shown in FIG. 14, in addition to touch screen 1425, user input by way of touch can also occur via a touch pad 1430 which may be configured within the chassis and may also be coupled to the same I2C interconnect as touch screen 1425.[0106] For perceptual computing and other purposes, various sensors may be present within the system and may be coupled to processor 1410 in different manners. Certain inertial and environmental sensors may couple to processor 1410 through a sensor hub 1440, e.g., via an I2C interconnect. In the embodiment shown in FIG. 14, these sensors may include an accelerometer 1441, an ambient light sensor (ALS) 1442, a compass 1443 and a gyroscope 1444. Other environmental sensors may include one or more thermal sensors 1446 which in some embodiments couple to processor 1410 via a system management bus (SMBus).[0107] Also seen in FIG. 14, various peripheral devices may couple to processor 1410 via a low pin count (LPC) interconnect. In the embodiment shown, various components can be coupled through an embedded controller 1435. Such components can include a keyboard 1436 (e.g., coupled via a PS2 interface), a fan 1437, and a thermal sensor 1439. In some embodiments, touch pad 1430 may also couple to EC 1435 via a PS2 interface. In addition, a security processor such as a trusted platform module (TPM) 1438 may also couple to processor 1410 via this LPC interconnect.[0108] System 1400 can communicate with external devices in a variety of manners, including wirelessly. In the embodiment shown in FIG. 14, various wireless modules, each of which can correspond to a radio configured for a particular wireless communication protocol, are present. One manner for wireless communication in a short range such as a near field may be via an NFC unit 1445 which may communicate, in one embodiment, with processor 1410 via an SMBus. Note that via this NFC unit 1445, devices in close proximity to each other can communicate.[0109] As further seen in FIG. 14, additional wireless units can include other short range wireless engines including a WLAN unit 1450 and a Bluetooth unit 1452.
Using WLAN unit 1450, Wi-Fi™ communications can be realized, while via Bluetooth unit 1452, short range Bluetooth™ communications can occur. These units may communicate with processor 1410 via a given link.[0110] In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, can occur via a WWAN unit 1456 which in turn may couple to a subscriber identity module (SIM) 1457. In addition, to enable receipt and use of location information, a GPS module 1455 may also be present. Note that in the embodiment shown in FIG. 14, WWAN unit 1456 and an integrated capture device such as a camera module 1454 may communicate via a given link.[0111] An integrated camera module 1454 can be incorporated in the lid. To provide for audio inputs and outputs, an audio processor can be implemented via a digital signal processor (DSP) 1460, which may couple to processor 1410 via a high definition audio (HDA) link. Similarly, DSP 1460 may communicate with an integrated coder/decoder (CODEC) and amplifier 1462 that in turn may couple to output speakers 1463 which may be implemented within the chassis. Similarly, amplifier and CODEC 1462 can be coupled to receive audio inputs from a microphone 1465 which in an embodiment can be implemented via dual array microphones (such as a digital microphone array) to provide for high quality audio inputs to enable voice-activated control of various operations within the system. Note also that audio outputs can be provided from amplifier/CODEC 1462 to a headphone jack 1464. Although shown with these particular components in the embodiment of FIG. 14, understand the scope of the present invention is not limited in this regard.[0112] Embodiments may be implemented in many different system types. Referring now to FIG. 15, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 15, multiprocessor system 1500 is a point-to-point interconnect system, and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. As shown in FIG. 15, each of processors 1570 and 1580 may be multicore processors, including first and second processor cores (i.e., processor cores 1574a and 1574b and processor cores 1584a and 1584b), although potentially many more cores may be present in the processors. Such processor cores may couple to corresponding core perimeter logics 1577a and 1577b and core perimeter logics 1587a and 1587b to enable efficient communication of context and other information, both for purposes of low power state entry and exit as well as for communication of information during normal operation. In addition, core perimeter logics 1577a, 1577b and 1587a, 1587b may receive interrupt information while the associated core is in a low power state and provide a matching indication, to enable interrupt handling with reduced latency. Each of the processors can include a PCU or other power management logic to perform processor-based power management as described herein.[0113] Still referring to FIG. 15, first processor 1570 further includes a memory controller hub (MCH) 1572 and point-to-point (P-P) interfaces 1576 and 1578. Similarly, second processor 1580 includes a MCH 1582 and P-P interfaces 1586 and 1588. As shown in FIG. 15, MCHs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors.
First processor 1570 and second processor 1580 may be coupled to a chipset 1590 via P-P interconnects 1562 and 1564, respectively. As shown in FIG. 15, chipset 1590 includes P-P interfaces 1594 and 1598.[0114] Furthermore, chipset 1590 includes an interface 1592 to couple chipset 1590 with a high performance graphics engine 1538, by a P-P interconnect 1539. In turn, chipset 1590 may be coupled to a first bus 1516 via an interface 1596. As shown in FIG. 15, various input/output (I/O) devices 1514 may be coupled to first bus 1516, along with a bus bridge 1518 which couples first bus 1516 to a second bus 1520. Various devices may be coupled to second bus 1520 including, for example, a keyboard/mouse 1522, communication devices 1526 and a data storage unit 1528 such as a disk drive or other mass storage device which may include code 1530, in one embodiment. Further, an audio I/O 1524 may be coupled to second bus 1520. Embodiments can be incorporated into other types of systems including mobile devices such as a smart cellular telephone, tablet computer, netbook, Ultrabook™, or so forth.[0115] As will be described herein, in various embodiments when a processor is in a package deep low power state and an interrupt is received, a targeted core can be caused to be woken in parallel with the wake activities of a fabric domain (such as a voltage ramp up time for the fabric domain). In addition to wakeup activities within the targeted core and fabric domain, voltage regulators to supply voltages to such components may also be controlled in parallel to increase their voltage capabilities. As such, all core exit dependencies are eliminated with respect to a fabric domain.[0116] To realize such operation, embodiments enable an identifier of a core targeted by an interrupt to be provided to a power controller of the processor in an early manner, to allow the power controller to determine the appropriate core to be caused to exit from the low power state in parallel with the fabric domain wake up. This determination by the power controller may be based at least in part on a broadcast message sent to power management agents associated with all cores via a separate interconnect mechanism than the fabric domain.[0117] Referring now to FIG. 16, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown in FIG. 16, processor 1600 includes a core 1610 and various core perimeter logic. Understand that for ease of illustration only a single core 1610 is shown. However, in many embodiments a multicore processor includes a plurality of cores, each with its own core perimeter logic. In the high level shown in FIG. 16, the components of processor 1600 all may be implemented on a single semiconductor die. As seen, core 1610 includes a storage 1615, which in an embodiment may be a static random access memory (SRAM) in which various context or state information of the core is stored. Note that the terms "state information" and "context information" are used interchangeably herein, and refer to information such as control register values, data information, register- stored information, and other information associated with a thread being executed on a core or other logic. Such information can be saved when the corresponding thread is switched from the core, e.g., due to entry into a low power state or migration to another core.[0118] In an embodiment, storage 1615 may be configured to remain powered on while the core is in certain low power states. 
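The concurrent core and fabric wake just described can be modeled in a few lines of C. Everything below is a hypothetical software model of hardware/firmware behavior: the structure fields and the wake helpers are invented, and in silicon the fabric voltage ramp and the core wake proceed truly in parallel rather than as sequential function calls.

```c
#include <stdint.h>
#include <stdbool.h>

/* One per-core power management agent holding the core's APIC ID. */
struct pma { uint32_t apic_id; bool core_sleeping; };

static void wake_core(struct pma *p) { p->core_sleeping = false; }
static void wake_fabric(void)        { /* ramp fabric voltage/clock */ }

/* Model of the power controller receiving an early wake + APIC ID:
 * it starts the fabric wake and, concurrently, broadcasts the APIC ID
 * so the matching agent can begin waking its core, removing the core's
 * exit dependency on the fabric domain. */
static void pcu_handle_wake(struct pma *agents, int n, uint32_t target_apic)
{
    wake_fabric();                      /* begin fabric wake ...       */
    for (int i = 0; i < n; i++)         /* ... and, in parallel,       */
        if (agents[i].apic_id == target_apic)
            wake_core(&agents[i]);      /* wake the targeted core      */
}
```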
As an example, storage 1615 may maintain information while a core is in a given low power state (e.g., C6) and the processor package is in a package active state (C0). However, in other low power states, such power may not be available, and the context information may be sent to other storages as described herein. Core 1610 further includes an intra-die interconnect (IDI) interface 1618 to interface with an IDI 1670. Although not shown for ease of illustration, understand that IDI 1670 may couple core 1610 with various other circuitry within the processor (not shown for ease of illustration in FIG. 16), including one or more other cores, a peripheral controller hub (PCH), one or more cache memories and/or other uncore circuitry. To provide for an interface between core 1610 and other components within the processor that may operate at different frequencies, a clock crossing logic 1619 may be provided, which in an embodiment may be implemented as a bubble generator first in first out (FIFO) buffer.[0119] To enable core 1610 to enter into particular and deeper low power states when available, a first core perimeter logic, namely a fabric interface logic (FIL) 1620, is coupled to core 1610. FIL 1620 may be of a first sustain power domain, in that it is provided with power and clock signals when at least portions of the processor are in a low power state. As seen, FIL 1620 couples to core 1610 via both IDI 1670 and a second interconnect 1675, which in an embodiment is a control register interconnect (CRi). Interconnect 1675 may be a relatively simple and low performance interconnect to provide for communication of state information during save and restore operations for low power state entry and exit.[0120] In the embodiment shown in FIG. 16, FIL 1620 includes a storage 1622, which may be implemented as a plurality of registers configured to store the state information received from core 1610 prior to the core's entry into a given low power state. Power may be maintained to FIL 1620 until the processor package enters a deeper package low power state (e.g., a package C6 state) when a coherent fabric enters a low power state. As further shown, FIL 1620 includes a monitor logic 1624, an interrupt control logic 1626, and a snoop response logic 1628. In general, monitor logic 1624 may be configured, when core 1610 is in a low power state, to monitor one or more monitored locations for an update to a value stored therein. Upon such update, FIL 1620 may communicate a wakeup request to core 1610. In an embodiment, monitor logic 1624 may thus be configured to implement MONITOR/MWAIT operations while core 1610 is in a low power state. In turn, interrupt control logic 1626 may be configured to handle incoming interrupts while core 1610 is in a low power state. Such handling may include delaying the interrupt and/or sending a response to the interrupt. Still further, in some cases the handling may include causing core 1610 to wake up to handle the interrupt. Note that with the concurrent core and fabric wakeup described herein, in many situations core 1610 may be fully awake by the time an interrupt is received in FIL 1620. Also, FIL 1620 includes a snoop response logic 1628, which may be configured to send a snoop response to a snoop request that is incoming while core 1610 is in a low power state.
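The three roles FIL 1620 plays while its core sleeps (address monitoring, interrupt handling, and snoop responses) can be summarized as an event dispatcher. The following C sketch is a conceptual model only; the event encoding and helper functions are invented, and real fabric interface logic is implemented in hardware.

```c
#include <stdbool.h>
#include <stdint.h>

enum event { EV_MONITOR_WRITE, EV_INTERRUPT, EV_SNOOP };

struct fil { bool core_asleep; uint64_t monitored_addr; };

static void send_wake(struct fil *f)  { f->core_asleep = false; }
static void snoop_miss_response(void) { /* reply: no cache line held */ }

/* Dispatch an incoming event on behalf of a sleeping core. */
static void fil_event(struct fil *f, enum event ev, uint64_t addr)
{
    if (!f->core_asleep)
        return;                    /* awake core handles events itself */
    switch (ev) {
    case EV_MONITOR_WRITE:         /* MONITOR/MWAIT-armed location     */
        if (addr == f->monitored_addr)
            send_wake(f);
        break;
    case EV_INTERRUPT:             /* wake core to take the interrupt  */
        send_wake(f);
        break;
    case EV_SNOOP:                 /* sleeping core holds no lines     */
        snoop_miss_response();
        break;
    }
}
```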
That is, because there is no corresponding cache line present for a snoop request when the core is in a low power state, snoop response logic 1628 thus may send a response to indicate that core 1610 does not include a copy of a cache line associated with the snoop request.[0121] Still referring to FIG. 16, an additional core perimeter logic is a chassis adapter block (CAB) unit 1630. In general, CAB unit 1630 may be configured to provide an interface to other processor and system components via a sideband interconnect 1690, which may be a power management sideband interconnect. Still further, CAB unit 1630 may be configured to store state information of core 1610 when FIL 1620 itself is placed into a low power state. CAB unit 1630 may be of a second sustain power domain, in that it is provided with power and clock signals when other portions of processor 1600 (including FIL 1620) are in a low power state. CAB unit 1630 includes a storage 1632 that may be configured to store the state information obtained from FIL 1620. This state information may include a current or active advanced programmable interrupt controller (APIC) identifier (ID) for core 1610, to enable CAB unit 1630, and more specifically a power management agent (PMA) 1634, to respond to broadcast wake/APIC ID messages. In an embodiment, storage 1632 of CAB unit 1630 may be a fast storage array, e.g., implemented as an SRAM.[0122] In the embodiment shown, CAB unit 1630 includes a PMA 1634 and a fuse puller logic 1636 that may include one or more finite state machines (FSMs) to perform save and restore operations, both with regard to storage 1622 and more distant portions of a memory hierarchy (e.g., a system memory) when CAB unit 1630 itself is to be placed into a low power state. For example, the information stored in storage 1622 may be flushed to system memory when the processor package enters a still deeper package low power state (e.g., a package C10 state). In an embodiment, these FSMs may be system on chip (SoC)-based FSMs as they enable interaction between core perimeter logic and other portions of an SoC (and onto further portions of a memory hierarchy). Note that PMA 1634 may be a portion of power management logic of a processor that may be active when CAB unit 1630 is powered on. In some cases, PMA 1634 may interface with a main power controller of a processor such as a PCU or other power management entity. CAB unit 1630 further includes an event blocking logic 1638, which may be configured to block incoming events when the processor is in particular low power states. Still further, CAB unit 1630 also includes a sideband interface 1639, which may interface with sideband interconnect 1690.[0123] In an embodiment, storage 1632 of CAB unit 1630 may be allowed to be accessed by PMA 1634 or by a verified access received via sideband interface 1639. In one such embodiment, this interface may include a security attribute identifier (SAI) logic to determine whether an access request to storage 1632 has a valid SAI security protection (e.g., a SAI value received with the request matches a SAI value associated with the storage location to be accessed). As such, storage 1632 may be secured to store sensitive content.[0124] In an embodiment, appropriate clocking logic may be applied to the various core perimeter logics to enable the storages and logic therein to be accessed in particular low power states. In an embodiment, double clocking logic may be applied to the storages of the sustain power domains. 
As one example, a cache coherent fabric (CCF) clock may be provided to the storages for standard read/write operations. In turn, a CRi clock may be provided to the storages for save/restore operations.[0125] Understand that a processor may include additional components and circuitry. In the illustration of FIG. 16, processor 1600 further includes a power delivery unit 1640, which in an embodiment may include one or more fully integrated voltage regulators, a clock circuit 1650, which in an embodiment may be implemented as a phase locked loop, and a digital thermal sensor 1660. As seen, each of these components may communicate with the other components of processor 1600 via interconnect 1675. Understand while shown with this particular processor implementation in FIG. 16, many variations and alternatives are possible. [0126] Referring now to FIG. 17, shown is a flow diagram of a method in accordance with an embodiment of the present invention. As shown in FIG. 17, method 1700 may be performed by hardware, software, firmware, and/or combinations thereof, such as an input/output hub (IOH) or other externally facing hub or agent that is configured to receive incoming communications from off-chip sources, and send outgoing traffic to various system components. To this end, method 1700 of FIG. 17 may be performed by interface circuitry within such IOH. In various embodiments, such interface circuitry may be implemented as hardware, software, firmware and/or combinations thereof.[0127] As illustrated, method 1700 begins by receiving an interrupt in the IOH (block 1710). More specifically, this interrupt may be received from a given off-chip source, while the processor is in a package deep sleep state. Although the scope of the present invention is not limited in this regard, such package deep sleep state may be an ACPI package C6 state or an even deeper package C-state, such as a package C8 state.[0128] Next, control passes to block 1720 where a wake signal and an advanced programmable interrupt controller (APIC) ID are sent from the IOH to a power control unit of the processor. More specifically, the APIC ID is an identifier of the core to which the interrupt is directed, in instances where the received interrupt includes such information as to a requested destination for handling the interrupt. In other cases, it is possible that an incoming interrupt, while destined for the processor, does not include an indication of a specific core or other agent to handle the interrupt.[0129] Still with reference to FIG. 17, next at diamond 1730 it can be determined whether a predetermined interval of time has elapsed since sending this wake signal. In an embodiment, this predetermined time interval may be set, e.g., within a control register of the IOH. The value of this predetermined time interval may correspond to a minimum guaranteed latency from communication of the wake signal until the IOH is to send the interrupt to further circuitry of the processor. In an embodiment, this predetermined time interval may be on the order of approximately 20 microseconds to several minutes. Finally, method 1700 concludes by sending the interrupt to an uncore logic of the processor to be forwarded to its selected destination. This communication of the interrupt may include various information regarding the interrupt, including identification of source and destination, among other information to be used for handling the interrupt. Understand while shown at this high level in the embodiment of FIG. 
17, many variations and alternatives are possible. For example, although a specific interrupt destination core identifier, namely an APIC ID is described, in other cases another type of identifier such as another interrupt destination identifier can be used for broadcast and match operations as described herein. For example, note that a received interrupt can be of many different types. As one example, an incoming interrupt may be a Message Signaled Interrupt (MSI) that is directed to a particular core by way of a destination core identifier. This destination core identifier may, in an embodiment, correspond to an APIC ID, and can be extracted from a destination ID field of the MSI itself.[0130] Referring now to FIG. 18, shown is a flow diagram of a method in accordance with another embodiment of the present invention. More specifically, method 1800 shown in FIG. 18 may be implemented by hardware, software, firmware and/or combinations thereof, such as hardware logic of a PCU. As illustrated, method 1800 begins by receiving a wake signal and APIC ID in the PCU (block 1810). Understand that this combination of wake signal and APIC ID may be received from an IOH or other external interface of a processor (such as described above with regard to method 1700 of FIG. 17).[0131] Still with reference to FIG. 18, next the PCU may send a wake signal and the received APIC ID to power management agents (PMAs) associated with the various cores of the processor (block 1820). In an embodiment, such communication may be via a power management sideband interconnect. By communication of this wake signal with corresponding APIC ID, the PCU is requesting an indication from the PMA associated with the core matching the APIC ID sent with the wake signal.[0132] Still referring to FIG. 18, next at block 1830 a fabric wake process may be triggered. More specifically, in an embodiment the PCU may initiate a fabric wake process for at least portions of a coherent fabric of the processor. In some instances, this fabric wake process may be triggered by sending a fabric wake signal, e.g., to fabric interface logic (FIL) of individual cores, by a broadcast mechanism. In other embodiments, this fabric wake process can further be issued to portions of the coherent fabric itself to enable such circuitry to be placed into an active state. [0133] Still referring to FIG. 18, at block 1840 the PCU may receive an indication of the matching core and a wake request from the PMA of the matching core. Note that this matching core indication is thus a response to the wake signal with APIC ID sent in block 1820. As such, at this point the PCU has determined the appropriate core to place into an active state to handle the interrupt. Responsive to this wake request, control passes to block 1850 where a core wake process can be triggered for the indicated core. Understand that while FIG. 18 shows a sequential flow, in some cases operations may occur in a different sequence. For example, in some cases it is possible that the fabric wake process may be triggered before the broadcast of the wake signal and corresponding APIC ID (effectively reversing sequence of blocks 1820 and 1830). Of course other examples also are possible.[0134] Referring now to FIG. 19, shown is a flow diagram of a method in accordance with yet another embodiment of the present invention. As shown in FIG. 
19, method 1900 may be performed by core internal circuitry, along with closely associated circuitry of the core (such as FIL and/or PMA).[0135] As seen, method 1900 begins by receiving a wake signal and APIC ID in the PMA via a power management sideband interconnect (block 1910). As described above, receipt of this information may be responsive to a global broadcast of this information to all cores of a processor. Next, the PMA can determine whether the APIC ID received with this wake signal matches the APIC ID of the associated core (diamond 1920). In an embodiment, this determination may be based on an APIC ID stored in a storage of the PMA, which is the current valid APIC ID for the core. If no match is determined, no further operation occurs within this core with regard to the current interrupt.[0136] Still referring to FIG. 19, instead if the APIC ID matches, control passes to block 1930. At block 1930 a wakeup request is sent to the PCU. In an embodiment, this wakeup request can be sent via the power management sideband interconnect. Of course other links can be used to send the wakeup request. This wakeup request includes an identifier for the corresponding core, to indicate that this core is the one having the currently matching APIC ID assigned to it. In some cases this core identifier may be different than the APIC ID, such as a static core identifier. [0137] Still with reference to FIG. 19, at block 1940 a wake trigger is received in the FIL and the core via the power management sideband interconnect. This trigger may be received substantially concurrently in the FIL and core. Responsive to receipt of this trigger, various operations can be performed both in the FIL and the core to enter into an active state (block 1950). When in an active state, at block 1960 the core may receive the interrupt and associated information via the active fabric. Note that by using embodiments of the present invention, greatly reduced latency can be realized from time of receipt of an interrupt in a processor to its delivery to an indicated core when that core (and potentially the processor itself) is in a deep low power state, such as a package deep low power state. Understand while shown at this high level in the embodiment of FIG. 19, many variations and alternatives are possible.[0138] Referring now to FIG. 20, shown is a timing diagram illustrating operations within a processor responsive to receipt of an interrupt from an external source, while the processor is in a package deep low power state (e.g., a package C8 state). As illustrated in FIG. 20, timing diagram 2000 shows receipt of an interrupt (at time 2010) in an IOH. Responsive to receipt of this interrupt, the IOH sends a wake request (which includes an APIC ID) to be broadcast to PMAs of all the cores, along with a request for waking the various circuitry of the processor, details of which are shown in FIG. 21. More specifically, this wake signal may include a request to wake a cache coherent fabric (CCF) and a CCP request.[0139] Still with reference to FIG. 20, responsive to receipt of this request in the PCU, the PCU will issue a reset request to various core circuitry, including uncore logic, caching agent, and FILs. As illustrated, this reset request may be issued in two separate stages, namely a first portion of the reset request sent at time instant 2020 to cause the various components to perform a CCF electrical wake operation. 
Thereafter, a second portion of the reset request is sent at time instant 2030 to cause a logical wake to occur within the corresponding components, and thereafter initialization of the coherent fabric (inter-die interconnect) occurs.[0140] Understand that responsive to the initial wake request, a corresponding matching core PMA can provide an indication to enable a core wakeup (not shown in FIG. 20). Responsive to receipt of this wake request in the PCU from the matching PMA, the PCU may send a similar reset request, namely a CCP electrical wake request. This request in turn next may trigger a CCP logical wake and thereafter context restore operations to occur, in which context information stored in a given storage (e.g., a C6 storage) can be obtained and restored to the core. As such, core wakeup activities proceed in parallel with fabric wakeup activities, and embodiments thus enable improved interrupt delivery with lower latency. Thus as seen, at time instant 2040, the IOH can deliver the interrupt to uncore logic, which in turn at time instant 2045 forwards the interrupt to a caching agent. Thereafter, the caching agent can directly deliver the interrupt at time instant 2050 to an already woken core.[0141] With reference now to FIG. 21, shown is a timing diagram of further details of the issuance of a wake request that is broadcast with an APIC ID. Thus as illustrated in FIG. 21, timing diagram 2100 may occur in parallel with the operation shown in FIG. 20 (and more specifically approximately within the time frame of the messages sent by the PCU at time instant 2020). First, an interrupt is received in the IOH (same as in FIG. 20), which results in issuance of the wake request with corresponding APIC ID to the PCU at time instant 2110.[0142] As shown in this further detail of FIG. 21, responsive to this request, the PCU may issue a broadcast APIC ID to all core PMAs at time instant 2115 (as received by a representative core PMA at time instant 2130). Understand that this broadcast can be sent substantially around the same time as the first reset request sent in FIG. 20 and with the corresponding wake signal. Thereafter, the core PMA may access an internal storage to determine whether it stores a matching APIC ID (as determined at block 2140). Understand that this PMA-internal storage stores a current APIC ID for the associated core (which may dynamically change during normal system operation). Assume a match is determined for the identified core in FIG. 21. In this case, the core PMA sends a wake request at time instant 2150 to the PCU with an identifier of the core. Understand that responsive to this return of the matching core with a request to wake from the low power state, the PCU can cause the identified core to exit the low power state by sending a wake signal to instruct the core to perform a wake flow to exit the low power state. While shown at this high level in FIG. 
21, various additional operations can be performed in other embodiments.[0143] The following examples pertain to further embodiments.[0144] In one example, a processor comprises: a plurality of cores, at least some of the plurality of cores having an APIC identifier associated therewith; a plurality of power management agents associated with the plurality of cores; and a power controller to receive an indication of an interrupt and a first APIC identifier and send a wake signal and the first APIC identifier to the plurality of power management agents to determine which of the plurality of cores is associated with the first APIC identifier.[0145] In an example, the power controller is to send the wake signal and the first APIC identifier when the processor is in a package low power state.[0146] In an example, responsive to the determination, the power controller is to cause a fabric coupled to the plurality of cores and the determined core associated with the first APIC identifier to wake up, while others of the plurality of cores are to remain in a low power state.[0147] In an example, a power management agent associated with the determined core is to send a wake request to the power controller, responsive to a match between the first APIC identifier and an APIC identifier of the determined core stored in a storage associated with the power management agent.[0148] In an example, the power management agent is to send the wake request via a power management sideband interconnect, while the fabric coupled to the plurality of cores is not in an active state.[0149] In an example, the power management agent is to send a core identifier with the wake request, the core identifier to identify the destined core and different than the APIC identifier associated with the destined core.[0150] In an example, the power controller is to cause the determined core and the fabric to exit a low power state concurrently.[0151] In an example, the processor of one or more of the above examples further comprises a caching agent, where the caching agent is to send the interrupt directly to the core via the fabric, after the core and the fabric have entered an active state.[0152] In an example, the processor of one or more of the above examples further comprises a plurality of adapter units associated with the plurality of cores, where the plurality of adapter units comprises the plurality of power management agents.[0153] In an example, the plurality of adapter units are to be maintained in an active state when the associated plurality of cores are in a first low power state. 
[0154] Note that the above processor can be implemented using various means.[0155] In an example, the processor comprises a SoC incorporated in a user equipment touch-enabled device.[0156] In another example, a system comprises a display and a memory, and includes the processor of one or more of the above examples.[0157] In another example, a method comprises: receiving a wake signal and an interrupt destination identifier in a power control unit of a processor while the processor is in a low power state, responsive to receipt of an interrupt in the processor; sending the wake signal and the interrupt destination identifier to a plurality of power management agents of the processor via a power management sideband interconnect; receiving an indication of a core associated with the interrupt destination identifier; and causing the core associated with the interrupt destination identifier and a fabric that couples the plurality of cores to exit the low power state concurrently.[0158] In an example, the method further comprises broadcasting the wake signal and the interrupt destination identifier to the plurality of power management agents.[0159] In an example, the method further comprises receiving the wake signal in the power control unit from an interface circuit of the processor.[0160] In an example, the method further comprises receiving the interrupt in the core associated with the interrupt destination identifier from an uncore logic coupled to the interface circuit.[0161] In an example, the method further comprises maintaining others of the plurality of cores in the low power state while causing the core and the fabric to exit the low power state.[0162] In an example, receiving the indication of the core associated with the interrupt destination identifier comprises receiving a message from a power management agent associated with the core, the message to indicate that the core is associated with the interrupt destination identifier and to request the power control unit to wake the core from the low power state. 
[0163] In another example, a computer readable medium including instructions is to perform the method of any of the above examples.[0164] In another example, a computer readable medium including data is to be used by at least one machine to fabricate at least one integrated circuit to perform the method of any one of the above examples.[0165] In another example, an apparatus comprises means for performing the method of any one of the above examples.[0166] In yet another example, a system comprises: a processor having a power controller, a core to execute instructions and a core perimeter logic coupled to the core, the core perimeter logic including a power management agent to access an interrupt destination identifier associated with the core, where the power management agent is to send a message to the power controller to request the power controller to cause the core to wake up, responsive to detection of a match between a first interrupt destination identifier of a broadcast message sent to a plurality of cores and the interrupt destination identifier associated with the core; and a DRAM coupled to the processor.[0167] In an example, the processor further comprises a fabric to couple the plurality of cores and a sideband interconnect to couple the power controller to a plurality of core perimeter logics, where the power controller is to send the broadcast message via the sideband interconnect responsive to receipt of an interrupt in the processor while the processor is in a low power state.[0168] In an example, the power controller is to cause the core and the fabric to exit the low power state concurrently.[0169] In an example, the power controller is to receive a wake signal and the interrupt destination identifier from an interface circuit of the processor, responsive to receipt of the interrupt in the processor while the processor is in a package low power state.[0170] Understand that various combinations of the above examples are possible.[0171] Embodiments may be used in many different types of systems. For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.[0172] Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. Embodiments also may be implemented in data and may be stored on a non-transitory storage medium, which if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations. Still further embodiments may be implemented in a computer readable storage medium including information that, when manufactured into a SoC or other processor, is to configure the SoC or other processor to perform one or more operations. 
The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[0173] While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention. |
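To summarize the broadcast-and-match wake flow of FIGS. 17-19 in a compact form, the following Python sketch models it as ordinary software. It is a minimal illustration only: the disclosure describes hardware logic (an IOH, a PCU, and per-core PMAs), and every name and value below (Pma, Pcu, handle_wake, the example APIC ID assignments) is a hypothetical stand-in, not part of any claimed embodiment.

```python
# Illustrative model of the broadcast-and-match wake flow of FIGS. 17-19.
# All names are hypothetical; the disclosure describes hardware, not objects.

class Pma:
    """Power management agent associated with one core (FIG. 19)."""
    def __init__(self, core_id: int, apic_id: int):
        self.core_id = core_id   # static core identifier
        self.apic_id = apic_id   # current APIC ID (may change at run time)

    def on_broadcast(self, wake_apic_id: int):
        # Diamond 1920: respond only if the broadcast APIC ID matches.
        if wake_apic_id == self.apic_id:
            return self.core_id  # block 1930: wake request with core ID
        return None              # no match: core stays in its low power state


class Pcu:
    """Power controller implementing method 1800 of FIG. 18."""
    def __init__(self, pmas):
        self.pmas = pmas

    def handle_wake(self, apic_id: int):
        # Block 1820: broadcast wake signal + APIC ID to all PMAs over the
        # power management sideband interconnect.
        matches = [pma.on_broadcast(apic_id) for pma in self.pmas]
        # Block 1830: trigger the fabric wake process in parallel.
        self.wake_fabric()
        # Blocks 1840/1850: wake only the matching core.
        for core_id in matches:
            if core_id is not None:
                self.wake_core(core_id)
                return core_id
        return None

    def wake_fabric(self):
        print("fabric: electrical wake, then logical wake (FIG. 20)")

    def wake_core(self, core_id: int):
        print(f"core {core_id}: exit low power state, restore C6 context")


# Usage: an interrupt for APIC ID 7 arrives at the IOH while the package is
# in a deep low power state; only the matching core is woken, concurrently
# with the fabric.
pcu = Pcu([Pma(core_id=i, apic_id=i + 4) for i in range(4)])
pcu.handle_wake(apic_id=7)   # wakes core 3 in parallel with the fabric wake
```

The sequential calls above stand in for operations that, per the disclosure, proceed in parallel in hardware; the sketch only captures the matching logic.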
Apparatus, articles of manufacture, and methods for managing processing units are disclosed. An example apparatus includes first processor circuitry to implement a central processing unit and second processor circuitry to perform at least one of first operations, second operations or third operations to obtain a resource request associated with a first workload; determine if a processing resource of a programmable network device is available to perform processing for the workload; determine if a second workload can be migrated from execution on the programmable network device; based on the determination that the second workload can be migrated, cause the second workload to be migrated; and cause the first workload to execute on the processing resource of the programmable network device. |
What Is Claimed Is:1. An apparatus to manage a processing unit, the apparatus comprising: first processor circuitry to implement a central processing unit; second processor circuitry including one or more of: at least one of a central processing unit, a graphic processing unit or a digital signal processor, the at least one of the central processing unit, the graphic processing unit or the digital signal processor having control circuitry to control data movement within the second processor circuitry, arithmetic and logic circuitry to perform one or more first operations corresponding to instructions, and one or more registers to store a result of the one or more first operations, the instructions in the apparatus; a Field Programmable Gate Array (FPGA), the FPGA including logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the logic gate circuitry and interconnections to perform one or more second operations, the storage circuitry to store a result of the one or more second operations; or Application Specific Integrated Circuitry (ASIC) including logic gate circuitry to perform one or more third operations; the second processor circuitry to perform at least one of the first operations, the second operations or the third operations to: obtain a resource request associated with a first workload; determine if a processing resource of a programmable network device is available to process the first workload; determine if a second workload can be migrated from execution on the programmable network device; based on the determination that the second workload can be migrated, cause the second workload to be migrated; and cause the first workload to execute on the processing resource of the programmable network device.
2. The apparatus as defined in claim 1, wherein the second processor circuitry includes an infrastructure processing unit.3. The apparatus as defined in claim 1, wherein the second processor circuitry is a component of a second programmable network device.4. The apparatus as defined in claim 1, wherein the second processor circuitry is to manage resources associated with the first processor circuitry.5. The apparatus as defined in claim 1, wherein the resource request specifies a type of processing resource to be utilized.6. The apparatus as defined in claim 1, wherein the second processor circuitry is to update a class of service for the second workload.7. The apparatus as defined in claim 1, wherein the second processor circuitry is to store an association of the first workload and the processing resource in a blockchain.8. A non-transitory computer readable medium comprising instructions that, when executed, cause a processor to at least: obtain a resource request associated with a first virtual workload; determine if a computing resource of a programmable network device is available to process the first virtual workload; determine if a second workload can be migrated from execution on the programmable network device; based on the determination that the second workload can be migrated, cause the second workload to be migrated; and cause the first virtual workload to execute on the computing resource of the programmable network device.9. The non-transitory computer readable medium as defined in claim 8, wherein the processor includes an infrastructure processing unit.10. The non-transitory computer readable medium as defined in claim 8, wherein the processor is a component of a second programmable network device.11. The non-transitory computer readable medium as defined in claim 8, wherein the processor is to manage resources associated with a processing circuitry.12. The non-transitory computer readable medium as defined in claim 8, wherein the resource request specifies a type of computing resource to be utilized.13. The non-transitory computer readable medium as defined in claim 8, wherein the instructions, when executed, cause the processor to update a class of service for the second workload.14. The non-transitory computer readable medium as defined in claim 8, wherein the instructions, when executed, cause the processor to store an association of the first virtual workload and the computing resource in a blockchain.15. A method comprising: obtaining a resource request associated with a first workload; determining if a processing resource of a programmable network device is available to process the first workload; determining if a second workload can be migrated from execution on the programmable network device; based on the determination that the second workload can be migrated, causing the second workload to be migrated; and causing the first workload to execute on the processing resource of the programmable network device.16. The method as defined in claim 15, wherein the determination if the processing resource is available is performed by an infrastructure processing unit.17. The method as defined in claim 15, wherein the determination if the processing resource is available is performed by a component of a second programmable network device.18. The method as defined in claim 15, further comprising managing resources associated with a first processing circuitry.19. 
The method as defined in claim 15, wherein the resource request specifies a type of processing resource to be utilized.20. The method as defined in claim 15, further comprising updating a class of service for the second workload.21. The method as defined in claim 15, further comprising storing an association of the first workload and the processing resource in a blockchain. |
APPARATUS, ARTICLES OF MANUFACTURE, AND METHODS FOR MANAGING PROCESSING UNITSRELATED APPLICATION[0001] This patent claims the benefit of U.S. Patent Application No. 63/222,938, which was filed on July 16, 2021. U.S. Patent Application No. 63/222,938 is hereby incorporated herein by reference in its entirety. Priority to U.S. Patent Application No. 63/222,938 is hereby claimed.FIELD OF THE DISCLOSURE[0002] This disclosure relates generally to computing systems and, more particularly, to apparatus, articles of manufacture, and methods for managing processing units.BACKGROUND[0003] Evolution in computing systems has led to the utilization of computing systems with many types of processing units. For example, the concept of XPU is directed to the utilization of application specific processing units that may be included in a computing system. For example, a computing system may include a general purpose processing unit, a graphics processing unit, and an artificial intelligence processing unit. An XPU is a cross architecture computing solution that may be tied together in a single application programming interface (e.g., the oneAPI Standard Application Programming Interface), which manages the assignment of each task to whichever processing unit is best suited to process it. For example, many cloud Service Providers (CSPs) are evolving their hardware platforms to disaggregated elements consisting of general-purpose processors, heterogeneous accelerators and purpose-built vertically integrated Infrastructure Processing Units (IPUs). Such processing units may be implemented by attached cards (e.g., peripheral component interconnect express (PCIe) attached cards), external processing units connected via a cable (e.g.,
via a Thunderbolt port), via a motherboard-down (MB-down) solution soldered or otherwise attached to the motherboard, built into a central processing unit (CPU), etc.BRIEF DESCRIPTION OF THE DRAWINGS[0004] FIG. 1 is a block diagram of an example architecture for supporting heterogenous computing.[0005] FIG. 2 is a block diagram of an example architecture for infrastructure processing unit resource direction.[0006] FIG. 3 is a flowchart representative of example machine readable instructions and/or example operations that may be executed and/or instantiated by processor circuitry to perform configuring using infrastructure processing unit resource direction.[0007] FIG. 4 is a flowchart representative of example machine readable instructions and/or example operations that may be executed and/or instantiated by processor circuitry to conduct negotiation to dynamically allocate resources based on tolerances prescribed by an application and available IPU resources.[0008] FIG. 5 illustrates an example environment in which resources managed by IPUs have various states of free and busy resources among CPU, GPU, SSD, etc.[0009] FIG. 6 illustrates an example environment in which consensus in collaborative resource management is accomplished via a decentralized public block chain ledger.[0010] FIG. 7 is a block diagram of an example processing platform including processor circuitry structured to execute the example machine readable instructions of one or more of FIGS. 3-4 to implement the example architecture of FIG. 1 and/or FIG. 2.[0011] FIG. 8 is a block diagram of an example implementation of the processor circuitry of FIG. 7.[0012] FIG. 9 is a block diagram of another example implementation of the processor circuitry of FIG. 7.
[0013] FIG. 10 is a block diagram of an example software distribution platform (e.g., one or more servers) to distribute software (e.g., software corresponding to the example machine readable instructions described herein) to client devices associated with end users and/or consumers (e.g., for license, sale, and/or use), retailers (e.g., for sale, re-sale, license, and/or sub-license), and/or original equipment manufacturers (OEMs) (e.g., for inclusion in products to be distributed to, for example, retailers and/or to other end users such as direct buy customers).[0014] In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. The figures are not to scale.DETAILED DESCRIPTION[0015] As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and/or in fixed relation to each other.[0016] Unless specifically stated otherwise, descriptors such as “first,” “second,” “third,” etc., are used herein without imputing or otherwise indicating any meaning of priority, physical order, arrangement in a list, and/or ordering in any way, but are merely used as labels and/or arbitrary names to distinguish elements for ease of understanding the disclosed examples. In some examples, the descriptor “first” may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as “second” or “third.” In such instances, it should be understood that such descriptors are used merely for identifying those elements distinctly that might, for example, otherwise share a same name.
[0017] As used herein “substantially real time” and “substantially simultaneously” refers to occurrence in a near instantaneous manner recognizing there may be real world delays for computing time, transmission, etc. Thus, unless otherwise specified, “substantially real time” and “substantially simultaneously” refer to real time +/- 1 second. As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.[0018] As used herein, “processor circuitry” is defined to include (i) one or more special purpose electrical circuits structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmed with instructions to perform specific operations and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of processor circuitry include programmed microprocessors, Field Programmable Gate Arrays (FPGAs) that may instantiate instructions, Central Processor Units (CPUs), Graphics Processor Units (GPUs), Digital Signal Processors (DSPs), XPUs, or microcontrollers and integrated circuits such as Application Specific Integrated Circuits (ASICs). For example, an XPU may be implemented by a heterogeneous computing system (e.g., a computing system having one or more heterogenous processing unit(s)) including multiple types of processor circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more DSPs, etc., and/or a combination thereof) and application programming interface(s) (API(s)) that may assign computing task(s) to whichever one(s) of the multiple types of the processing circuitry best suited to execute the computing task(s).
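As a non-authoritative illustration of the XPU concept defined above, the short Python sketch below routes each task to whichever processing circuitry is best suited, falling back to the CPU when the preferred device is absent. The task categories and the device table are invented for this example and are not drawn from the disclosure.

```python
# Minimal sketch of the XPU idea: an API layer that assigns each computing
# task to the best-suited processing circuitry. The affinity table below is
# a hypothetical example, not part of the disclosure.

DEVICE_AFFINITY = {
    "scalar": "cpu",    # general-purpose control flow
    "vector": "gpu",    # data-parallel kernels
    "matrix": "ai",     # neural-network style matrix math
    "spatial": "fpga",  # dataflow / pipeline workloads
}

def dispatch(task_kind: str, available_devices: set) -> str:
    """Pick the preferred device for a task, falling back to the CPU."""
    preferred = DEVICE_AFFINITY.get(task_kind, "cpu")
    return preferred if preferred in available_devices else "cpu"

print(dispatch("matrix", {"cpu", "gpu", "ai"}))  # -> "ai"
print(dispatch("spatial", {"cpu", "gpu"}))       # -> "cpu" (fallback)
```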
[0019] Computer components, such as components that include processors (including heterogeneous processors), and/or other computer components may use firmware for booting, initialization, and/or operation. It is desirable to provide computer components and computers with multiple processing capabilities, such as graphics and/or artificial intelligence. It is also desirable to reduce the bill of materials (BoM) and/or cost of such computing systems. Apparatus, articles of manufacture, and methods are disclosed that facilitate sharing of resources among processors, such as CPUs, GPUs, AI chips, FPGAs, ASICs, microcontrollers (e.g., embedded microcontrollers), etc. Identifying the common and/or sharable resources among the CPU and other processors in a heterogeneous processor platform (e.g., a platform including a CPU and discrete graphics) may reduce dedicated hardware usage at the platform, which may help to reduce BoM cost. Apparatus, articles of manufacture, and methods disclosed herein also improve efficiency, such as by reusing firmware and/or software (e.g., using a oneAPI library).[0020] Some cloud Service Providers (CSPs) are evolving their hardware platforms to disaggregated elements consisting of general-purpose processors, heterogeneous accelerators and purpose-built vertically integrated Infrastructure Processing Units (IPUs), XPUs, DPUs, etc. Some resource management systems (RMS) (e.g., INTEL® RDT) operate with the CPU as the control point, managing server-node-level platform resources pivoted around the CPU. Such approaches may not be scalable or even applicable to IPU-hosted microservices-based infrastructure wherein the IPU becomes the control point. IPU-based systems are disrupting the way data center resource management systems operate (e.g., moving away from the CPU as the control point to disaggregated, heterogeneous, self-manageable smart accelerators).[0021] Apparatus, articles of manufacture, and methods disclosed herein facilitate the implementation of IPU resource management systems (IPURMS) that provide distributed services. In some examples, the proposed
IPURMS provides decentralized peer-to-peer IPU resource negotiation and management without CPU-centric involvement, towards low-latency microservices. In some examples, the proposed IPURMS provides application aware resource management wherein IPUs can dynamically renegotiate RMS service level agreements (SLAs) for a variety of micro-services at run-time. In some examples, the proposed IPURMS facilitates IPU P2P negotiations and resource management tracked via a decentralized distributed public ledger, such as a blockchain, with revocation capabilities to track/record telemetry with auditability. In some examples, the proposed IPURMS includes an IPU divided into two portions, namely i) a data plane, and ii) a control plane. The control plane handles resource allocation, monitoring and policy enforcement, and the data plane handles the data flow between IPUs and the logical units associated with the IPU.[0022] A Deep Neural Network (DNN) library (e.g., a oneAPI Deep Neural Network (oneDNN) library) provides compute primitives to facilitate improved deep learning performance on CPUs and GPUs with a uniform API developed for CPUs, GPUs, etc., or any combination thereof. Existing DNN libraries detect underlying target hardware capabilities (e.g., INTEL® Deep Learning Boost technology) to accelerate inference/training performance. For example, oneDNN may utilize Just-in-Time (JIT) code generation and try to choose an instruction set architecture (ISA) or mix of ISAs based on detected target hardware features. Even though this abstraction provides the capability to take advantage of the underlying hardware, it also presents challenges. Apparatus, articles of manufacture, and methods disclosed herein provide a dynamic negotiable deep learning neural network library that facilitates a configurable and negotiable interface for application frameworks to specify an SLA to configure JIT code generation parameters at run-time. Such systems may be policy configurable with or without a platform Trusted Execution Environment (TEE), which can help to dynamically manage the kernel in terms of power, performance, energy efficiency, and optimization in addition to the pure capabilities of the hardware. Apparatus,
articles of manufacture, and methods disclosed herein filter an implementation set of parameters to identify a candidate set based on application SLA and platform information. A corresponding JIT kernel may be dynamically generated for each member of the candidate set. Apparatus, articles of manufacture, and methods disclosed herein may dry run the kernels one by one, pick out the one with the best performance (e.g., power/energy efficiency, TCO advantage, etc.), and cache it for later usage.[0023] FIG. 1 is a block diagram of an example architecture 100 that includes example optimized applications 104, example optimized middleware and frameworks 106, and example application programming interfaces (APIs) 108. In some examples, the optimized applications 104 can be implemented by applications (e.g., software applications, web- or browser-based applications, etc.) that are customized, tailored, and/or otherwise optimized to effectuate the identification and/or generation of a composable ML compute node. For example, the optimized applications 104 can be accessed, utilized, etc., by a developer (e.g., a software developer, a researcher, etc.), Information Technology (IT) personnel, etc. In some such examples, the optimized applications 104 can be accessed, utilized, etc., to co-design a hardware/software (HW/SW) solution for a technical problem that can benefit from AI/ML techniques. In some examples, the optimized middleware and frameworks 106 can be implemented by middleware and frameworks that are customized, tailored, and/or otherwise optimized to effectuate the identification and/or generation of a composable ML compute node. For example, the optimized middleware and frameworks 106 can implement an interface (e.g., communication, connectivity, etc.) between the optimized applications 104 and the APIs 108.[0024] The APIs 108 of the illustrated example can be invoked to program, develop, and/or otherwise generate an AI/ML application by at least one of direct programming or API-based programming. The APIs 108 of the illustrated example include example porting tools 110, example direct
programming APIs 112, example API-based programming APIs 114, and example analysis tools 116.[0025] In some examples, the porting tools 110 can be implemented by software (e.g., a software application) that can adapt a program for the purpose of achieving some form of execution in a first computing or electronic environment that is different from a second computing or electronic environment for which the program was originally designed. For example, the porting tools 110 can convert and/or otherwise adapt a first program developed for a first type of hardware, operating system (OS), library, etc., into a second program for a second type of hardware, OS, library, etc.[0026] In some examples, the direct programming APIs 112 can be invoked to effectuate direct programming tasks, which may include developing and/or compiling data parallel C++ applications. In some examples, the API-based programming APIs 114 can be invoked to effectuate API-based programming, which may include developing and/or compiling applications that call (or invoke, instantiate, etc.) a Math Kernel Library (MKL), an MKL Deep Neural Network (DNN) library, a data analytics acceleration library, a thread building block library, a parallel standard template library, a media software development kit (SDK), a deep learning deployment toolkit, a machine learning scaling library, etc., and/or any combination(s) thereof.[0027] In some examples, the analysis tools 116 can be called, instantiated, and/or otherwise invoked to analyze hardware, software, and/or configuration(s) thereof of a composable ML compute node. For example, the analysis tools 116 can instantiate emulator(s) to emulate all of the hardware and/or software features of the composable ML compute node to generate and/or otherwise output one or more evaluation parameters. In some such examples, the evaluation parameters can include parameters representative and/or otherwise indicative of accuracy, latency, a number of cycles to complete a workload, or throughput of the composable ML compute node. In
some examples, the evaluation parameters can include parameters representative and/or otherwise indicative of a processor or clock frequency, a fabric frequency, a read memory bandwidth, a write memory bandwidth, hardware de-rate factors, a number of memory ports, a number of data processing units (DPUs), a number of model layers (e.g., neural network layers, convolution layers, etc.), an activation precision (e.g., a precision of activation values to be processed), a weight precision (e.g., a precision of weight values to be processed), etc., and/or any combination(s) thereof. For example, the analysis tools 116 can execute an emulator based on the composable ML compute node. In some such examples, the analysis tools 116 can execute the emulator to determine a throughput of the composable ML compute node when the composable ML compute node executes a particular AI/ML model having a particular configuration.[0028] In some examples, the analysis tools 116 can instantiate simulator(s) to simulate the behavior, the configuration, etc., of a composable ML compute node to generate and/or otherwise output one or more evaluation parameters. For example, the analysis tools 116 can execute a model (e.g., a simulation model, an AI/ML model, etc.) based on the composable ML compute node. In some such examples, the analysis tools 116 can execute the model to estimate, predict, and/or otherwise determine a throughput of the composable ML compute node when the composable ML compute node executes a particular AI/ML model having a particular configuration.[0029] The architecture 100 of the illustrated example includes different types of hardware and/or software from which a composable ML compute node can be generated. In the illustrated example, the architecture 100 includes interfaces and target system software for scalar, vector, matrix, and spatial hardware. Additionally and/or alternatively, any other type of hardware may be used. In this example, the scalar hardware is implemented by an example CPU 118 and example CPU system software 120. For example, the CPU system software 120 can include instructions corresponding to a CPU Instruction Set Architecture (ISA). In this example, the vector hardware is
implemented by an example GPU 122 and example GPU system software 124. For example, the GPU system software 124 can include kernels, portion(s) of code, etc., such as kernels, compute kernels, and/or shaders. In some examples, the kernels, the portion(s) of code, etc., can be represented in a high-level programming language such as, for example, a High-Level Shader Language (HLSL), OpenCL, etc.[0030] In this example, the matrix hardware is implemented by an example AI processor 126 and example AI system software 128. For example, the AI system software 128 can include one or more AI/ML algorithms, models, etc., such as neural networks (e.g., convolution neural networks (CNNs), deep neural networks (DNNs), recurrent neural networks (RNNs), etc.), Linear Regression models, Logistic Regression Models, Decision Tree Models, Learning Vector Quantization Models, etc., and/or combination(s) thereof. In this example, the spatial hardware is implemented by an example FPGA 130 and example FPGA system software 132. For example, the FPGA system software 132 can include kernels, portion(s) of code, etc., based on a hardware description language (HDL) such as Verilog.[0031] In the illustrated example, the CPU system software 120, the GPU system software 124, the AI system software 128, the FPGA system software 132, the host interface 134, and/or the level-zero interface 136 can correspond to and/or otherwise implement example system software below level zero 138. For example, system software below level zero 138 can correspond to and/or otherwise implement low-level direct-to-metal interfaces that are tailored to hardware, such as the CPU 118, the GPU 122, etc.[0032] In the illustrated example, the APIs 108 can implement example system software above level zero 140 and an example developer interface 142. For example, a developer, a user, etc., can access and/or otherwise utilize the architecture 100 by way of the APIs 108. In some examples, a developer, a user, etc., can access and/or otherwise utilize system software at a higher level than low-level direct-to-metal interfaces by way of the APIs 108. In some examples, a developer, a user, etc., can access and/or
otherwise utilize the system software below level zero 138 via the host interface 134 and/or the level-zero interface 136.[0033] The architecture 100 is well-suited for facilitating efficient utilization of the hardware, such as the CPU 118, the GPU 122, etc., by way of the APIs 108. For example, APIs may be added to the APIs 108 to facilitate and/or improve various processes. For example, disclosed examples include APIs directed to a set of library functions that may communicate with XPU hardware (e.g., to facilitate the sharing of firmware and software resources among processing units). In some disclosed examples, the APIs 108 may include platform components to support machine learning (e.g., a dynamic negotiable deep neural network platform). For example, the machine learning components of the APIs 108 may operate to improve the targeting of hardware capabilities to improve performance (e.g., improve deep learning inference performance). The disclosed API improvements (and other improvements disclosed herein) may be implemented separately and/or in combination. For example, the APIs 108 may include the APIs directed to a set of library functions that may communicate with XPU hardware to facilitate the sharing of firmware and software resources among processing units, and the APIs 108 may include the APIs to improve the targeting of hardware capabilities to improve deep learning inference performance. For example, the various improvements, when combined, may provide additive system performance increases and reduced BOM costs.[0034] FIG. 2 is a block diagram of an example architecture 200 for IPURMS. Apparatus, articles of manufacture, and methods to implement an infrastructure processing unit resource management system (IPURMS) are disclosed. While reference is made to IPUs and an IPURMS by example, any type of processing unit (e.g., a processing unit to perform infrastructure operations, cloud service provider operations, etc., that is not the processing unit(s) to perform workload operations (e.g., implement a hypervisor, execute guest workloads, execute workloads of a tenant of a cloud services platform, etc.)) and/or resource management technology may be utilized. The example IPURMS provides decentralized peer-to-peer IPU resource negotiation and
management without CPU-centric involvement to facilitate low-latency microservices and workloads such as vRAN, etc. In addition, the IPURMS provides application aware resource management wherein IPUs can dynamically renegotiate RMS SLAs for a variety of micro-services at run-time. Furthermore, the IPURMS may facilitate IPU P2P negotiations and resource management that may be tracked via a decentralized distributed public ledger, such as a blockchain, with revocation capabilities (e.g., revocation management) to track/record telemetry with auditability. In addition, the IPURMS may facilitate an IPU that is divided into two portions, namely i) a data plane, and ii) a control plane, wherein the control plane handles resource allocation, monitoring and policy enforcement, and the data plane handles the data flow between IPUs and the logical units associated with the IPU.[0035] According to the illustrated example of FIG. 2, a new workload (or VM) 202 communicates with an example orchestrator 204 to request a system with a specific SLA. The example architecture 200 includes the orchestrator 204, an example user space 208, an example XPU/IPU software domain 210, and an example IPU hardware domain 212.[0036] The example orchestrator 204 is server circuitry that negotiates with existing workloads for placement of the workloads on computing resources based on SLAs. The example orchestrator 204 communicates with one or more computing system(s) 206 to manage the assignment of workloads to computing resources.[0037] The example computing resources 206 are represented by several abstractions including a user space 208, an XPU/IPU software domain 210, and an IPU hardware domain 212. The example user space 208 includes an application A 214 and an application B 216, though any number or type of application may be included. The example user space 208 is monitored by the orchestrator 204.[0038] The example XPU/IPU software domain 210 includes an example RMS exposure 218 that is monitored by an example SLA manager 220. The example RMS exposure 218 facilitates the communication of application level information with the orchestrator 204.
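The resource-direction flow recited in the claims and reflected in the architecture 200 (obtain a resource request, check the programmable network device for a free resource, migrate a second workload when one can be migrated, then place the first workload) can be sketched as follows. This is a hedged Python illustration under assumed data structures; the ProgrammableNetworkDevice class and the priority-based migration policy are hypothetical, not prescribed by the disclosure.

```python
# Hedged sketch of the claim-15 style flow: request -> availability check ->
# optional migration of a second workload -> placement of the first workload.
# All names and the priority policy are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    priority: int  # higher value = stricter SLA (assumed policy)

@dataclass
class ProgrammableNetworkDevice:
    capacity: int
    running: list = field(default_factory=list)

    def has_free_resource(self) -> bool:
        return len(self.running) < self.capacity

    def find_migratable(self, incoming: Workload):
        # A second workload is considered migratable if its SLA priority is
        # lower than that of the incoming (first) workload.
        candidates = [w for w in self.running if w.priority < incoming.priority]
        return min(candidates, key=lambda w: w.priority) if candidates else None

def place(device: ProgrammableNetworkDevice, first: Workload) -> bool:
    """Obtain a resource request for `first` and try to place it."""
    if not device.has_free_resource():
        second = device.find_migratable(first)
        if second is None:
            return False                # no free resource, nothing migratable
        device.running.remove(second)   # cause the second workload to migrate
        print(f"migrated {second.name} off the device")
    device.running.append(first)        # first workload executes on the device
    return True

# Usage: a full device makes room for a higher-priority workload.
dev = ProgrammableNetworkDevice(capacity=1)
dev.running.append(Workload("batch-analytics", priority=1))
place(dev, Workload("vran-frontend", priority=9))  # migrates batch-analytics
```

In a fuller model, the migration step would also update the second workload's class of service and record the first workload's placement in a ledger, as the claims contemplate.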
[0039] The example IPU hardware domain 212 includes an example XPU/IPU resource monitoring 222 monitored by an example SLA manager 224, an example XPU/IPU resource enforcement 226 monitored by an example SLA manager 228, and a Punit RMS 230.[0040] The example XPU/IPU resource monitoring 222 provides resource feedback to the example RMS exposure 218 while the example XPU/IPU resource monitoring 222 and the example XPU/IPU resource enforcement 226 communicate regarding hardware policies. The example RMS exposure 218 communicates QoS hints to the example XPU/IPU resource enforcement 226 and the example XPU/IPU resource enforcement 226 communicates with the Punit RMS 230 regarding QoS hardware features. The example architecture 200 facilitates a transition from CPU-centric, single-node resource management to a scalable self-manageable XPU/IPU that can work in peer-to-peer collaboration. Consensus in such collaborative resource management may be accomplished via a centralized trust broker, a decentralized public ledger such as a blockchain as illustrated in FIG. 6, etc.[0041] Flowcharts representative of example hardware logic circuitry, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the example architecture 200 are shown in FIG. 3 and FIG. 4. The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by processor circuitry, such as the processor circuitry 712 shown in the example processor platform 700 discussed below in connection with FIG. 7 and/or the example processor circuitry discussed below in connection with FIGS. 8 and/or 9. The program may be embodied in software stored on one or more non-transitory computer readable storage media such as a compact disk (CD), a floppy disk, a hard disk drive (HDD), a solid-state drive (SSD), a digital versatile disk (DVD), a Blu-ray disk, a volatile memory (e.g., Random Access Memory (RAM) of any type, etc.), or a non-volatile memory (e.g., electrically erasable programmable read-only memory (EEPROM), FLASH memory, an HDD, an SSD, etc.) associated with processor circuitry located in one or more hardware devices, but the entire
program and/or parts thereof could alternatively be executed by one or more hardware devices other than the processor circuitry and/or embodied in firmware or dedicated hardware. The machine readable instructions may be distributed across multiple hardware devices and/or executed by two or more hardware devices (e.g., a server and a client hardware device). For example, the client hardware device may be implemented by an endpoint client hardware device (e.g., a hardware device associated with a user) or an intermediate client hardware device (e.g., a radio access network (RAN) gateway that may facilitate communication between a server and an endpoint client hardware device). Similarly, the non-transitory computer readable storage media may include one or more media located in one or more hardware devices. Further, although the example program is described with reference to the flowcharts illustrated in FIG. 3 and FIG. 4, many other methods of implementing the example architecture 200 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., processor circuitry, discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. The processor circuitry may be distributed in different network locations and/or local to one or more hardware devices (e.g., a single-core processor (e.g., a single core central processor unit (CPU)), a multi-core processor (e.g., a multi-core CPU), etc.) in a single machine, multiple processors distributed across multiple servers of a server rack, multiple processors distributed across one or more server racks, or a CPU and/or an FPGA located in the same package (e.g., the same integrated circuit (IC) package) or in two or more separate housings, etc.[0042] FIG. 3 is a flowchart representative of example machine readable instructions and/or example operations 300 that may be executed
and/or instantiated by processor circuitry to perform configuring using IPURMS.[0043] The machine readable instructions and/or the operations 300 of FIG. 3 begin at block 302, at which the example orchestrator 204 detects a new instance/application (e.g., workload 202) capable of running in a heterogeneous IPU-based datacenter platform along with resource and migration tolerance SLAs. For example, the resource requirements and tolerance may be established by a user/administrator when creating the new instance/application (e.g., using an SLA template). The orchestrator 204 determines if validation of the device and resource requirements is successful (block 304). For example, the resource requirements may be analyzed to determine if they are feasible within the constraints of the computing system. If the resource requirements are not valid and/or cannot feasibly be met by the computing system, the orchestrator 204 returns control to block 302.[0044] If the resource requirements are valid (block 304), the orchestrator 204 negotiates with the IPU control plane to identify resources for performing the new instance/application (block 306). For example, based on the type of hardware resources specified in the request (e.g., CPU, GPU, FPGA and SSD), a set of IPUs corresponding to the specified resources are selected. Then, the negotiation between the new request and the existing applications in the IPUs is started. For example, the negotiation may include making policy-based decisions using the identified resource tolerance thresholds and dynamically migrating existing workloads between IPUs to utilize all resources efficiently. Each IPU may include two portions, i) a data plane, and ii) a control plane. The control plane handles resource allocation, monitoring and policy enforcement, and the data plane handles the data flow between IPUs and the logical units associated with the IPU. An example process for negotiation is described in conjunction with FIG. 4.[0045] The orchestrator 204 determines if negotiation was successful (block 308). For example, the negotiation may be determined to be successful if the orchestrator is able to find the necessary resources within the set of IPUs. For example, in one scenario, existing applications continue to run on
the given IPUs, but there are additional resources free for the new application to be spun up. In another scenario, the orchestrator 204 negotiates with an existing application and arranges for the application to be migrated to a different set of IPUs to free resources for the new instance/application.[0046] If the negotiation is not successful (block 308), control returns to block 302 for the orchestrator 204 to look for a different set of IPUs that satisfy the resource requirements.[0047] If the negotiation is successful (block 308), the orchestrator 204 provisions the IPU/XPU resource monitoring and enforcement in the IPU control plane (block 310). Then, the orchestrator 204 configures the hardware resources on the IPU-based datacenter platform(s) for the new instance/application (block 312). Thus, the negotiation process among IPUs may enable cross-domain coordinated resource management at the datacenter level.[0048] FIG. 4 is a flowchart representative of example machine readable instructions and/or example operations 400 that may be executed and/or instantiated by processor circuitry to conduct negotiation to dynamically allocate resources based on tolerances prescribed by an application and available IPU resources.[0049] The machine readable instructions and/or the operations 400 of FIG. 4 begin at block 402, at which the orchestrator 204 detects that a user has spun up a new instance/application (e.g., a VM, an application, etc.). For example, the request may identify QoS parameters, SLA requirements, etc. For example, the QoS parameters may be set as QOS=FUNC(DEVICE REQS, FREQUENCY, CACHE, MEM-BW, POWER, IPC, CORES, STORAGE, MIGRATION-TOLERANCE). Specifying the SLA parameters enables the specification of hardware resources (e.g., CPU, GPU, FPGA, SSD and respective IPUs) within the datacenter. An example SLA template is specified as:
1. CPU:
A. FREQUENCY RANGE
B. MEMORY BANDWIDTH RANGE
C. CACHE SIZE RANGE
D. TDP RANGE
E. CORE COUNT RANGE
F. MIGRATION TOLERANCE
G. XEON IPC RANGE
2. SSD STORAGE SPACE RANGE
3. GPU CORES RANGE
4. FPGA
5. PCIE GENERATION REQUIREMENT
6. IPU CONTROL PLANE MANAGEMENT
A. NETWORK BANDWIDTH RANGE
B. QUEUE PRIORITIZATION
[0050] The orchestrator 204 validates the request (block 404). If the request is not valid, the user is prompted to provide a valid request and control returns to block 402. If the request is valid (block 404), the orchestrator 204 determines availability of computing resources (block 406). If no computing resources (e.g., IPU resources) that are willing to negotiate are available, control returns to block 402.[0051] If available computing resources that are willing to negotiate are identified (block 406), the orchestrator 204 begins negotiating with existing instances/applications that are executing on the IPUs and determines if negotiation is successful (block 408). For example, negotiation may involve identifying existing applications on an IPU that may tolerate lower resources to free resources for the new instance/application. Alternatively, negotiation may identify applications that may be migrated to other resources to free the selected resources for the new instance/application. If negotiation fails to free resources for the new instance/application, control returns to block 406 to identify different resources.[0052] If negotiation succeeds in identifying available resources for execution of the new instance/application (block 408), the orchestrator 204
determines if there are existing instances/applications to be migrated off the resources (block 410). If there are existing instances/applications to be migrated, control returns to block 406 to manage negotiation and allocation of the existing instances/applications.[0053] If existing application/instances are not to be migrated (block 410), the orchestrator 204 updates a resource allocator (e.g., a Class of Service (CLOS)) of the existing instance/application (block 412). The orchestrator 204 spins up the requested instance/application (e.g., workload 202) with the negotiated set of IPUs (block 414).[0054] FIG. 5 illustrates an example environment 500 in which resources managed by IPUs 502 (or any type of processing unit such as XPU, GPU, etc.) have various states of free and busy resources among CPU 504, GPU 506, SSD 508, etc. According to the illustrated example, APP-1 is utilizing a portion of the CPU 504, the GPU 506, and the SSD storage 508, APP-2 is utilizing a portion of the CPU 504 and the GPU 506, and APP-3 is utilizing a portion of the CPU 504 and the SSD storage 508.[0055] FIG. 6 illustrates an example environment 600 in which consensus in collaborative resource management is accomplished via a decentralized public blockchain ledger. As illustrated in FIG. 6, the operational states (e.g., state S1, state S2, state SN) of several IPUs 1 to N are recorded in the ledger. Thus, each block in a blockchain (e.g., blocks B1 to BN) can store state information that may be utilized for peer-to-peer resource negotiation. Utilizing such a blockchain facilitates a distributed collection of information that is trustable to effectively operate as a trust broker. While FIG. 2 illustrates a single centralized orchestrator 204, blockchain or other decentralized techniques may be utilized to facilitate a decentralized orchestrator that manages resources using the control plane portion of the IPUs. In such a decentralized approach, the resource management can be tracked via the decentralized public ledger with revocation capabilities to track/record telemetry with auditability. Thus, the IPUs 502 can be considered to have computing resources as well as the management Intellectual Property (IPs) for the device associated with the IPU. The control plane of the IPU hosts the
decentralized orchestrator that handles resource allocation, monitoring, and policy enforcement.[0056] From the foregoing, it will be appreciated that example systems, methods, apparatus, and articles of manufacture have been disclosed for managing the assignment of resources in systems utilizing IPUs. Disclosed systems, methods, apparatus, and articles of manufacture improve the efficiency of using a computing device by improving IPU and ingredient resource utilization, manageability with auditability, and secure metering, toward improved total cost of ownership savings. Disclosed examples facilitate fine granular resource monitoring and manageability across IPUs in hyperscale data centers. Providing application-negotiable resource monitoring and management allows for dynamic prioritization to provide deterministic performance for at-scale microservices.[0057] FIG. 7 is a block diagram of an example processor platform 700 structured to execute and/or instantiate the machine readable instructions and/or the operations of one or more of FIGS. 3-4 to implement resource management in an architecture such as the architecture 100 of FIG. 1. The processor platform 700 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a headset (e.g., an augmented reality (AR) headset, a virtual reality (VR) headset, etc.) or other wearable device, or any other type of computing device.[0058] The processor platform 700 of the illustrated example includes processor circuitry 712. The processor circuitry 712 of the illustrated example is hardware. For example, the processor circuitry 712 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry 712 may be implemented by one or more semiconductor based (e.g., silicon based) devices.[0059] The processor circuitry 712 of the illustrated example includes a local memory 713 (e.g., a cache, registers, etc.). The processor circuitry 712 of the illustrated example is in communication with a main memory including
a volatile memory 714 and a non-volatile memory 716 by a bus 718. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of RAM device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 of the illustrated example is controlled by a memory controller 717.[0060] The processor platform 700 of the illustrated example also includes interface circuitry 720. The interface circuitry 720 may be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, a Peripheral Component Interconnect (PCI) interface, and/or a Peripheral Component Interconnect Express (PCIe) interface.[0061] In the illustrated example, one or more input devices 722 are connected to the interface circuitry 720. The input device(s) 722 permit(s) a user to enter data and/or commands into the processor circuitry 712. The input device(s) 722 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.[0062] One or more output devices 724 are also connected to the interface circuitry 720 of the illustrated example. The output device(s) 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuitry 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU.
[0063] The interface circuitry 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network 726. The communication can be by, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, an optical connection, etc.[0064] The processor platform 700 of the illustrated example also includes one or more mass storage devices 728 to store software and/or data. Examples of such mass storage devices 728 include magnetic storage devices, optical storage devices, floppy disk drives, HDDs, CDs, Blu-ray disk drives, redundant array of independent disks (RAID) systems, solid state storage devices such as flash memory devices and/or SSDs, and DVD drives.[0065] The machine executable instructions 732, which may be implemented by the machine readable instructions of FIGS. 3-4, may be stored in the mass storage device 728, in the volatile memory 714, in the non-volatile memory 716, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.[0066] The processor platform 700 of the illustrated example of FIG. 7 includes example acceleration circuitry 734, which includes an example GPU 740, an example vision processing unit (VPU) 742, and an example neural network processor 744. Additionally and/or alternatively, the acceleration circuitry 734 may include any other type of hardware such as a CPU, an FPGA, an ASIC, etc. In this example, the GPU 740, the VPU 742, and the neural network processor 744 are in communication with different hardware of the processor platform 700, such as the volatile memory 714, the non-volatile memory 716, etc., via the bus 718. In this example, the neural network processor 744 may be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any
desired family or manufacturer that can be used to execute an AI model, such as a neural network.[0067] FIG. 8 is a block diagram of an example implementation of the processor circuitry 712 of FIG. 7. In this example, the processor circuitry 712 of FIG. 7 is implemented by a general purpose microprocessor 800. The general purpose microprocessor circuitry 800 executes some or all of the machine readable instructions of the flowcharts disclosed herein to effectively instantiate logic circuits to perform the operations corresponding to those machine readable instructions. For example, the microprocessor 800 may implement multi-core hardware circuitry such as a CPU, a DSP, a GPU, an XPU, etc. Although it may include any number of example cores 802 (e.g., 1 core), the microprocessor 800 of this example is a multi-core semiconductor device including N cores. The cores 802 of the microprocessor 800 may operate independently or may cooperate to execute machine readable instructions. For example, machine code corresponding to a firmware program, an embedded software program, or a software program may be executed by one of the cores 802 or may be executed by multiple ones of the cores 802 at the same or different times. In some examples, the machine code corresponding to the firmware program, the embedded software program, or the software program is split into threads and executed in parallel by two or more of the cores 802. The software program may correspond to a portion or all of the machine readable instructions and/or operations represented by one or more of the flowcharts disclosed herein.[0068] The cores 802 may communicate by a first example bus 804. In some examples, the first bus 804 may implement a communication bus to effectuate communication associated with one(s) of the cores 802. For example, the first bus 804 may implement at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the first bus 804 may implement any other type of computing or electrical bus. The cores 802 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 806. The cores 802 may output data, instructions, and/or
signals to the one or more external devices by the interface circuitry 806. Although the cores 802 of this example include example local memory 820 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 800 also includes example shared memory 810 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 810. The local memory 820 of each of the cores 802 and the shared memory 810 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory of FIG. 7). Typically, higher levels of memory in the hierarchy exhibit lower access time and have smaller storage capacity than lower levels of memory. Changes in the various levels of the cache hierarchy are managed (e.g., coordinated) by a cache coherency policy.[0069] Each core 802 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 802 includes control unit circuitry 814, arithmetic and logic (AL) circuitry (sometimes referred to as an ALU) 816, a plurality of registers 818, the L1 cache 820, and a second example bus 822. Other structures may be present. For example, each core 802 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating point unit (FPU) circuitry, etc. The control unit circuitry 814 includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core 802. The AL circuitry 816 includes semiconductor-based circuits structured to perform one or more mathematic and/or logic operations on the data within the corresponding core 802. The AL circuitry 816 of some examples performs integer based operations. In other examples, the AL circuitry 816 also performs floating point operations. In yet other examples, the AL circuitry 816 may include first AL circuitry that performs integer based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry 816 may be referred to as an Arithmetic Logic Unit (ALU). The registers 818 are
semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry 816 of the corresponding core 802. For example, the registers 818 may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 818 may be arranged in a bank as shown in FIG. 8. Alternatively, the registers 818 may be organized in any other arrangement, format, or structure including distributed throughout the core 802 to shorten access time. The second bus 822 may implement at least one of an I2C bus, a SPI bus, a PCI bus, or a PCIe bus.[0070] Each core 802 and/or, more generally, the microprocessor 800 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor 800 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The processor circuitry may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry and/or in one or more separate packages from the processor circuitry.[0071] FIG. 9 is a block diagram of another example implementation of the processor circuitry 712 of FIG. 7. In this example, the processor circuitry 712 of FIG. 7 is implemented by FPGA circuitry 900. The FPGA circuitry 900 can be used, for example, to perform operations that could
otherwise be performed by the example microprocessor 800 of FIG. 8 executing corresponding machine readable instructions. However, once configured, the FPGA circuitry 900 instantiates the machine readable instructions in hardware and, thus, can often execute the operations faster than they could be performed by a general purpose microprocessor executing the corresponding software.[0072] More specifically, in contrast to the microprocessor 800 of FIG. 8 described above (which is a general purpose device that may be programmed to execute some or all of the machine readable instructions represented by the flowcharts disclosed herein but whose interconnections and logic circuitry are fixed once fabricated), the FPGA circuitry 900 of the example of FIG. 9 includes interconnections and logic circuitry that may be configured and/or interconnected in different ways after fabrication to instantiate, for example, some or all of the machine readable instructions represented by the flowcharts disclosed herein. In particular, the FPGA circuitry 900 may be thought of as an array of logic gates, interconnections, and switches. The switches can be programmed to change how the logic gates are interconnected by the interconnections, effectively forming one or more dedicated logic circuits (unless and until the FPGA circuitry 900 is reprogrammed). The configured logic circuits enable the logic gates to cooperate in different ways to perform different operations on data received by input circuitry. Those operations may correspond to some or all of the software represented by the flowcharts disclosed herein. As such, the FPGA circuitry 900 may be structured to effectively instantiate some or all of the machine readable instructions of the flowcharts disclosed herein as dedicated logic circuits to perform the operations corresponding to those software instructions in a dedicated manner analogous to an ASIC. Therefore, the FPGA circuitry 900 may perform the operations corresponding to some or all of the machine readable instructions disclosed herein faster than the general purpose microprocessor can execute the same.[0073] In the example of FIG. 9, the FPGA circuitry 900 is structured to be programmed (and/or reprogrammed one or more times) by an end user by a hardware description language (HDL) such as Verilog. The FPGA
circuitry 900 of FIG. 9 includes example input/output (I/O) circuitry 902 to obtain and/or output data to/from example configuration circuitry 904 and/or external hardware (e.g., external hardware circuitry) 906. For example, the configuration circuitry 904 may implement interface circuitry that may obtain machine readable instructions to configure the FPGA circuitry 900, or portion(s) thereof. In some such examples, the configuration circuitry 904 may obtain the machine readable instructions from a user, a machine (e.g., hardware circuitry (e.g., programmed or dedicated circuitry) that may implement an Artificial Intelligence/Machine Learning (AI/ML) model to generate the instructions), etc. In some examples, the external hardware 906 may implement the microprocessor 800 of FIG. 8. The FPGA circuitry 900 also includes an array of example logic gate circuitry 908, a plurality of example configurable interconnections 910, and example storage circuitry 912. The logic gate circuitry 908 and interconnections 910 are configurable to instantiate one or more operations that may correspond to at least some of the machine readable instructions of FIGS. 3-4 and/or other desired operations. The logic gate circuitry 908 shown in FIG. 9 is fabricated in groups or blocks. Each block includes semiconductor-based electrical structures that may be configured into logic circuits. In some examples, the electrical structures include logic gates (e.g., AND gates, OR gates, NOR gates, etc.) that provide basic building blocks for logic circuits. Electrically controllable switches (e.g., transistors) are present within each of the logic gate circuitry 908 to enable configuration of the electrical structures and/or the logic gates to form circuits to perform desired operations. The logic gate circuitry 908 may include other electrical structures such as look-up tables (LUTs), registers (e.g., flip-flops or latches), multiplexers, etc.[0074] The interconnections 910 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL instruction language) to activate or deactivate one or more connections between one or more of the logic gate circuitry 908 to program desired logic circuits.
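To make the LUT-and-switch model above concrete, the following Python sketch emulates how a 2-input look-up table realizes an arbitrary logic function and how a programmable routing switch selects which LUT output feeds the next stage. This is a conceptual teaching aid only, assuming nothing about any real FPGA toolchain; actual devices are configured through an HDL flow such as the Verilog flow mentioned above, and all names here are hypothetical.

def make_lut2(truth_table):
    """A 2-input LUT is a 4-entry table indexed by the concatenated input bits."""
    def lut(a: int, b: int) -> int:
        return truth_table[(a << 1) | b]
    return lut

# "Program" two LUTs: list entries are the outputs for inputs 00, 01, 10, 11.
and_gate = make_lut2([0, 0, 0, 1])
xor_gate = make_lut2([0, 1, 1, 0])

def routed_output(a: int, b: int, select: bool) -> int:
    """An electrically controllable switch decides which LUT drives the output;
    flipping `select` models reprogramming one interconnection."""
    return xor_gate(a, b) if select else and_gate(a, b)

assert routed_output(1, 1, select=False) == 1  # routed through the AND LUT
assert routed_output(1, 1, select=True) == 0   # rerouted through the XOR LUT

The point of the sketch is that, as with the logic gate circuitry 908 and interconnections 910, the same fabricated structures implement different circuits depending purely on their configuration state.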
[0075] The storage circuitry 912 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 912 may be implemented by registers or the like. In the illustrated example, the storage circuitry 912 is distributed amongst the logic gate circuitry 908 to facilitate access and increase execution speed.[0076] The example FPGA circuitry 900 of FIG. 9 also includes example Dedicated Operations Circuitry 914. In this example, the Dedicated Operations Circuitry 914 includes special purpose circuitry 916 that may be invoked to implement commonly used functions to avoid the need to program those functions in the field. Examples of such special purpose circuitry 916 include memory (e.g., DRAM) controller circuitry, PCIe controller circuitry, clock circuitry, transceiver circuitry, memory, and multiplier-accumulator circuitry. Other types of special purpose circuitry may be present. In some examples, the FPGA circuitry 900 may also include example general purpose programmable circuitry 918 such as an example CPU 920 and/or an example DSP 922. Other general purpose programmable circuitry 918 may additionally or alternatively be present such as a GPU, an XPU, etc., that can be programmed to perform other operations.[0077] Although FIGS. 8 and 9 illustrate two example implementations of the processor circuitry 712 of FIG. 7, many other approaches are contemplated. For example, as mentioned above, modern FPGA circuitry may include an on-board CPU, such as one or more of the example CPU 920 of FIG. 9. Therefore, the processor circuitry 712 of FIG. 7 may additionally be implemented by combining the example microprocessor 800 of FIG. 8 and the example FPGA circuitry 900 of FIG. 9. In some such hybrid examples, a first portion of the machine readable instructions represented by the flowcharts of FIGS. 3-4 may be executed by one or more of the cores 802 of FIG. 8, a second portion of the machine readable instructions represented by the flowcharts of FIGS. 3-4 may be executed by the FPGA circuitry 900 of FIG. 9, and/or a third portion of the machine readable instructions represented by the flowcharts disclosed herein may be executed
by an ASIC. Some or all of the circuitry may be instantiated, for example, in one or more threads executing concurrently and/or in series.[0078] In some examples, the processor circuitry 712 of FIG. 7 may be in one or more packages. For example, the processor circuitry 800 of FIG. 8 and/or the FPGA circuitry 900 of FIG. 9 may be in one or more packages. In some examples, an XPU may be implemented by the processor circuitry 712 of FIG. 7, which may be in one or more packages. For example, the XPU may include a CPU in one package, a DSP in another package, a GPU in yet another package, and an FPGA in still yet another package.[0079] A block diagram illustrating an example software distribution platform 1005 to distribute software such as the example machine readable instructions 732 or machine readable instructions of one or more of FIGS. 3-4 to hardware devices owned and/or operated by third parties is illustrated in FIG. 10. The example software distribution platform 1005 may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. The third parties may be customers of the entity owning and/or operating the software distribution platform 1005. For example, the entity that owns and/or operates the software distribution platform 1005 may be a developer, a seller, and/or a licensor of software such as the example machine readable instructions 732. The third parties may be consumers, users, retailers, OEMs, etc., who purchase and/or license the software for use and/or re-sale and/or sub licensing. In the illustrated example, the software distribution platform 1005 includes one or more servers and one or more storage devices. The storage devices store the machine readable instructions 732, which may correspond to the example machine readable instructions of the flowcharts disclosed herein, as described above. The one or more servers of the example software distribution platform 1005 are in communication with a network 1010, which may correspond to any one or more of the Internet and/or any of the example networks 726 described above. In some examples, the one or more servers are responsive to requests to transmit the software to a requesting party as part of a commercial transaction. Payment for the delivery, sale, and/or license of the
software may be handled by the one or more servers of the software distribution platform and/or by a third party payment entity. The servers enable purchasers and/or licensors to download the machine readable instructions 732 from the software distribution platform 1005. For example, the software, which may correspond to the example machine readable instructions of the flowcharts disclosed herein, may be downloaded to the example processor platform 700, which is to execute the machine readable instructions. In some examples, one or more servers of the software distribution platform 1005 periodically offer, transmit, and/or force updates to the software (e.g., the example machine readable instructions 732) to ensure improvements, patches, updates, etc., are distributed and applied to the software at the end user devices.[0080] Example methods, apparatus, systems, and articles of manufacture to manage processing units are disclosed herein. Further examples and combinations thereof include the following:[0081] Example 1 includes an apparatus to manage a processing unit, the apparatus comprising first processor circuitry to implement a central processing unit, second processor circuitry including one or more of at least one of a central processing unit, a graphic processing unit or a digital signal processor, the at least one of the central processing unit, the graphic processing unit or the digital signal processor having control circuitry to control data movement within the second processor circuitry, arithmetic and logic circuitry to perform one or more first operations corresponding to instructions, and one or more registers to store a result of the one or more first operations, the instructions in the apparatus, a Field Programmable Gate Array (FPGA), the FPGA including logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the logic gate circuitry and interconnections to perform one or more second operations, the storage circuitry to store a result of the one or more second operations, or Application Specific Integrated Circuitry (ASIC) including logic gate circuitry to perform one or more third operations, the second processor circuitry to perform at least one of the first operations, the second operations or the third operations to
obtain a resource request associated with a first workload, determine if a processing resource of a programmable network device is available to process the first workload, determine if a second workload can be migrated from execution on the programmable network device, based on the determination that the second workload can be migrated, cause the second workload to be migrated, and cause the first workload to execute on the processing resource of the programmable network device.[0082] Example 2 includes the apparatus as defined in example 1, wherein the second processor circuitry includes an infrastructure processing unit.[0083] Example 3 includes the apparatus as defined in example 1, wherein the second processor circuitry is a component of a second programmable network device.[0084] Example 4 includes the apparatus as defined in example 1, wherein the second processor circuitry is to manage resources associated with the first processor circuitry.[0085] Example 5 includes the apparatus as defined in example 1, wherein the resource request specifies a type of processing resource to be utilized.[0086] Example 6 includes the apparatus as defined in example 1, wherein the second processor circuitry is to update a class of service for the second workload.[0087] Example 7 includes the apparatus as defined in example 1, wherein the second processor circuitry is to store an association of the first workload and the processing resource in a blockchain.[0088] Example 8 includes a non-transitory computer readable medium comprising instructions that, when executed, cause a processor to at least obtain a resource request associated with a first virtual workload, determine if a computing resource of a programmable network device is available to process the first virtual workload, determine if a second workload can be migrated from execution on the programmable network device, based on the determination that the second workload can be migrated, cause the
second workload to be migrated, and cause the first virtual workload to execute on the computing resource of the programmable network device.[0089] Example 9 includes the non-transitory computer readable medium as defined in example 8, wherein the processor includes an infrastructure processing unit.[0090] Example 10 includes the non-transitory computer readable medium as defined in example 8, wherein the processor is a component of a second programmable network device.[0091] Example 11 includes the non-transitory computer readable medium as defined in example 8, wherein the processor is to manage resources associated with a processing circuitry.[0092] Example 12 includes the non-transitory computer readable medium as defined in example 8, wherein the resource request specifies a type of computing resource to be utilized.[0093] Example 13 includes the non-transitory computer readable medium as defined in example 8, wherein the instructions, when executed, cause the processor to update a class of service for the second workload.[0094] Example 14 includes the non-transitory computer readable medium as defined in example 8, wherein the instructions, when executed, cause the processor to store an association of the first virtual workload and the computing resource in a blockchain.[0095] Example 15 includes a method comprising obtaining a resource request associated with a first workload, determining if a processing resource of a programmable network device is available to process the first workload, determining if a second workload can be migrated from execution on the programmable network device, based on the determination that the second workload can be migrated, causing the second workload to be migrated, and causing the first workload to execute on the processing resource of the programmable network device.[0096] Example 16 includes the method as defined in example 15, wherein the determination if the processing resource is available is performed by an infrastructure processing unit.
[0097] Example 17 includes the method as defined in example 15, wherein the determination if the processing resource is available is performed by a component of a second programmable network device.[0098] Example 18 includes the method as defined in example 15, further comprising managing resources associated with a first processing circuitry.[0099] Example 19 includes the method as defined in example 15, wherein the resource request specifies a type of processing resource to be utilized.[00100] Example 20 includes the method as defined in example 15, further comprising updating a class of service for the second workload.[00101] Example 21 includes the method as defined in example 15, further comprising storing an association of the first workload and the processing resource in a blockchain.[00102] The following claims are hereby incorporated into this Detailed Description by this reference. Although certain example systems, methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all systems, methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent. |
A memory apparatus includes an interconnect in a first dielectric above a substrate and a structure above the interconnect, where the structure includes a diffusion barrier material and covers the interconnect. The memory apparatus further includes a resistive random-access memory (RRAM) device coupled to the interconnect. The RRAM device includes a first electrode on a portion of the structure, a stoichiometric layer having a metal and oxygen on the first electrode, and a non-stoichiometric layer including the metal and oxygen on the stoichiometric layer. A second electrode including a barrier material is on the non-stoichiometric layer. In some embodiments, the RRAM device further includes a third electrode on the second electrode. To prevent uncontrolled oxidation during a fabrication process, a spacer may be directly adjacent to the RRAM device, where the spacer includes a second dielectric. |
A memory apparatus, comprising:an interconnect in a first dielectric above a substrate;a structure above the interconnect, wherein the structure comprises a diffusion barrier material, wherein the structure substantially covers the interconnect;a resistive random-access memory (RRAM) device coupled to the interconnect, the RRAM device comprising:a first electrode on a portion of the structure;a stoichiometric layer comprising metal and oxygen on the first electrode;a non-stoichiometric layer comprising the metal and oxygen on the stoichiometric layer;a second electrode comprising a barrier material on the non-stoichiometric layer; anda third electrode on the second electrode; anda spacer directly adjacent to the RRAM device, wherein the spacer comprises a second dielectric.The memory apparatus of claim 1, wherein the first electrode comprises a noble metal.The memory apparatus of claim 1 or 2, wherein the stoichiometric layer and the non-stoichiometric layer each comprise tantalum.The memory apparatus of claim 3, wherein the stoichiometric layer has a chemical composition of Ta2O5, and wherein the non-stoichiometric layer has a chemical composition of TaXOY, where O is oxygen and wherein the ratio between X and Y is between 1:1.08 and 1:1.2.The memory apparatus of one of the previous claims, wherein the non-stoichiometric layer has a gradient in oxygen concentration and wherein the concentration of oxygen decreases away from an interface between the non-stoichiometric layer and the stoichiometric layer toward the second electrode.The memory apparatus of one of the previous claims, wherein the stoichiometric layer has a thickness in the range of 2nm-5nm, wherein the non-stoichiometric layer has a thickness in the range of 5nm-15nm, and wherein the non-stoichiometric layer has a thickness that is between 2 and 3 times the thickness of the stoichiometric layer.The memory apparatus of one of the previous claims, wherein the first electrode comprises a noble metal, and wherein the second electrode comprises a noble metal.The memory apparatus of one of the previous claims, wherein the third electrode comprises tantalum or an alloy, and wherein the alloy comprises nitrogen and at least one of: tantalum, tungsten or titanium.The memory apparatus of one of the previous claims, wherein the non-stoichiometric layer has a sidewall, and wherein a portion of the non-stoichiometric layer adjacent to the sidewall is substantially oxidized.The memory apparatus of claim 9, wherein the portion of the non-stoichiometric layer adjacent to the sidewall has a lateral width of less than 3nm as measured from the sidewall.The memory apparatus according to any one of claims 1 to 10, wherein the third electrode has an outermost sidewall surface, and wherein a portion of the third electrode adjacent to the outermost sidewall surface includes oxygen.A system comprising:a processor;a radio transceiver coupled to the processor, wherein the transceiver includes a transistor comprising:a drain contact coupled to a drain;a source contact coupled to a source; anda gate contact coupled to a gate; anda resistive random-access memory (RRAM) device coupled with the drain contact, the RRAM device comprising:a first electrode above the drain contact;a stoichiometric layer comprising metal and oxygen on the first electrode;a non-stoichiometric layer comprising the metal and oxygen on the stoichiometric layer;a second electrode on the non-stoichiometric layer; anda third electrode on the second electrode; anda spacer directly
adjacent to the RRAM device, wherein the spacer comprises a second dielectric.The system of claim 12, further comprising a battery coupled to power at least one of the processor or memory.A method, comprising:forming an interconnect in a first dielectric above a substrate;forming a structure above the interconnect, wherein the structure comprises a diffusion barrier material, wherein the structure substantially covers the interconnect;forming a resistive random-access memory (RRAM) device coupled to the interconnect, the RRAM device comprising:forming a first electrode on a portion of the structure;forming a stoichiometric layer comprising metal and oxygen on the first electrode;forming a non-stoichiometric layer comprising the metal and oxygen on the stoichiometric layer;forming a second electrode comprising a barrier material on the non-stoichiometric layer; andforming a third electrode on the second electrode; andforming a spacer directly adjacent to the RRAM device, wherein the spacer comprises a second dielectric.The method of claim 14, wherein:the stoichiometric layer and the non-stoichiometric layer each comprise tantalum;the stoichiometric layer has a chemical composition of Ta2O5;the non-stoichiometric layer has a chemical composition of TaXOY, where O is oxygen and wherein the ratio between X and Y is between 1:1.08 and 1:1.2;the non-stoichiometric layer has a gradient in oxygen concentration and wherein the concentration of oxygen decreases away from an interface between the non-stoichiometric layer and the stoichiometric layer toward the second electrode; orthe first electrode comprises a noble metal. |
BACKGROUNDFor the past several decades, feature size reduction has been a key focus for industrial-scale semiconductor process development. Scaling to smaller dimensions enables a higher density of functional elements per chip, smaller chips, and also reduced cost. However, as the industry approaches the physical limits of traditional scaling, it is becoming increasingly important to look for non-traditional types of devices that can offer new functionality. One such example is non-volatile memory based on resistive random-access memory (RRAM) devices.Non-volatile on-chip embedded memory with resistive random-access memory (RRAM) devices can improve energy and computational efficiency of a system on chip (SOC). However, the technical challenges of creating an appropriate stack for fabrication of RRAM devices with high device endurance present formidable roadblocks to commercialization of this technology. Specifically, endurance refers to long term repeated switching of an RRAM device between high and low resistance states with minimal variation in switching parameters. It is highly desirable for a large number of individual RRAM devices to switch repeatedly within a given voltage and current range for functional embedded memory applications. As such, significant improvements are still needed in engineering material layer stacks for endurance improvement in RRAM devices.BRIEF DESCRIPTION OF THE DRAWINGSThe material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Also, various physical features may be represented in their simplified "ideal" forms and geometries for clarity of discussion, but it is nevertheless to be understood that practical implementations may only approximate the illustrated ideals. For example, smooth surfaces and square intersections may be drawn in disregard of finite roughness, corner-rounding, and imperfect angular intersections characteristic of structures formed by nanofabrication techniques.
Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.Figure 1A illustrates a cross-sectional view of an RRAM device including a non-stoichiometric layer, in accordance with an embodiment of the present disclosure.Figure 1B illustrates a cross-sectional view of a RRAM device including sidewall portions that include oxygen, in accordance with an embodiment of the present disclosure.Figure 1C illustrates a plan-view of the RRAM device in Figure 1A, in accordance with an embodiment of the present disclosure.Figure 2 illustrates a flow chart for a method to fabricate an RRAM device, in accordance with an embodiment of the present disclosure.Figure 3A illustrates an electrode on a metallization structure that is formed above a substrate and a first dielectric on the electrode.Figure 3B illustrates the structure of Figure 3A following a process to planarize the first dielectric and portions of the electrode.Figure 3C illustrates the structure of Figure 3B following the formation of a material layer stack for an RRAM device, on the electrode and on the first dielectric.Figure 3D illustrates the structure of Figure 3C following an etch process used to etch a plurality of layers of the material layer stack.Figure 3E illustrates the structure of Figure 3D following the formation of a dielectric spacer layer covering the RRAM device.Figure 3F illustrates the structure of Figure 3E following the formation of a second dielectric on the dielectric spacer layer and following the formation of a metallization structure on the RRAM device.Figure 4A illustrates an I-V plot, demonstrating concepts involved with filament formation and voltage cycling (reading and writing) in an RRAM device, in accordance with embodiments of the present disclosure.Figure 4B illustrates a cross-sectional view of a conductive filament formed in an RRAM device, in accordance with an embodiment of the present disclosure.Figure 4C illustrates a cross-sectional view of an RRAM device where the conductive filament is broken, in accordance with an embodiment of the present disclosure.Figure 5 illustrates a cross-sectional view of an RRAM element coupled to a drain side of a select transistor, in accordance with an embodiment of the present disclosure.Figure 6 illustrates a computing device in accordance with embodiments of the present disclosure.Figure 7 illustrates an integrated circuit (IC) structure that includes one or more embodiments of the disclosure.DESCRIPTION OF EXEMPLARY EMBODIMENTSA resistive random-access memory (RRAM) device and methods of fabrication are described. In the following description, numerous specific details are set forth, such as structural schemes and detailed fabrication methods in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as transistor operations and switching operations associated with embedded memory, are described in lesser detail in order to not unnecessarily obscure embodiments of the present disclosure.
Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.In some instances, in the following description, well-known methods and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present disclosure. Reference throughout this specification to "an embodiment" or "one embodiment" or "some embodiments" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Thus, the appearances of the phrase "in an embodiment" or "in one embodiment" or "some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment of the disclosure. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.As used in the description and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.The terms "coupled" and "connected," along with their derivatives, may be used herein to describe functional or structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical, optical, magnetic or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause and effect relationship).The terms "over," "under," "between," and "on" as used herein refer to a relative position of one component or material with respect to other components or materials where such physical relationships are noteworthy. For example, in the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. As used throughout this description, and in the claims, a list of items joined by the term "at least one of" or "one or more of" can mean any combination of the listed terms.The term "adjacent" here generally refers to a position of a thing being next to (e.g., immediately next to or close to with one or more things between them) or adjoining another thing (e.g., abutting it).The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal.
The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."The term "device" may generally refer to an apparatus according to the context of the usage of that term. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and/or passive elements, etc. Generally, a device is a three-dimensional structure with a plane along the x-y direction and a height along the z direction of an x-y-z Cartesian coordinate system. The plane of the device may also be the plane of an apparatus which comprises the device.Here, an in-plane magnet refers to a magnet that has magnetization in a direction substantially along the plane of the magnet. For example, a magnet with a magnetization which is in an x or y direction and is in a range of 0 (or 180 degrees) +/- 20 degrees relative to an x-y plane of a device.The term "free" or "unfixed" here with reference to a magnet refers to a magnet whose magnetization direction can change along its easy axis upon application of an external field or force (e.g., Oersted field, spin torque, etc.). Conversely, the term "fixed" or "pinned" here with reference to a magnet refers to a magnet whose magnetization direction is pinned or fixed along an axis and which may not change due to application of an external field (e.g., electrical field, Oersted field, spin torque,).As used throughout this description, and in the claims, a list of items joined by the term "at least one of' or "one or more of' can mean any combination of the listed terms. Unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal" and "approximately equal" mean that there is no more than incidental variation between two things so described. In the art, such variation is typically no more than +/-10% of a predetermined target value.The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms "over," "under," "front side," "back side," "top," "bottom," "over," "under," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis and therefore may be relative to an orientation of a device. Hence, a first material "over" a second material in the context of a figure provided herein may also be "under" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies.The term "between" may be employed in the context of the z-axis, x-axis or y-axis of a device. 
A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices.

Integration of a memory array with low voltage logic circuitry, such as logic circuitry operational at a voltage less than or equal to 1 Volt, may be advantageous since it enables higher operation speeds compared to having physically separate logic and memory chips. Additionally, approaches to integrating an RRAM device with a transistor to create embedded memory present material challenges that have become far more formidable with scaling. As transistor operating voltages are scaled down in an effort to become more energy efficient, RRAM devices that are connected in series with such transistors are also required to function at lower voltages and currents.

Non-volatile memory devices, such as resistive random-access memory (RRAM) devices, depend on a phenomenon of resistance switching to store information. The non-volatile memory device functions as a variable resistor where the resistance of the device may switch between a high resistance state and a low resistance state. Resistance switching relies on a change in conductivity of the RRAM device. In particular, a switching layer determines the conductivity. In one embodiment, the conductivity is modulated by formation and dissolution of a conductive filament in the switching layer.

The conductive filament may be created in an RRAM device by a one-time electro-forming process, where a voltage is applied between two electrodes located on either side of the switching layer. The electro-forming process may cause an electrical breakdown within the switching layer leading to a formation of the conductive filament. The electro-forming voltage depends on the material composition, thickness and quality of the switching layer and can dictate a size of the conductive filament being formed within the switching layer. A low electro-forming voltage is desirable as it creates a conductive filament that supports low current to switch an RRAM device between a high and a low resistance state. A low current operation is desirable from a power savings perspective. In some embodiments, the electro-forming voltage may be reduced by inserting an oxygen exchange layer between the switching layer and an electrode.

The oxygen exchange layer may be a metal that acts as a source or sink of oxygen vacancies. However, during the fabrication process, sidewall portions of the oxygen exchange layer may become damaged and oxidized. During operation, the inventors have found that while RRAM devices with a metal oxygen exchange layer may enable formation of conductive filaments at low electro-forming voltages, the RRAM devices suffer from retention failures. An example of retention failure observed is a drifting of the resistance level of an RRAM device from a programmed low resistance level to a level above a predetermined reference level over a 24-hour time period.
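As a concrete illustration of the retention-failure criterion just described, the short Python sketch below flags a cell whose programmed low-resistance level drifts above a predetermined reference level within a 24-hour window. This is a minimal sketch, not part of the disclosure: the function name, data layout and numeric values in the example are illustrative assumptions only.

    # Minimal retention-screen sketch (illustrative; not the patent's method).
    # A cell programmed to its low resistance state is read periodically; it
    # fails retention if any reading within 24 hours exceeds the reference.

    HOURS_24 = 24.0

    def fails_retention(readings, r_reference, window_hours=HOURS_24):
        """readings: list of (elapsed_hours, resistance_ohms) for one cell."""
        for elapsed_hours, resistance in readings:
            if elapsed_hours <= window_hours and resistance > r_reference:
                return True  # programmed LRS level drifted above the reference
        return False

    # Example: a cell programmed near 10 kOhm drifting past a 40 kOhm reference.
    trace = [(0.0, 10e3), (6.0, 18e3), (12.0, 35e3), (20.0, 52e3)]
    print(fails_retention(trace, r_reference=40e3))  # True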
The inventors believe that the oxygen exchange layer may suffer from oxidation during the fabrication process, which is detrimental to reliable device operation. When oxidation of a metal oxygen exchange layer is not uniform, a non-uniform partially oxidized metal film may be formed. Sidewall portions of the oxygen exchange layer may be particularly vulnerable to non-uniform partial oxidation. The non-uniformity of a partially oxidized oxygen exchange layer can lead to a high level of variability in device performance.

Increasing the size of the conductive filament formed during the electro-forming process may mitigate variability. However, during the process of resistance switching, a larger conductive filament may require a larger electrical current to dissolve and re-form compared to a relatively smaller conductive filament. A low current operation is desirable for embedded memory applications where the RRAM device may be coupled to a transistor. The maximum current delivered by the transistor to the RRAM device may not meet the threshold current requirement for filament formation and dissolution if the conductive filament formed during the electro-forming process becomes too large in size. In examples where the transistor threshold current is not a limitation, increasing filament size can also lead to endurance problems. Endurance may be defined as the number of switching cycles that an RRAM device can complete before it is stuck in the high resistance state.

The inventors have found that the problems associated with uncontrolled partial oxidation may be solved by replacing a metal oxygen exchange layer with a non-stoichiometric layer. The non-stoichiometric layer may include a metal and oxygen where the metal to oxygen ratio is not stoichiometrically proportional. The non-stoichiometric layer may include a same metal as the metal of the switching layer for added benefits such as retention improvement.

In accordance with an embodiment of the present disclosure, a memory apparatus includes an interconnect in a first dielectric above a substrate and a structure above the interconnect, where the structure includes a diffusion barrier material and covers the interconnect. The memory apparatus further includes a resistive random-access memory (RRAM) device coupled to the interconnect. The RRAM device includes a first electrode on a portion of the structure, a stoichiometric layer having a metal and oxygen on the first electrode, and a non-stoichiometric layer including the metal and oxygen on the stoichiometric layer. A second electrode including a barrier material is on the non-stoichiometric layer. In some embodiments, the RRAM device further includes a third electrode on the second electrode for fabrication advantages. To prevent uncontrolled oxidation during a fabrication process, a spacer may be directly adjacent to the RRAM device, where the spacer includes a second dielectric.

Figure 1A illustrates a cross-sectional view of a memory apparatus 100A above a substrate 101. As shown, memory apparatus 100A includes an RRAM device 102 on a portion of a structure 104. The structure 104 includes a diffusion barrier material (herein referred to as electrode 104). The RRAM device 102 includes a bottom electrode 106, and a stoichiometric layer 108 on the bottom electrode 106. The stoichiometric layer 108 supports a conductive filament during operation and is herein referred to as switching layer 108. The switching layer 108 includes a metal and oxygen in substantially stoichiometric proportions.
The RRAM device 102 further includes a non-stoichiometric layer 110 including the metal and oxygen on the switching layer 108, an electrode 112 on the non-stoichiometric layer 110 and a top electrode 114 on the electrode 112. In the illustrative embodiment, the memory apparatus 100A further includes a spacer 116 directly adjacent to the RRAM device, where the spacer 116 includes a dielectric, and a conductive interconnect 118 directly below and coupled with the barrier electrode 104.

In an embodiment, the bottom electrode 106 includes a noble metal. The noble metals Ir, Pt and Pd provide excellent resistance to oxidation. However, a ruthenium bottom electrode 106 may oxidize and remain conductive with no adverse effect on the RRAM device 102.

In an exemplary embodiment, the switching layer 108 includes oxygen and tantalum. When the switching layer 108 includes a metal such as tantalum having an oxidation state +5, the switching layer 108 has a chemical composition of Ta2O5. The thickness of the switching layer 108 may vary depending on the desired voltage operating range. In one embodiment, the switching layer 108 has a thickness of at least 1nm. In exemplary embodiments, the switching layer 108 has a thickness between 2 nm and 5 nm. The magnitude of the electro-forming voltage, discussed above, is proportional to a thickness of the switching layer 108. In some embodiments, the switching layer 108 includes a stoichiometric oxide that may not be structurally homogenous across the cross-sectional plane in Figure 1A. For example, a portion of the switching layer 108 inside a sidewall 108A may have lattice dislocations, indicative of damage during the fabrication process.

The non-stoichiometric layer 110 acts as a source of oxygen vacancies or as a sink for oxygen atoms in filamentary RRAM devices and is herein referred to as oxygen exchange layer 110. The oxygen vacancies migrate to and from the oxygen exchange layer 110 into the switching layer 108, in response to an applied voltage between the top electrode 114 and bottom electrode 106. Migration of oxygen vacancies enables resistance switching in the RRAM device 102. In an exemplary embodiment, the oxygen exchange layer 110 includes tantalum and oxygen, for example, TaXOY, where O is oxygen and where the ratio between X and Y is between 1:0.8 and 1:1.2. In some embodiments, the ratio between X and Y is substantially close to 1:0.8. In other embodiments, the ratio between X and Y is substantially close to 1:1.2. In an embodiment, the oxygen exchange layer 110 has a thickness between 5nm and 20nm. The thickness may depend on the ratio between X and Y in TaXOY. In an embodiment, the oxygen exchange layer 110 has a gradient in oxygen concentration. The concentration of oxygen may decrease from an interface 119A between the switching layer 108 and the oxygen exchange layer 110 toward interface 119B between the oxygen exchange layer 110 and the electrode 112. The oxygen gradient may be such that the ratio between X and Y in TaXOY at interface 119A is substantially close to 1:1.2 and the ratio between X and Y in TaXOY at interface 119B is substantially close to 1:0.8.

In an embodiment, the electrode 112 includes ruthenium, platinum, iridium, palladium, tungsten, tantalum or an alloy including nitrogen and at least one of Ta, Ti or W. The electrode 112 has a thickness between 2nm and 10nm that may depend on the material.
In some embodiments, metals that are difficult to pattern, for example, Pt, Ir and Pd, have a thickness that is 5nm or less.

In an embodiment, the top electrode 114 includes a metal such as Ta, Ti or W or an alloy including nitrogen and at least one of Ta, Ti or W. The top electrode 114 has a thickness that is between 20nm and 50nm. In some embodiments, such as is illustrated, the top electrode 114 has a curved outermost surface 114A.

In an embodiment, the electrode 104 includes an alloy including nitrogen and at least one of Ta, Ti or W. The electrode 104 has a cross-sectional width, WIV, as shown. In the illustrative embodiment, WIV is greater than a cross-sectional width, WBE, of the bottom electrode 106. In some embodiments, WIV may also be less than WBE. In embodiments, when WIV is less than WBE, the RRAM device 102 has portions on a dielectric 120 laterally adjacent to electrode 104.

Conductive interconnect 118 may include lateral runs (e.g., metallized trenches) and vertical runs (e.g., metallized vias). As shown, the metallization structure 118 has an uppermost surface that is coplanar or substantially co-planar with an uppermost surface of an adjacent dielectric 122. In an embodiment, the metallization structure 118 includes a barrier layer 118A, and a fill metal 118B on the barrier layer 118A. In an embodiment, the barrier layer 118A includes tantalum nitride, tantalum or ruthenium. In an embodiment, the fill metal 118B includes W, Co, Ni or Cu. In the illustrative embodiment, for example, a width, WCI, of the metallization structure 118 is representative of the largest dimension of the metallization structure 118 within the cross-sectional plane of the memory apparatus 100A. In exemplary embodiments, WCI is less than WIV. In some embodiments, WCI is substantially similar to WIV. In some such embodiments, the electrode 104 covers the conductive interconnect 118.

The spacer 116 may be utilized to prevent uncontrollable oxidation of one or more layers in the RRAM device 102. As shown, the spacer 116 is adjacent to the outermost surface 114A of the top electrode 114, including sidewall portions and a top surface. As shown, when WBE is less than WIV, the spacer 116 is also on a portion of the electrode 104. The spacer 116 includes a dielectric liner that has a composition substantially free of metal. In an exemplary embodiment, the dielectric liner includes silicon and nitrogen.

In the illustrative embodiment, the memory apparatus 100A includes a metallization contact 126. The metallization contact 126 may include a barrier layer and a fill material. In an embodiment, the composition of the barrier layer and the fill material may be substantially the same as the barrier layer 118A and the fill metal 118B.

In an embodiment, RRAM device 102 is scaled laterally in size and approaches a lateral thickness (along the Y-axis) of the metallization contact 126. In such an embodiment, any misalignment between the metallization contact 126 and the RRAM device 102 may cause portions of the metallization contact 126 to extend along a section of the outermost surface 114A and down a sidewall of the RRAM device 102. For advantageous device operation, a lowermost portion of the metallization contact 126 should not extend below the oxygen exchange layer 110. In some such embodiments, the top electrode 114 includes a conductive material that is relatively easy to pattern and is substantially thicker than 10 nm.
The top electrode 114 may be between 50nm and 100nm, in some examples, which provides sufficient thickness as an intermediate layer for coupling with the metallization contact 126 above.

In an embodiment, the dielectrics 120 and 122 include silicon and at least one of oxygen, carbon or nitrogen. In one embodiment, dielectric 120 includes silicon and nitrogen and dielectric 122 includes silicon, oxygen and nitrogen. In a different embodiment, dielectric 120 includes silicon and nitrogen and dielectric 122 includes silicon and oxygen.

In an embodiment, the substrate 101 includes a suitable semiconductor material such as, but not limited to, single crystal silicon, polycrystalline silicon and silicon on insulator (SOI). In another embodiment, substrate 101 includes other semiconductor materials such as germanium, silicon germanium or a suitable group III-N or a group III-V compound. Logic devices such as MOSFET transistors and access transistors may be formed on the substrate 101. Logic devices such as access transistors may be integrated with a plurality of memory apparatuses, each including an RRAM device 102, to form embedded memory. Embedded memory including RRAM devices and logic MOSFET transistors may be combined to form a functional integrated circuit such as a system on chip.

In some embodiments, the bottom electrode 106 and the electrode 112 include a noble metal and the top electrode 114 includes a metal having an affinity for oxygen. In some such embodiments, portions inside a sidewall of the top electrode 114 may include oxygen.

Figure 1B includes a cross-sectional illustration of a memory apparatus 100B where the bottom electrode 106 and the electrode 112 include a noble metal. In an exemplary embodiment, the bottom electrode 106 includes Ru, the electrode 112 includes a noble metal and the top electrode 114 includes at least one of Ta, TaN or TiN. In one such exemplary embodiment, the top electrode 114 has a top electrode portion 114B and a top electrode portion 114C adjacent to the top electrode portion 114B. As shown, the top electrode portion 114C is between the dashed line 130 and the outer surface 114A. The top electrode portion 114B includes tantalum and top electrode portion 114C includes tantalum and oxygen. In an embodiment, the top electrode portion 114C has an oxygen concentration that is substantially uniform. In an embodiment, the top electrode portion 114C has a lateral thickness, TTE, that is between 2nm and 5nm (across the cross-sectional plane in the Y-direction). The top electrode portion 114C may have a lateral thickness, TTE, that correlates with a vertical thickness, TBE, of the bottom electrode. For example, the greater TBE is, the greater is TTE.

In an embodiment, when the bottom electrode includes Ru, the oxygen exchange layer 110 includes an inner portion 110A and an outer portion 110B adjacent to the inner portion, as shown. In an embodiment, the inner portion is substantially non-stoichiometric and the outer portion is substantially stoichiometric. In the illustrative embodiment, where the oxygen exchange layer 110 has a chemical composition of TaXOY, the ratio between X and Y in the inner portion 110A is between 1:0.8 and 1:1.2. In one such embodiment, the outer portion 110B has a chemical composition that is Ta2O5. In an embodiment, the outer portion 110B has a chemical composition that is substantially the same as the chemical composition of the switching layer 108.
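These composition windows lend themselves to a compact numerical check. The following is a minimal Python sketch, not part of the disclosure: it classifies a measured O-to-Ta ratio against the 1:0.8 to 1:1.2 non-stoichiometric window of TaXOY and models the oxygen gradient between interfaces 119A and 119B as linear. The linear-gradient model and all names are illustrative assumptions only.

    # Composition-window sketch (illustrative assumptions, not the patent's).
    O_PER_TA_MIN, O_PER_TA_MAX = 0.8, 1.2  # TaXOY non-stoichiometric window
    O_PER_TA_STOICHIOMETRIC = 2.5          # Ta2O5, as in the switching layer

    def is_oxygen_exchange_composition(o_per_ta):
        """True if a measured O:Ta ratio lies within the 1:0.8-1:1.2 window."""
        return O_PER_TA_MIN <= o_per_ta <= O_PER_TA_MAX

    def o_per_ta_at_depth(fraction_from_interface_119A):
        """Assumed linear gradient: ~1.2 O per Ta at interface 119A (switching
        layer side) decreasing to ~0.8 O per Ta at interface 119B."""
        f = fraction_from_interface_119A
        return O_PER_TA_MAX + (O_PER_TA_MIN - O_PER_TA_MAX) * f

    print(is_oxygen_exchange_composition(1.0))  # True: within the window
    print(o_per_ta_at_depth(0.5))               # 1.0 at mid-thickness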
In an embodiment, outer portion 110B has a lateral thickness, TOEL, in a cross-sectional plane of Figure 1B that is between 1nm and 3nm. In an embodiment, TOEL correlates with TBE. For example, the greater TBE is, the greater is TOEL. A bottom electrode including Ru may have a thickness TBE that ranges between 5nm and 10nm.

In some embodiments, top electrode portion 114C adds undesirable electrical resistance to the RRAM device. Added electrical resistance increases the burden on applied voltage during operation. In some such embodiments, the contact metallization structure 126 extends into the conductive top electrode portion 114B (inside dashed line 130), as shown.

Figure 1C illustrates a plan-view of the RRAM device in Figure 1B, in accordance with an embodiment of the present disclosure. The areas of the various layers shown represent a lowermost surface of the electrode 104 and an uppermost surface of the conductive interconnect 118. The spacer 116 and metallization contact 126 are not shown for clarity. As shown, the electrode 104 has a lowermost surface area that is greater than a lowermost surface area of the conductive interconnect 118 (inside dashed lines). The electrode 104 also covers the conductive interconnect 118, as shown. A plan view area of a lowermost surface of the RRAM device 102 is also shown in the Figure. In the illustrative embodiment, the RRAM device 102 has a lowermost surface area that is less than the lowermost surface area of the electrode 104.

Figure 2 illustrates a flow chart for a method to fabricate an RRAM device, in accordance with an embodiment of the present disclosure. In an embodiment, the method 200 begins in operation 210 by forming a conductive layer above a conductive interconnect. The method 200 continues in operation 220 by forming a layer including oxygen and a metal on the conductive layer. The method 200 continues in operation 230 by forming a non-stoichiometric layer on the layer including oxygen and metal. The method 200 continues in operation 240 by forming an electrode layer on the non-stoichiometric layer. The method 200 continues in operation 250 by patterning the electrode layer, the non-stoichiometric layer, the layer including oxygen and metal, and the conductive layer to form a structure having sidewalls.

Figures 3A-3F illustrate cross-sectional views of the memory apparatus 100A illustrated in Figure 1A evolving as a fabrication method, such as method 200, is practiced.

Figure 3A illustrates a metallization structure 118 surrounded by a dielectric 122 formed above a substrate 101. In an embodiment, the metallization structure 118 is formed in a dielectric 122 by a damascene or a dual damascene process. In an embodiment, the metallization structure 118 includes a barrier layer, such as ruthenium, titanium nitride, tantalum or tantalum nitride, and a fill metal, such as cobalt, nickel, copper or tungsten. In an embodiment, the metallization structure 118 is fabricated using a subtractive etch process when materials other than copper are utilized. In some examples, the dielectric 122 includes silicon and at least one of nitrogen, oxygen or carbon. In an embodiment, the dielectric 122 has an uppermost surface 122A that is substantially co-planar with an uppermost surface 118C of the metallization structure 118. In some examples, metallization structure 118 may be electrically connected to a circuit element such as an access transistor (not shown).
Logic devices such as access transistors may be integrated with memory devices such as an RRAM device to form embedded memory.

Figure 3A also illustrates an electrode 104 formed above the conductive interconnect 118. In an embodiment, a conductive layer including a diffusion barrier material is blanket deposited on the uppermost surfaces 118C and 122A. In an embodiment, the conductive layer is patterned by forming a mask on the conductive layer and performing a plasma etch process to form electrode 104. A dielectric 120 is blanket deposited on the surface 104A of the electrode 104 and on the dielectric surface 122A. In an embodiment, the dielectric 120 is blanket deposited using a physical vapor deposition (PVD), a chemical vapor deposition (CVD), or a plasma enhanced chemical vapor deposition (PECVD) process.

Figure 3B illustrates the structure of Figure 3A following a process to planarize the first dielectric and portions of the electrode. In an embodiment, the planarization includes a chemical mechanical polish (CMP) process. In the illustrative embodiment, a CMP process is utilized to planarize the dielectric 120 and portions of the electrode 104 to form uppermost surfaces 104A and 120A that are substantially co-planar, as shown.

Figure 3C illustrates the structure of Figure 3B following the formation of a material layer stack 300, utilized in the formation of an RRAM device, on a dielectric surface 120A and on the electrode surface 104A. In an embodiment, a conductive layer 301 is blanket deposited by a physical vapor deposition (PVD), a chemical vapor deposition (CVD), a plasma enhanced chemical vapor deposition (PECVD) or an atomic layer deposition (ALD) process. In an embodiment, the conductive layer 301 includes noble metals Ir, Pt, Pd or Ru. The choice of materials utilized to form the conductive layer 301 results in conductive layer 301 having a low electrical resistivity, such as an electrical resistivity between 100 and 250 µΩ-cm.

In an embodiment, the conductive layer 301 may be planarized before deposition of additional layers of the material layer stack 300. Planarization may enable the top surface 301A of the conductive layer 301 to have a surface roughness that is less than 1nm. A surface roughness of less than 1nm enables a layer 303 having a uniform thickness to be deposited on the surface of the conductive layer 301. A uniform thickness in the layer 303 is desirable to reduce variation in forming voltage in a large collection of RRAM devices. In an embodiment, the layer 303 includes a material that is the same as or substantially the same as the material of the switching layer 108.

In some embodiments, a stoichiometric layer 303 is deposited on the conductive layer 301 without breaking vacuum, as shown. In an embodiment, the stoichiometric layer 303 is a material that includes oxygen and tantalum having a composition Ta2O5. The stoichiometric layer 303 may be formed using an atomic layer deposition (ALD) process. The ALD process may be characterized by a slow and controlled deposition rate resulting in a metal oxide film with a stoichiometric oxygen content. In some embodiments, the stoichiometric layer 303 is deposited using a physical vapor deposition (PVD) process. The PVD process may include depositing a metal oxide film in an ambient containing oxygen flowing at a constant rate. The stoichiometric layer 303 is deposited to a thickness between 2 nm and 5 nm.

The deposition method is continued with the formation of a non-stoichiometric layer 305 on the stoichiometric layer 303.
The PVD process may include depositing a metal oxide film in an ambient containing oxygen flowing at a constant rate. The deposition process may form a non-stoichiometric layer 305 that is slightly deficient in oxygen content. In some such embodiments, the non-stoichiometric layer 305 has an oxygen concentration gradient, with a higher concentration of oxygen proximate to the stoichiometric layer surface 303A and a lower concentration of oxygen distal from the stoichiometric layer surface 303A. Such an arrangement may preferably provide a greater number of oxygen vacancies in a location that aids with filament formation and dissolution.

The non-stoichiometric layer 305 may include a material having a composition and a thickness such as is described above in association with the oxygen exchange layer 110, for example, TaXOY. Utilizing a metal that is the same as the metal of the stoichiometric layer 303 enables an upper portion of the stoichiometric layer 303 to maintain oxygen vacancies after an anneal process (to be described further below). The presence of oxygen vacancies may reduce the electro-forming voltage during operation. In an embodiment, the non-stoichiometric layer 305 is blanket deposited on the stoichiometric layer 303, for example, using a PVD process.

The deposition method is continued with the formation of the conductive layer 307 on the non-stoichiometric layer 305. In an embodiment, the conductive layer 307 includes a material that is the same as or substantially the same as the material of the electrode 112 (described in association with Figure 1A). Referring again to Figure 3C, the conductive layer 307 may be deposited using a PVD process. In one example, the conductive layer 307 and the non-stoichiometric layer 305 are deposited sequentially in a same chamber or in a same tool without breaking vacuum. Sequential deposition without an air-break may prevent an uppermost portion of the non-stoichiometric layer 305 from becoming stoichiometric. Oxidation of the non-stoichiometric layer 305 can introduce variability in electro-forming voltage and variability in switching voltages during RRAM device operation. In some embodiments, the conductive layer 307 includes the metal of the stoichiometric layer 303 and the non-stoichiometric layer 305, i.e., Ta.

The conductive layer 307 is utilized as a work function electrode and may include a material that is substantially difficult to pattern when deposited to a thickness greater than 5nm. In some such embodiments, the conductive layer 307 is deposited to a thickness of approximately 10nm and a top electrode layer 309 is deposited on the conductive layer 307. Portions of the top electrode layer 309 may be sacrificed during subsequent processing operations. In an embodiment, the top electrode layer 309 is blanket deposited using one of the deposition processes described above. The top electrode layer 309 is deposited to a thickness between 20nm and 100nm. In some embodiments, the top electrode layer 309 includes the metal of the stoichiometric layer 303 and the non-stoichiometric layer 305, i.e., Ta. In other embodiments, the top electrode layer 309 and the conductive layer 307 each include the metal of the stoichiometric layer 303 and the non-stoichiometric layer 305, i.e., Ta.
In some such embodiments, top electrode layer 309 further includes nitrogen. In one embodiment, the conductive layer 301 includes ruthenium, conductive layer 307 includes a noble metal excluding ruthenium, and the top electrode layer 309 includes Ta, Ti or W or an alloy including nitrogen and at least one of Ta, Ti or W. In a second embodiment, conductive layer 301 includes a noble metal excluding ruthenium, conductive layer 307 includes a noble metal excluding ruthenium, and the top electrode layer 309 includes Ta, Ti or W or an alloy including nitrogen and at least one of Ta, Ti or W. In a third embodiment, the conductive layer 301 includes a noble metal excluding ruthenium, conductive layer 307 includes ruthenium, and the top electrode layer 309 includes Ta, Ti or W or an alloy including nitrogen and at least one of Ta, Ti or W.

Upon deposition of the top electrode layer 309, the RRAM material layer stack 300 may be subjected to a high temperature anneal process. In an embodiment, anneal temperatures reach up to 400°C and last for a time period of up to 60 minutes. Annealing is a thermal process that may drive oxygen from the stoichiometric layer 303, creating oxygen vacancies, VO, in the stoichiometric layer 303. When the non-stoichiometric layer 305 and stoichiometric layer 303 both include Ta, some oxygen from the stoichiometric layer 303 may diffuse toward the non-stoichiometric layer 305 above during the anneal process. The process is insufficient to fully oxidize the non-stoichiometric layer 305.

After annealing the material layer stack 300, a mask 311 is formed on the material layer stack 300. In the illustrative embodiment, the mask 311 is formed on the top electrode layer 309. In some embodiments, the mask 311 is formed by a lithographic process. In other embodiments, the mask 311 includes a dielectric material that has been patterned. The mask 311 defines a size of an RRAM device that will subsequently be formed.

Figure 3D illustrates the structure of Figure 3C following an etch process used to etch a plurality of layers of the material layer stack 300 to form an RRAM device 350. In an embodiment, an anisotropic plasma etch process is used to pattern the top electrode layer 309 to form a top electrode 114. Portions of the top electrode 114 may be eroded during the etch process, resulting in an outermost surface 114A that is curved, as shown. The plasma etch is continued to etch the conductive layer 307, the non-stoichiometric layer 305, the stoichiometric layer 303 and the conductive layer 301 to form the electrode 112, the oxygen exchange layer 110, the switching layer 108 and the bottom electrode 106, respectively.

In some embodiments, portions of one or more layers of the material layer stack 300 may become damaged by attack from energetic ion species during the plasma etch. In some such embodiments, the anneal process described above can be performed after the plasma etch process is completed.

In an embodiment, the plasma etch may be stopped after etching down to the conductive layer 301, exposing the conductive layer 301. A sacrificial spacer may be formed surrounding a portion of a partially patterned material layer stack 300 (above conductive layer 301, for example). The conductive layer 301 may be etched after formation of the spacer. In an embodiment, where the conductive layer 301 includes ruthenium, a plasma etch process may utilize oxygen to pattern the ruthenium.
The sacrificial spacer may protect portions of the partially patterned material layer stack 300, such as the oxygen exchange layer 110, from becoming oxidized. After etching the conductive layer 301 to form the bottom electrode 106, the sacrificial spacer may be removed. In such an embodiment, the bottom electrode may protrude laterally beyond the switching layer 108.

Figure 3E illustrates the structure of Figure 3D following the formation of a dielectric spacer layer 116 covering the RRAM device 350. The dielectric spacer layer 116 may be blanket deposited by a PVD, PECVD or an ALD process. In some embodiments, the dielectric spacer layer 116 is deposited immediately after forming the bottom electrode 106 and forms a hermetic seal completely around the RRAM device 350, including sidewall and top surfaces. The spacer 116 may be formed on the structure of the RRAM device 350 without breaking vacuum. In an embodiment, the dielectric spacer layer 116 includes a material such as silicon nitride, silicon carbide, carbon-doped silicon nitride or silicon dioxide. In an embodiment, the dielectric spacer layer has a thickness between 5nm and 50nm.

Figure 3F illustrates the structure of Figure 3E following the formation of a dielectric 352 on the dielectric spacer layer 116 and following the formation of contact metallization 126 on the RRAM device 350. The contact metallization 126 may be formed on the RRAM device 350 after deposition of a dielectric 352 on the RRAM device 350. In an embodiment, a via opening (not shown) may be formed in the dielectric 352. In the illustrative embodiment, forming the via opening etches a portion of the dielectric spacer layer 116 to expose the top electrode 114. In an embodiment, one or more materials of the contact metallization 126 may be deposited into the via opening and subsequently planarized to form metallization structure 126. Depending on the size of the via opening, the spacer 116 may or may not remain on a top surface 114B of the top electrode 114. In the illustrative embodiment, a portion of the spacer 116 remains on the top surface 114B.

Depending on the choice of materials and on fabrication processes, the RRAM device 350 may include all embodiments of the RRAM devices of memory apparatus 100A or memory apparatus 100B described above.

Figure 4A illustrates an I-V plot, demonstrating concepts involved with filament formation and voltage cycling (reading and writing) in an RRAM device, such as an RRAM device 400 depicted in Figure 4B, in accordance with embodiments of the present disclosure. RRAM device 400 is the same as or substantially the same as the RRAM device 102 described in association with Figure 1A. Referring again to Figure 4A, the initial operation of the RRAM device 400 begins by applying a voltage, between the top electrode 114 and the bottom electrode 106, that increases in magnitude until it reaches a value VElectro-Forming (point A to B). In an embodiment, VElectro-Forming is less than 1.6V. In an "intentional" one-time breakdown process, known as electro-forming, oxygen vacancies, VO, are moved from the oxygen exchange layer 110 into the switching layer 108 to augment the vacancies created during the anneal process described above. Movement of vacancies in response to an electric field generated in the RRAM device 400 leads to a formation of a "conductive filament" in the switching layer 108.
In an embodiment, the conductive filament may extend across switching layer 108 (point B).

Figure 4B depicts an illustration of a conductive filament 402 in the RRAM device 400, in accordance with an embodiment of the present disclosure. It is to be appreciated that a size of the conductive filament 402 may be determined by the resistance of the RRAM device before the process of electro-forming and by the electro-forming voltage. With a conductive filament 402 bridging from the top electrode 114 to the bottom electrode 106, the RRAM device 400 almost immediately becomes conductive. Referring again to the I-V plot, RRAM device 400 becomes conductive and the current through the RRAM device starts to increase (point B to C), until it reaches a predetermined compliance current, IComp. The current through the RRAM device 400 does not continue to increase beyond IComp. In an embodiment, when the RRAM device is coupled with a transistor, IComp may be the maximum current that the transistor can deliver to the RRAM device 400. At point C, the RRAM device 400 is in a low resistance state.

Reducing the magnitude of the voltage (while maintaining a positive polarity) between the top electrode 114 and bottom electrode 106 (moving from point C to D and then to point A) causes a reduction in the strength of the electric field. Applying a voltage of an opposite polarity between the top electrode 114 and bottom electrode 106 (moving from point A to F) causes a reversal in the direction of the electric field. In response to the change in the direction of the electric field, the oxygen vacancies move towards the oxygen exchange layer 110, leading to a dissolution of the conductive filament 402 in the switching layer 108. Filament dissolution takes place at a critical voltage (point F), termed VReset. In an embodiment, VReset is between -0.8 V and -1.0 V. Increasing the magnitude of the voltage beyond VReset changes the current flowing through the device.

Figure 4C depicts an illustration of a dissolved filament 404 in the RRAM device 400, in accordance with an embodiment of the present disclosure. With a dissolved filament 404, the current through the RRAM device 400 decreases dramatically and the device returns to a high resistance state (point G).

Referring again to the I-V plot in Figure 4A, it is to be appreciated that the high resistance level of the RRAM device, point G, is different and lower in magnitude compared to the resistance level of the device before the onset of the forming process. In other words, the resistance level of the RRAM device 400 in a high resistance state can be over 10 times smaller than the virgin resistance (discussed above). By decreasing the magnitude of the voltage and then increasing it in the positive direction, traversing from point G to H and then to point I in the I-V plot, the filament is recreated (at point I) under the action of vacancy migration. At a critical voltage, VSet, the filament completely bridges the top electrode 114 and the bottom electrode 106 and current begins to flow through the RRAM device 400. In an embodiment, VSet is less than 1.0 V. The RRAM device is, once again, said to be in a conductive or a low resistance state (at point J).
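The voltage thresholds involved in this cycling can be summarized in a compact state-machine sketch. The Python below is a minimal illustration, not the disclosure's model: the class and attribute names are assumptions, and the threshold values simply echo the embodiments above (VElectro-Forming below 1.6 V, VReset near -0.9 V, VSet below 1.0 V).

    # Minimal RRAM cycling sketch (illustrative; names and values are assumptions).
    class RramCell:
        def __init__(self, v_form=1.6, v_set=1.0, v_reset=-0.9):
            self.v_form, self.v_set, self.v_reset = v_form, v_set, v_reset
            self.formed = False
            self.state = "HRS"  # high resistance state until a filament forms

        def apply(self, volts):
            if not self.formed:
                if volts >= self.v_form:       # one-time electro-forming (A to B)
                    self.formed, self.state = True, "LRS"
            elif self.state == "LRS" and volts <= self.v_reset:
                self.state = "HRS"             # filament dissolves (point F)
            elif self.state == "HRS" and volts >= self.v_set:
                self.state = "LRS"             # filament re-forms (point I)
            return self.state

    cell = RramCell()
    for v in (1.6, -1.0, 1.0, 0.1):  # form, reset, set, then a small read voltage
        print(v, cell.apply(v))      # LRS, HRS, LRS, LRS (read does not disturb)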
The filament that is recreated at point J may have a size that is comparable to the size of the filament formed during the electro-forming process.

Cycling of an RRAM device 400 in this manner, where the resistance levels remain unchanged when the voltage between the top electrode 114 and the bottom electrode 106 is set to 0V, leads to a non-volatile memory effect. By increasing the magnitude of the voltage to at least 0.05V, the resistance state of the RRAM device 400 can be read. In one example, a voltage of 0.05V to 0.2V, referred to as a read voltage, VR, is much less than the switching voltage (VSet or VReset) and does not perturb the resistance state of the RRAM device 400. It is to be appreciated that the values VSet and VReset generally refer to a portion of a voltage that may be applied to a transistor in series with the RRAM device 400. The RRAM device 400 coupled with a transistor in this manner is given the term embedded memory.

Figure 5 illustrates a memory apparatus 100A including an RRAM device 102 coupled to a drain side of an access transistor 500. In an embodiment, the transistor 500 is on a substrate 501 and has a gate 502, a source region 504, and a drain region 506. In the illustrative embodiment, an isolation 508 is adjacent to the source region 504, drain region 506 and portions of the substrate 501. In some implementations of the disclosure, such as is shown, a pair of sidewall spacers 510 are on opposing sides of the gate 502.

The transistor 500 further includes a gate contact 512 above and electrically coupled to the gate 502, a drain contact 514 above and electrically coupled to the drain region 506, and a source contact 516 above and electrically coupled to the source region 504, as is illustrated in Figure 5. The transistor 500 also includes dielectric 518 adjacent to the gate 502, source region 504, drain region 506, isolation 508, sidewall spacers 510, gate contact 512, drain contact 514 and source contact 516.

In an embodiment, the memory apparatus 100A has one or more structural and material properties described above in association with Figure 1A. In the illustrative embodiment, the memory apparatus 100A includes an RRAM device 102 on a portion of an electrode 104, a bottom electrode 106 on the electrode 104, and a switching layer 108 on the bottom electrode 106. The switching layer 108 supports a conductive filament during operation. The switching layer 108 includes a metal and oxygen in substantially stoichiometric proportions. The RRAM device 102 further includes an oxygen exchange layer 110 including the metal and oxygen on the switching layer 108, an electrode 112 on the oxygen exchange layer 110 and a top electrode 114 on the electrode 112. In the illustrative embodiment, the memory apparatus 100A further includes a spacer 116 directly adjacent to the RRAM device, where the spacer 116 includes a dielectric. The electrode 104 is above and coupled with conductive interconnect 118 and adjacent to dielectric 518. In the illustrative embodiment, the conductive interconnect 118 is on and coupled with the drain contact 514. A contact metallization 126 is coupled with the top electrode 114 as shown. Contact metallization 126 may be connected to one or more circuit elements.

In other embodiments, a memory apparatus having one or more features of memory apparatus 100B may be coupled with the transistor 500.

Gate contact 512 and source contact 516 are each coupled with interconnects.
In the illustrative embodiment, the gate contact 512 is coupled with a gate interconnect 524 and the source contact 516 is coupled with a source interconnect 522. A dielectric 526 is adjacent to source interconnect 522, gate interconnect 524, memory apparatus 100A, source contact 516 and gate contact 512. As shown, the dielectric spacer 116 extends laterally beyond the memory apparatus 100A and over the dielectric 518 to the gate interconnect 524 and source interconnect 522. The dielectric spacer 116 also extends on a portion of the gate contact 512 and source contact 516, as shown.

In an embodiment, the underlying substrate 501 represents a surface used to manufacture integrated circuits. A suitable substrate 501 includes a material such as single crystal silicon, polycrystalline silicon or silicon on insulator (SOI), as well as substrates 501 formed of other semiconductor materials. In some embodiments, the substrate 501 is the same as or substantially the same as the substrate 101. The substrate 501 may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates.

In an embodiment, the transistor 500 associated with substrate 501 is a metal-oxide-semiconductor field-effect transistor (MOSFET or simply MOS transistor), fabricated on the substrate 501. In some embodiments, the transistor 500 is an access transistor 500. In various implementations of the disclosure, the transistor 500 may be a planar transistor, a nonplanar transistor, or a combination of both. Nonplanar transistors include FinFET transistors such as double-gate transistors and tri-gate transistors, and wrap-around or all-around gate transistors such as nanoribbon and nanowire transistors.

In some embodiments, gate 502 includes at least two layers, a gate dielectric layer 502A and a gate electrode 502B. The gate dielectric layer 502A may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (SiO2) and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric layer 502A to improve its quality when a high-k material is used.

The gate electrode 502B of the access transistor 500 of substrate 501 is formed on the gate dielectric layer 502A and may consist of at least one P-type work function metal or N-type work function metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some implementations, the gate electrode 502B may consist of a stack of two or more metal layers, where one or more metal layers are work function metal layers and at least one metal layer is a conductive fill layer.

For a PMOS transistor, metals that may be used for the gate electrode 502B include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide.
A P-type metal layer will enable the formation of a PMOS gate electrode with a work function that is between about 4.5 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a work function that is between about 3.5 eV and about 4.2 eV.

In some implementations, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate 501 and two sidewall portions that are substantially perpendicular to the top surface of the substrate 501. In another implementation, at least one of the metal layers that form the gate electrode 502B may simply be a planar layer that is substantially parallel to the top surface of the substrate 501 and does not include sidewall portions substantially perpendicular to the top surface of the substrate 501. In further implementations of the disclosure, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode 502B may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.

The sidewall spacers 510 may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers include deposition and etching process operations. In an alternate implementation, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.

As shown, the source region 504 and drain region 506 are formed within the substrate adjacent to the gate stack of each MOS transistor. The source region 504 and drain region 506 are generally formed using either an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorus, or arsenic may be ion-implanted into the substrate to form the source region 504 and drain region 506. An annealing process that activates the dopants and causes them to diffuse further into the substrate typically follows the ion implantation process. In the latter process, the substrate 501 may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the source region 504 and drain region 506. In some implementations, the source region 504 and drain region 506 may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some implementations, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorus. In further embodiments, the source region 504 and drain region 506 may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy.
And in further embodiments, one or more layers of metal and/or metal alloys may be used to form the source region 504 and drain region 506.

In an embodiment, the source contact 516, the drain contact 514 and gate contact 512 each include a multi-layer stack. In an embodiment, the multi-layer stack includes two or more distinct layers of metal, such as a layer of Ti, Ru or Al, and a conductive cap on the layer of metal. The conductive cap may include a material such as W or Cu.

In an embodiment, the source interconnect 522, gate interconnect 524, and contact metallization 126 include a material that is the same as or substantially the same as the material of the conductive interconnect 118.

The isolation 508 and dielectrics 518 and 526 may each include any material that has sufficient dielectric strength to provide electrical isolation. Materials may include silicon and one or more of oxygen, nitrogen or carbon, such as silicon dioxide, silicon nitride, silicon oxynitride, carbon-doped nitride or carbon-doped oxide.

Figure 6 illustrates a computing device 600 in accordance with embodiments of the present disclosure. As shown, computing device 600 houses a motherboard 602. Motherboard 602 may include a number of components, including but not limited to a processor 601 and at least one communications chip 604 or 605. Processor 601 is physically and electrically coupled to the motherboard 602. In some implementations, communications chip 605 is also physically and electrically coupled to motherboard 602. In further implementations, communications chip 605 is part of processor 601.

Depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to motherboard 602. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset 606, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). In an embodiment, the battery is coupled to power at least one of the processor or the volatile or non-volatile memory.

Communications chip 605 enables wireless communications for the transfer of data to and from computing device 600. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. Communications chip 605 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. Computing device 600 may include a plurality of communications chips 604 and 605.
For instance, a first communications chip 605 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communications chip 604 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

Processor 601 of the computing device 600 includes an integrated circuit die packaged within processor 601. In some embodiments, the integrated circuit die of processor 601 includes one or more transistors, interconnect structures, and non-volatile memory devices such as transistor 500, source interconnect 522, gate interconnect 524, contact metallization 126, conductive interconnect 118, and memory apparatus 100A including RRAM device 102, respectively (Figure 5). Referring again to Figure 6, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

Communications chip 605 also includes an integrated circuit die packaged within communications chip 605. In another embodiment, the integrated circuit die of communications chips 604, 605 includes one or more transistors, interconnect structures, non-volatile memory devices, conductive structures and metallization structures such as transistor 500, source interconnect 522, gate interconnect 524, contact metallization 126, conductive interconnect 118, and memory apparatus 100A including RRAM device 102, respectively (Figure 5).

Referring again to Figure 6, depending on its applications, computing device 600 may include other components that may or may not be physically and electrically coupled to motherboard 602. These other components may include, but are not limited to, volatile memory (e.g., DRAM) 607, 608, non-volatile memory (e.g., ROM) 610, a graphics CPU 612, flash memory, global positioning system (GPS) device 613, compass 614, a chipset 606, an antenna 616, a power amplifier 609, a touchscreen controller 611, a touchscreen display 617, a speaker 615, a camera 603, and a battery 618, as illustrated, and other components such as a digital signal processor, a crypto processor, an audio codec, a video codec, an accelerometer, a gyroscope, and a mass storage device (such as hard disk drive, solid state drive (SSD), compact disk (CD), digital versatile disk (DVD), and so forth), or the like. In further embodiments, any component housed within computing device 600 and discussed above may contain a stand-alone integrated circuit memory die that includes one or more arrays of NVM devices including one or more memory apparatuses, each coupled with a transistor.

In various implementations, the computing device 600 may be a laptop, a netbook, a notebook, an Ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 600 may be any other electronic device that processes data.

Figure 7 illustrates an integrated circuit (IC) structure 700 that includes one or more embodiments of the disclosure. The integrated circuit (IC) structure 700 is an intervening substrate used to bridge a first substrate 702 to a second substrate 704. The first substrate 702 may be, for instance, an integrated circuit die.
The second substrate 704 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an integrated circuit (IC) structure 700 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an integrated circuit (IC) structure 700 may couple an integrated circuit die to a ball grid array (BGA) 707 that can subsequently be coupled to the second substrate 704. In some embodiments, the first and second substrates 702/704 are attached to opposing sides of the integrated circuit (IC) structure 700. In other embodiments, the first and second substrates 702/704 are attached to the same side of the integrated circuit (IC) structure 700. And in further embodiments, three or more substrates are interconnected by way of the integrated circuit (IC) structure 700. The integrated circuit (IC) structure 700 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the integrated circuit (IC) structure may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials. The integrated circuit (IC) structure may include metal interconnects 708 and vias 710, including but not limited to through-silicon vias (TSVs) 712. The integrated circuit (IC) structure 700 may further include embedded devices 714, including both passive and active devices. Such embedded devices 714 include capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, and device structures including transistors, such as transistor 500 (described in Figure 5) coupled with a memory apparatus 100A including RRAM device 102, in accordance with an embodiment of the present disclosure. Referring again to Figure 7, the integrated circuit (IC) structure 700 may further include embedded devices 714 such as one or more resistive random-access memory devices, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the integrated circuit (IC) structure 700. In accordance with embodiments of the present disclosure, apparatuses or processes disclosed herein may be used in the fabrication of integrated circuit (IC) structure 700. Accordingly, one or more embodiments of the present disclosure relate generally to the fabrication of embedded microelectronic memory. The microelectronic memory may be non-volatile, wherein the memory can retain stored information even when not powered. One or more embodiments of the present disclosure relate to the fabrication of a memory apparatus including an RRAM device having a non-stoichiometric oxygen exchange layer above a stoichiometric switching layer. The memory apparatus may be used in an embedded non-volatile memory application. Thus, embodiments of the present disclosure include RRAM devices and methods of fabrication. In a first example, a memory apparatus includes an interconnect in a first dielectric above a substrate, and a structure above the interconnect, where the structure includes a diffusion barrier material. The structure substantially covers the interconnect.
A resistive random-access memory (RRAM) device is coupled to the interconnect, where the RRAM device includes a first electrode on a portion of the structure, a stoichiometric layer including metal and oxygen on the first electrode, a non-stoichiometric layer including the metal and oxygen on the stoichiometric layer, a second electrode including a barrier material on the non-stoichiometric layer, and a third electrode on the second electrode. A spacer is directly adjacent to the RRAM device, where the spacer includes a second dielectric. In second examples, for any of the first example, the first electrode includes a noble metal. In third examples, for any of the first through second examples, the stoichiometric layer and the non-stoichiometric layer each include tantalum. In fourth examples, for any of the first through third examples, the stoichiometric layer has a chemical composition of Ta2O5, and wherein the non-stoichiometric layer has a chemical composition of TaXOY, where O is oxygen and wherein the ratio between X and Y is between 1:1.08 and 1:1.2. In fifth examples, for any of the first through fourth examples, the non-stoichiometric layer has a gradient in oxygen concentration, where the concentration of oxygen decreases away from an interface between the non-stoichiometric layer and the stoichiometric layer toward the second electrode. In sixth examples, for any of the first through fifth examples, the stoichiometric layer has a thickness in the range of 2nm-5nm, where the non-stoichiometric layer has a thickness in the range of 5nm-15nm, and wherein the non-stoichiometric layer has a thickness that is between 2 and 3 times the thickness of the stoichiometric layer. In seventh examples, for any of the first through sixth examples, the first electrode includes a noble metal, and where the second electrode includes a noble metal. In eighth examples, for any of the first through seventh examples, the third electrode includes tantalum or an alloy, and where the alloy includes nitrogen and at least one of tantalum, tungsten or titanium. In ninth examples, for any of the first through eighth examples, the non-stoichiometric layer has a sidewall, and where a portion of the non-stoichiometric layer adjacent to the sidewall is substantially oxidized. In tenth examples, for any of the first through ninth examples, the portion of the non-stoichiometric layer adjacent to the sidewall has a lateral width of less than 3nm as measured from the sidewall. In eleventh examples, for any of the first through tenth examples, the third electrode has an outermost sidewall surface, and wherein a portion of the third electrode adjacent to the outermost sidewall surface includes oxygen. In twelfth examples, for any of the first through eleventh examples, the spacer is on a portion of an uppermost surface of the diffusion barrier, and on an uppermost surface of the third electrode. In a thirteenth example, for any of the first through twelfth examples, the memory apparatus further includes a metallization structure in contact with a portion of the third electrode. In a fourteenth example, a memory apparatus includes an interconnect in a dielectric above a substrate, and a diffusion barrier on an uppermost surface of the interconnect, where the diffusion barrier has a lowermost surface area that is greater than the uppermost surface area of the interconnect and further where the diffusion barrier covers the interconnect.
A resistive random-access memory (RRAM) device is coupled to the interconnect, where the RRAM device includes a bottom electrode including ruthenium on a portion of the diffusion barrier, a stoichiometric layer including oxygen and tantalum on the bottom electrode, and a layer including tantalum and oxygen on the stoichiometric layer. The layer further includes an inner portion and an outer portion adjacent to the inner portion, where the inner portion is non-stoichiometric and the outer portion is substantially stoichiometric. A barrier electrode is on the layer including the tantalum and oxygen, and a top electrode is on the barrier electrode, where the top electrode includes a first portion and a second portion adjacent to the first portion, where the first portion includes tantalum and the second portion includes tantalum and oxygen. In fifteenth examples, for any of the fourteenth examples, the stoichiometric layer has a chemical composition of Ta2O5, and wherein the layer including tantalum and oxygen has a chemical composition of TaXOY, where O is oxygen and where the ratio between X and Y is between 1:1.08 and 1:1.2. In sixteenth examples, for any of the fourteenth through fifteenth examples, the outer portion that is substantially stoichiometric has a thickness between 2nm and 5nm. In seventeenth examples, for any of the fourteenth through sixteenth examples, the non-stoichiometric inner portion has a gradient in oxygen concentration, and where the concentration of oxygen decreases away from an interface between the inner portion and the stoichiometric layer toward the barrier electrode. In eighteenth examples, for any of the fourteenth through seventeenth examples, the bottom electrode has a thickness between 5nm and 10nm. In a nineteenth example, for any of the fourteenth through eighteenth examples, the bottom electrode includes Ru and the barrier electrode includes a noble metal. In twentieth examples, for any of the fourteenth through nineteenth examples, the top electrode includes an outermost surface, and where a portion of the top electrode adjacent to the outermost surface includes oxygen. In twenty-first examples, for any of the fourteenth through twentieth examples, the portion of the top electrode adjacent to the outermost surface including oxygen has a lateral thickness that correlates with a vertical thickness of the bottom electrode, where the lateral thickness is orthogonal to the vertical thickness, and where the vertical thickness is measured from an interface between an uppermost surface of the diffusion barrier and a lowermost surface of the bottom electrode. In a twenty-second example, a system includes a processor and a radio transceiver coupled to the processor, where the transceiver includes a transistor. The transistor includes a drain contact coupled to a drain, a source contact coupled to a source and a gate contact coupled to a gate. The radio transceiver further includes a resistive random-access memory (RRAM) device coupled with the drain contact, where the RRAM device includes a first electrode above the drain contact, a stoichiometric layer including metal and oxygen on the first electrode, a non-stoichiometric layer including the metal and oxygen on the stoichiometric layer, a second electrode on the non-stoichiometric layer and a third electrode on the second electrode.
A spacer is directly adjacent to the RRAM device, where the spacer includes a second dielectric. In twenty-third examples, for any of the twenty-second examples, the system further includes a battery coupled to power at least one of the processor or memory. |
Method, apparatus, and program means for performing a dot-product operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources store to a storage location a result value equal to a dot-product of at least two operands. |
CLAIMS What is claimed is: 1. A machine-readable medium having stored thereon an instruction, which if executed by a machine causes the machine to perform a method comprising: determining a dot-product result of at least two operands, each having a plurality of packed values of a first datatype; storing the dot-product result. 2. The machine-readable medium of claim 1, wherein the first datatype is an integer datatype. 3. The machine-readable medium of claim 1, wherein the first datatype is a floating point datatype. 4. The machine-readable medium of claim 1, wherein the at least two operands each have only two packed values. 5. The machine-readable medium of claim 1, wherein the at least two operands each have only four packed values. 6. The machine-readable medium of claim 1, wherein each of the plurality of packed values is a single-precision value and is to be represented by 32 bits. 7. The machine-readable medium of claim 1, wherein each of the plurality of packed values is a double-precision value and is to be represented by 64 bits. 8. The machine-readable medium of claim 1, wherein the at least two operands and the dot-product result are to be stored in at least two registers to store up to 128 bits of data. 9. An apparatus comprising: a first logic to perform a single-instruction-multiple-data (SIMD) dot-product instruction on at least two packed operands of a first datatype. 10. The apparatus of claim 9, wherein the SIMD dot-product instruction includes a source operand indicator, a destination operand indicator, and at least one immediate value indicator. 11. The apparatus of claim 10, wherein the source operand indicator includes an address of a source register having a plurality of elements to store a plurality of packed values. 12. The apparatus of claim 11, wherein the destination operand indicator includes an address of a destination register having a plurality of elements to store a plurality of packed values. 13. The apparatus of claim 12, wherein the immediate value indicator includes a plurality of control bits. 14. The apparatus of claim 9, wherein the at least two packed operands are each double-precision integers. 15. The apparatus of claim 9, wherein the at least two packed operands are each double precision floating point values. 16. The apparatus of claim 9, wherein the at least two packed operands are each single precision integers. 17. The apparatus of claim 9, wherein the at least two packed operands are each single precision floating point values. 18. A system comprising: a first memory to store a single-instruction-multiple-data (SIMD) dot-product instruction; a processor coupled to the first memory to perform the SIMD dot-product instruction. 19. The system of claim 18, wherein the SIMD dot-product instruction includes a source operand indicator, a destination operand indicator, and at least one immediate value indicator. 20. The system of claim 19, wherein the source operand indicator includes an address of a source register having a plurality of elements to store a plurality of packed values. 21. The system of claim 20, wherein the destination operand indicator includes an address of a destination register having a plurality of elements to store a plurality of packed values. 22. The system of claim 21, wherein the immediate value indicator includes a plurality of control bits. 23. The system of claim 18, wherein the at least two packed operands are each double-precision integers. 24.
The system of claim 18, wherein the at least two packed operands are each double precision floating point values. 25. The system of claim 18, wherein the at least two packed operands are each single precision integers. 26. The system of claim 18, wherein the at least two packed operands are each single precision floating point values. 27. A method comprising: multiplying a first data element of a first packed operand and a first data element of a second packed operand to generate a first product; multiplying a second data element of the first packed operand and a second data element of the second packed operand to generate a second product; adding the first product to the second product to generate a dot-product result. 28. The method of claim 27, further comprising multiplying a third data element of the first packed operand and a third data element of the second packed operand to generate a third product. 29. The method of claim 28, further comprising multiplying a fourth data element of the first packed operand and a fourth data element of the second packed operand to generate a fourth product. 30. A processor comprising: a source register to store a first packed operand, including a first and second data value; a destination register to store a second packed operand, including a third and fourth data value; logic to perform a single-instruction-multiple-data (SIMD) dot-product instruction according to a control value indicated by the dot-product instruction, the logic comprising a first multiplier to multiply the first and third data values to generate a first product, a second multiplier to multiply the second and fourth data values to generate a second product, the logic further including at least one adder to add the first and second products to produce at least one sum. 31. The processor of claim 30, wherein the logic further includes a first multiplexer to select between the first product and a null value, depending upon a first bit of the control value. 32. The processor of claim 31, wherein the logic further includes a second multiplexer to select between the second product and a null value, depending upon a second bit of the control value. 33. The processor of claim 32, wherein the logic further includes a third multiplexer to select between the sum and a null value to be stored in a first element of the destination register. 34. The processor of claim 33, wherein the logic further includes a fourth multiplexer to select between the sum and a null value to be stored in a second element of the destination register. 35. The processor of claim 30, wherein the first, second, third, and fourth data values are 64 bit integer values. 36. The processor of claim 30, wherein the first, second, third, and fourth data values are 64 bit floating point values. 37. The processor of claim 30, wherein the first, second, third, and fourth data values are 32 bit integer values. 38. The processor of claim 30, wherein the first, second, third, and fourth data values are 32 bit floating point values. 39. The processor of claim 30, wherein the source and destination registers are to store at least 128 bits of data. |
INSTRUCTION AND LOGIC FOR PERFORMING A DOT-PRODUCT OPERATION
FIELD OF THE INVENTION
The present disclosure pertains to the field of processing apparatuses and associated software and software sequences that perform mathematical operations.
DESCRIPTION OF RELATED ART
Computer systems have become increasingly pervasive in our society. The processing capabilities of computers have increased the efficiency and productivity of workers in a wide spectrum of professions. As the costs of purchasing and owning a computer continue to drop, more and more consumers have been able to take advantage of newer and faster machines. Furthermore, many people enjoy the use of notebook computers because of the freedom they provide. Mobile computers allow users to easily transport their data and work with it as they leave the office or travel. This scenario is quite familiar to marketing staff, corporate executives, and even students. [0003] As processor technology advances, newer software code is also being generated to run on machines with these processors. Users generally expect and demand higher performance from their computers regardless of the type of software being used. One issue can arise from the kinds of instructions and operations that are actually being performed within the processor. Certain types of operations require more time to complete based on the complexity of the operations and/or type of circuitry needed. This provides an opportunity to optimize the way certain complex operations are executed inside the processor. Media applications have been driving microprocessor development for more than a decade. In fact, most computing upgrades in recent years have been driven by media applications. These upgrades have predominantly occurred within consumer segments, although significant advances have also been seen in enterprise segments for entertainment, enhanced education, and communication purposes. Nevertheless, future media applications will impose even higher computational requirements. As a result, tomorrow's personal computing experience will be even richer in audio-visual effects, as well as being easier to use, and more importantly, computing will merge with communications. [0005] Accordingly, the display of images, as well as playback of audio and video data, which is collectively referred to as content, have become increasingly popular applications for current computing devices. Filtering and convolution operations are some of the most common operations performed on content data, such as image, audio and video data. Such operations are computationally intensive, but offer a high level of data parallelism that can be exploited through an efficient implementation using various data storage devices, such as, for example, single instruction multiple data (SIMD) registers. A number of current architectures also require multiple operations, instructions, or sub-instructions (often referred to as "micro-operations" or "uops") to perform various mathematical operations on a number of operands, thereby diminishing throughput and increasing the number of clock cycles required to perform the mathematical operations. For example, an instruction sequence consisting of a number of instructions may be required to perform one or more operations necessary to generate a dot-product, including adding the products of two or more numbers represented by various datatypes within a processing apparatus, system or computer program.
However, such prior art techniques may require numerous processing cycles and may cause a processor or system to consume unnecessary power in order to generate the dot-product. Furthermore, some prior art techniques may be limited in the operand datatypes that may be operated upon.
BRIEF DESCRIPTION OF THE FIGURES
The present invention is illustrated by way of example and not limitation in the Figures of the accompanying drawings: Figure 1A is a block diagram of a computer system formed with a processor that includes execution units to execute an instruction for a dot-product operation in accordance with one embodiment of the present invention; Figure 1B is a block diagram of another exemplary computer system in accordance with an alternative embodiment of the present invention; Figure 1C is a block diagram of yet another exemplary computer system in accordance with another alternative embodiment of the present invention; Figure 2 is a block diagram of the micro-architecture for a processor of one embodiment that includes logic circuits to perform a dot-product operation in accordance with the present invention; Figure 3A illustrates various packed data type representations in multimedia registers according to one embodiment of the present invention; [0013] Figure 3B illustrates packed data-types in accordance with an alternative embodiment; Figure 3C illustrates various signed and unsigned packed data type representations in multimedia registers according to one embodiment of the present invention; Figure 3D illustrates one embodiment of an operation encoding (opcode) format; Figure 3E illustrates an alternative operation encoding (opcode) format; [0017] Figure 3F illustrates yet another alternative operation encoding format; Figure 4 is a block diagram of one embodiment of logic to perform a dot-product operation on packed data operands in accordance with the present invention. [0019] Figure 5a is a block diagram of logic to perform a dot-product operation on single precision packed data operands in accordance with one embodiment of the present invention; Figure 5b is a block diagram of logic to perform a dot-product operation on double precision packed data operands in accordance with one embodiment of the present invention; Figure 6A is a block diagram of a circuit for performing a dot-product operation in accordance with one embodiment of the present invention; [0022] Figure 6B is a block diagram of a circuit for performing a dot-product operation in accordance with another embodiment of the present invention; Figure 7A is a pseudo-code representation of operations that may be performed by executing a DPPS instruction, according to one embodiment. [0024] Figure 7B is a pseudo-code representation of operations that may be performed by executing a DPPD instruction, according to one embodiment.
DETAILED DESCRIPTION
The following description describes embodiments of a technique to perform a dot-product operation within a processing apparatus, computer system, or software program. In the following description, numerous specific details such as processor types, micro-architectural conditions, events, enablement mechanisms, and the like are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details.
Additionally, some well known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring the present invention. Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. The same techniques and teachings of the present invention can easily be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of the present invention are applicable to any processor or machine that performs data manipulations. However, the present invention is not limited to processors or machines that perform 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data operations and can be applied to any processor and machine in which manipulation of packed data is needed. [0027] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. One of ordinary skill in the art, however, will appreciate that these specific details are not necessary in order to practice the present invention. In other instances, well known electrical structures and circuits have not been set forth in particular detail in order not to unnecessarily obscure the present invention. In addition, the following description provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of the present invention rather than to provide an exhaustive list of all possible implementations of the present invention. Although the examples below describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present invention can be accomplished by way of software. In one embodiment, the methods of the present invention are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present invention. The present invention may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform a process according to the present invention. Alternatively, the steps of the present invention might be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components. Such software can be stored within a memory in the system.
Similarly, the code can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), magneto-optical disks, Read-Only Memories (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, a transmission over the Internet, electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.) or the like. Accordingly, the computer-readable medium includes any type of media/machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer). Moreover, the present invention may also be downloaded as a computer program product. As such, the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client). The transfer of the program may be by way of electrical, optical, acoustical, or other forms of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem, network connection or the like). A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. An optical or electrical wave modulated or otherwise generated to transmit such information, a memory, or a magnetic or optical storage such as a disc may be the machine readable medium. Any of these media may "carry" or "indicate" the design or software information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may make copies of an article (a carrier wave) embodying techniques of the present invention. [0031] In modern processors, a number of different execution units are used to process and execute a variety of code and instructions. Not all instructions are created equal as some are quicker to complete while others can take an enormous number of clock cycles. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible.
However, there are certain instructions that have greater complexity and require more in terms of execution time and processor resources. For example, there are floating point instructions, load/store operations, data moves, etc. As more and more computer systems are used in internet and multimedia applications, additional processor support has been introduced over time. For instance, Single Instruction, Multiple Data (SIMD) integer/floating point instructions and Streaming SIMD Extensions (SSE) are instructions that reduce the overall number of instructions required to execute a particular program task, which in turn can reduce the power consumption. These instructions can speed up software performance by operating on multiple data elements in parallel. As a result, performance gains can be achieved in a wide range of applications including video, speech, and image/photo processing. The implementation of SIMD instructions in microprocessors and similar types of logic circuit usually involves a number of issues. Furthermore, the complexity of SIMD operations often leads to a need for additional circuitry in order to correctly process and manipulate the data. Presently, a SIMD dot-product instruction is not available. Without the presence of a SIMD dot-product instruction, a large number of instructions and data registers may be needed to accomplish the same results in applications such as audio/video compression, processing, and manipulation. Thus, at least one dot-product instruction in accordance with embodiments of the present invention can reduce code overhead and resource requirements. Embodiments of the present invention provide a way to implement a dot-product operation as an algorithm that makes use of SIMD related hardware. Presently, it is somewhat difficult and tedious to perform dot-product operations on data in a SIMD register. Some algorithms require more instructions to arrange data for arithmetic operations than the actual number of instructions to execute those operations. By implementing embodiments of a dot-product operation in accordance with embodiments of the present invention, the number of instructions needed to achieve dot-product processing can be drastically reduced. [0034] Embodiments of the present invention involve an instruction for implementing a dot-product operation. A dot-product operation generally involves multiplying at least two values and adding this product to the product of at least two other values. Other variations may be made on the generic dot-product algorithm, including adding the result of various dot-product operations to generate another dot-product. For example, a dot-product operation according to one embodiment as applied to data elements can be generically represented as:
DEST1 ← SRC1 * SRC2;
DEST2 ← SRC3 * SRC4;
DEST3 ← DEST1 + DEST2;
For a packed SIMD data operand, this flow can be applied to each data element of each operand. In the above flow, "DEST" and "SRC" are generic terms to represent the source and destination of the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having other names or functions than those depicted. For example, in one embodiment, DEST1 and DEST2 may be a first and second temporary storage area (e.g., "TEMP1" and "TEMP2" register), SRC1 and SRC3 may be first and second destination storage area (e.g., "DEST1" and "DEST2" register), and so forth.
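To make the generic flow above concrete, the following C sketch applies it to four-element single-precision operands. It is illustrative only: the function and variable names (dot_product4, a, b) are not taken from the patent text, and the scalar arithmetic stands in for what the described SIMD instruction would perform in a single operation.
#include <stdio.h>
/* A minimal scalar model of the generic flow above, assuming
 * four-element single-precision operands; names are illustrative. */
static float dot_product4(const float a[4], const float b[4])
{
    float dest1 = a[0] * b[0];   /* DEST1 <- SRC1 * SRC2 */
    float dest2 = a[1] * b[1];   /* DEST2 <- SRC3 * SRC4 */
    float dest3 = dest1 + dest2; /* DEST3 <- DEST1 + DEST2 */
    /* Repeating the flow for the remaining element pairs and summing
     * yields the full four-element dot-product. */
    return dest3 + a[2] * b[2] + a[3] * b[3];
}
int main(void)
{
    const float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    const float b[4] = {5.0f, 6.0f, 7.0f, 8.0f};
    printf("%f\n", dot_product4(a, b)); /* 5 + 12 + 21 + 32 = 70 */
    return 0;
}
On hardware that provides such an instruction, this entire sequence collapses into a single operation; for example, compilers expose the SSE4.1 DPPS instruction discussed with Figure 7A through the _mm_dp_ps intrinsic.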
In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register). Furthermore, in one embodiment, a dot-product operation may generate a sum of the dot-products generated by the above generic flow. Figure 1A is a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction for a dot-product operation in accordance with one embodiment of the present invention. System 100 includes a component, such as a processor 102, to employ execution units including logic to perform algorithms for processing data, in accordance with the present invention, such as in the embodiment described herein. System 100 is representative of processing systems based on the PENTIUM(R) III, PENTIUM(R) 4, Xeon(TM), Itanium(R), XScale(TM) and/or StrongARM(TM) microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS(TM) operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software. Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that performs dot-product operations on operands. Furthermore, some architectures have been implemented to enable instructions to operate on several data simultaneously to improve the efficiency of multimedia applications. As the type and volume of data increases, computers and their processors have to be enhanced to manipulate data using more efficient methods. Figure 1A is a block diagram of a computer system 100 formed with a processor 102 that includes one or more execution units 108 to perform an algorithm to calculate the dot-product of data elements from one or more operands in accordance with one embodiment of the present invention. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments can be included in a multiprocessor system. System 100 is an example of a hub architecture. The computer system 100 includes a processor 102 to process data signals. The processor 102 can be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 102 is coupled to a processor bus 110 that can transmit data signals between the processor 102 and other components in the system 100.
The elements of system 100 perform their conventional functions that are well known to those familiar with the art. In one embodiment, the processor 102 includes a Level 1 (L1) internal cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. Alternatively, in another embodiment, the cache memory can reside external to the processor 102. Other embodiments can also include a combination of both internal and external caches depending on the particular implementation and needs. Register file 106 can store different types of data in various registers including integer registers, floating point registers, status registers, and an instruction pointer register. Execution unit 108, including logic to perform integer and floating point operations, also resides in the processor 102. The processor 102 also includes a microcode (ucode) ROM that stores microcode for certain macroinstructions. For this embodiment, execution unit 108 includes logic to handle a packed instruction set 109. In one embodiment, the packed instruction set 109 includes a packed dot-product instruction for calculating the dot-product of a number of operands. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications can be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This can eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time. Alternate embodiments of an execution unit 108 can also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 includes a memory 120. Memory 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device. Memory 120 can store instructions and/or data represented by data signals that can be executed by the processor 102. [0041] A system logic chip 116 is coupled to the processor bus 110 and memory 120. The system logic chip 116 in the illustrated embodiment is a memory controller hub (MCH). The processor 102 can communicate to the MCH 116 via a processor bus 110. The MCH 116 provides a high bandwidth memory path 118 to memory 120 for instruction and data storage and for storage of graphics commands, data and textures. The MCH 116 is to direct data signals between the processor 102, memory 120, and other components in the system 100 and to bridge the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 can provide a graphics port for coupling to a graphics controller 112. The MCH 116 is coupled to memory 120 through a memory interface 118. The graphics card 112 is coupled to the MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114. System 100 uses a proprietary hub interface bus 122 to couple the MCH 116 to the I/O controller hub (ICH) 130. The ICH 130 provides direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 120, chipset, and processor 102.
Some examples are the audio controller, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller 134. The data storage device 124 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. [0043] For another embodiment of a system, an execution unit to execute an algorithm with a dot-product instruction can be used with a system on a chip. One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system is a flash memory. The flash memory can be located on the same die as the processor and other system components. Additionally, other logic blocks such as a memory controller or graphics controller can also be located on a system on a chip. Figure 1B illustrates a data processing system 140 which implements the principles of one embodiment of the present invention. It will be readily appreciated by one of skill in the art that the embodiments described herein can be used with alternative processing systems without departing from the scope of the invention. [0045] Computer system 140 comprises a processing core 159 capable of performing SIMD operations including a dot-product operation. For one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and, by being represented on a machine readable medium in sufficient detail, may be suitable to facilitate said manufacture. Processing core 159 comprises an execution unit 142, a set of register file(s) 145, and a decoder 144. Processing core 159 also includes additional circuitry (not shown) which is not necessary to the understanding of the present invention. Execution unit 142 is used for executing instructions received by processing core 159. In addition to recognizing typical processor instructions, execution unit 142 can recognize instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 includes instructions for supporting dot-product operations, and may also include other packed instructions. Execution unit 142 is coupled to register file 145 by an internal bus. Register file 145 represents a storage area on processing core 159 for storing information, including data. As previously mentioned, it is understood that the storage area used for storing the packed data is not critical. Execution unit 142 is coupled to decoder 144. Decoder 144 is used for decoding instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. [0047] Processing core 159 is coupled with bus 141 for communicating with various other system devices, which may include but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152.
In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153. Such I/O devices may include but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth wireless UART 157 and I/O expansion interface 158. One embodiment of data processing system 140 provides for mobile, network and/or wireless communications and a processing core 159 capable of performing SIMD operations including a dot-product operation. Processing core 159 may be programmed with various audio, video, imaging and communications algorithms including discrete transformations such as a Walsh-Hadamard transform, a fast Fourier transform (FFT), a discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM). Some embodiments of the invention may also be applied to graphics applications, such as three dimensional ("3D") modeling, rendering, object collision detection, 3D object transformation and lighting, etc. Figure 1C illustrates yet another alternative embodiment of a data processing system capable of performing SIMD dot-product operations. In accordance with one alternative embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. The input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 is capable of performing SIMD operations including dot-product operations. Processing core 170 may be suitable for manufacture in one or more process technologies and, by being represented on a machine readable medium in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160 including processing core 170. For one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register file(s) 164. One embodiment of main processor 166 comprises a decoder 165 to recognize instructions of instruction set 163, including SIMD dot-product calculation instructions, for execution by execution unit 162. For alternative embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165B to decode instructions of instruction set 163. Processing core 170 also includes additional circuitry (not shown) which is not necessary to the understanding of embodiments of the present invention. In operation, the main processor 166 executes a stream of data processing instructions that control data processing operations of a general type including interactions with the cache memory 167 and the input/output system 168. Embedded within the stream of data processing instructions are SIMD coprocessor instructions. The decoder 165 of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, the main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 166, from where they are received by any attached SIMD coprocessor.
In this case, the SIMD coprocessor 161 will accept and execute any received SIMD coprocessor instructions intended for it. Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communications. For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames. For one embodiment of processing core 170, main processor 166 and a SIMD coprocessor 161 are integrated into a single processing core 170 comprising an execution unit 162, a set of register file(s) 164, and a decoder 165 to recognize instructions of instruction set 163 including SIMD dot-product instructions. [0052] Figure 2 is a block diagram of the micro-architecture for a processor 200 that includes logic circuits to perform a dot-product instruction in accordance with one embodiment of the present invention. For one embodiment of the dot-product instruction, the instruction can multiply a first data element with a second data element and add this product to the product of a third and fourth data element. In some embodiments, the dot-product instruction can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment, the in-order front end 201 is the part of the processor 200 that fetches macro-instructions to be executed and prepares them to be used later in the processor pipeline. The front end 201 may include several units. In one embodiment, the instruction prefetcher 226 fetches macro-instructions from memory and feeds them to an instruction decoder 228 which in turn decodes them into primitives called microinstructions or micro-operations (also called micro-ops or uops) that the machine can execute. In one embodiment, the trace cache 230 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 234 for execution. When the trace cache 230 encounters a complex macro-instruction, the microcode ROM 232 provides the uops needed to complete the operation. [0053] Many macro-instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete a macro-instruction, the decoder 228 accesses the microcode ROM 232 to do the macro-instruction. For one embodiment, a packed dot-product instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 228. In another embodiment, an instruction for a packed dot-product algorithm can be stored within the microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. The trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences for the dot-product algorithm in the micro-code ROM 232.
After the microcode ROM 232 finishes sequencing micro-ops for the current macro-instruction, the front end 201 of the machine resumes fetching micro-ops from the trace cache 230. Some SIMD and other multimedia types of instructions are considered complex instructions. Most floating point related instructions are also complex instructions. As such, when the instruction decoder 228 encounters a complex macro-instruction, the microcode ROM 232 is accessed at the appropriate location to retrieve the microcode sequence for that macro-instruction. The various micro-ops needed for performing that macro-instruction are communicated to the out-of-order execution engine 203 for execution at the appropriate integer and floating point execution units. [0055] The out-of-order execution engine 203 is where the micro-instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of micro-instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206. The uop schedulers 202, 204, 206, determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 202 of this embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution. Register files 208, 210, sit between the schedulers 202, 204, 206, and the execution units 212, 214, 216, 218, 220, 222, 224 in the execution block 211. There is a separate register file 208, 210, for integer and floating point operations, respectively. Each register file 208, 210, of this embodiment also includes a bypass network that can bypass or forward just completed results that have not yet been written into the register file to new dependent uops. The integer register file 208 and the floating point register file 210 are also capable of communicating data with each other. For one embodiment, the integer register file 208 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 210 of one embodiment has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width. [0057] The execution block 211 contains the execution units 212, 214, 216, 218, 220, 222, 224, where the instructions are actually executed. This section includes the register files 208, 210, that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 200 of this embodiment is comprised of a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, floating point move unit 224.
For this embodiment, the floating point execution blocks 222, 224, execute floating point, MMX, SIMD, and SSE operations. The floating point ALU 222 of this embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present invention, any act involving a floating point value occurs with the floating point hardware. For example, conversions between integer format and floating point format involve a floating point register file. Similarly, a floating point divide operation happens at a floating point divider. On the other hand, non-floating point numbers and integer types are handled with integer hardware resources. The simple, very frequent ALU operations go to the high-speed ALU execution units 216, 218. The fast ALUs 216, 218, of this embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 220 as the slow ALU 220 includes integer execution hardware for long latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 212, 214. For this embodiment, the integer ALUs 216, 218, 220, are described in the context of performing integer operations on 64 bit data operands. In alternative embodiments, the ALUs 216, 218, 220, can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 222, 224, can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 222, 224, can operate on 128 bits wide packed data operands in conjunction with SIMD and multimedia instructions. In this embodiment, the uop schedulers 202, 204, 206, dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 200, the processor 200 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instruction sequences for dot-product operations. The term "registers" is used herein to refer to the on-board processor storage locations that are used as part of macro-instructions to identify operands. In other words, the registers referred to herein are those that are visible from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment need only be capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data.
A register file of one embodiment also contains sixteen XMM and general purpose registers (e.g., the "EM64T" additions), and eight multimedia SIMD registers for packed data. For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64 bits wide MMX(TM) registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128 bits wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In this embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types.
In the examples of the following figures, a number of data operands are described. Figure 3A illustrates various packed data type representations in multimedia registers according to one embodiment of the present invention. Fig. 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128 bits wide operands. The packed byte format 310 of this example is 128 bits long and contains sixteen packed byte data elements. A byte is defined here as 8 bits of data. Information for each byte data element is stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 127 through bit 120 for byte 15. Thus, all available bits are used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in parallel.
Generally, a data element is an individual piece of data that is stored in a single register or memory location with other data elements of the same length. In packed data sequences relating to SSEx technology, the number of data elements stored in an XMM register is 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register is 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in Fig. 3A are 128 bits long, embodiments of the present invention can also operate with 64 bit wide or other sized operands. The packed word format 320 of this example is 128 bits long and contains eight packed word data elements. Each packed word contains sixteen bits of information. The packed doubleword format 330 of Fig. 3A is 128 bits long and contains four packed doubleword data elements. Each packed doubleword data element contains thirty two bits of information. A packed quadword is 128 bits long and contains two packed quadword data elements.
Figure 3B illustrates alternative in-register data storage formats. Each packed data can include more than one independent data element. Three packed data formats are illustrated: packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contain fixed-point data elements. For an alternative embodiment one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements.
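As an informal illustration of the Figure 3A and 3B layouts just described, the packed formats can be modeled in C as overlapping views of the same 128 bits of register storage. The following sketch is illustrative only; the union name, field names, and function name are invented for the example and are not part of the disclosed embodiments.

```c
#include <stdint.h>

/* Hypothetical model of a 128-bit multimedia register holding the
 * packed data types of Figures 3A and 3B. Each view overlays the
 * same sixteen bytes of storage. */
typedef union {
    uint8_t  byte[16];   /* packed byte 310: sixteen 8-bit elements   */
    uint16_t word[8];    /* packed word 320: eight 16-bit elements    */
    uint32_t dword[4];   /* packed doubleword 330: four 32-bit elems  */
    uint64_t qword[2];   /* packed quadword: two 64-bit elements      */
    float    single[4];  /* packed single 342: four 32-bit elements   */
    double   dbl[2];     /* packed double 343: two 64-bit elements    */
} xmm_t;

/* One SIMD-style operation on the packed byte view: with sixteen
 * elements resident in one register, a single pass corresponds to
 * the sixteen-wide parallel operation described above (modeled
 * here sequentially as a loop). */
static void packed_byte_add(xmm_t *dst, const xmm_t *a, const xmm_t *b)
{
    for (int i = 0; i < 16; i++)
        dst->byte[i] = (uint8_t)(a->byte[i] + b->byte[i]);
}
```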
One alternative embodiment of packed half 341 is one hundred twenty-eight bits long containing eight 16-bit data elements. One embodiment of packed single 342 is one hundred twenty-eight bits long and contains four 32-bit data elements. One embodiment of packed double 343 is one hundred twenty-eight bits long and contains two 64-bit data elements. It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96-bits, 160-bits, 192-bits, 224-bits, 256-bits or more.
Figure 3C illustrates various signed and unsigned packed data type representations in multimedia registers according to one embodiment of the present invention. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element is stored in bit seven through bit zero for byte zero, bit fifteen through bit eight for byte one, bit twenty-three through bit sixteen for byte two, and finally bit one hundred twenty-seven through bit one hundred twenty for byte fifteen. Thus, all available bits are used in the register. This storage arrangement can increase the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element is the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero are stored in a SIMD register. Signed packed word representation 347 is similar to the unsigned packed word in-register representation 346. Note that the sixteenth bit of each word data element is the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 is similar to unsigned packed doubleword in-register representation 348. Note that the necessary sign bit is the thirty-second bit of each doubleword data element.
[0064] Figure 3D is a depiction of one embodiment of an operation encoding (opcode) format 360, having thirty-two or more bits, and register/memory operand addressing modes corresponding with a type of opcode format described in the "IA-32 Intel Architecture Software Developer's Manual Volume 2: Instruction Set Reference," which is available from Intel Corporation, Santa Clara, CA on the world-wide-web (www) at intel.com/design/litcentr. In one embodiment, a dot-product operation may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. For one embodiment of the dot-product instruction, destination operand identifier 366 is the same as source operand identifier 364, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 366 is the same as source operand identifier 365, whereas in other embodiments they are different. In one embodiment of a dot-product instruction, one of the source operands identified by source operand identifiers 364 and 365 is overwritten by the results of the dot-product operations, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element.
For one embodiment of the dot-product instruction, operand identifiers 364 and 365 may be used to identify 32-bit or 64-bit source and destination operands.
[0065] Figure 3E is a depiction of another alternative operation encoding (opcode) format 370, having forty or more bits. Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. The type of dot-product operation may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. For one embodiment of the dot-product instruction, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. For one embodiment of the dot-product instruction, destination operand identifier 376 is the same as source operand identifier 374, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 376 is the same as source operand identifier 375, whereas in other embodiments they are different. In one embodiment, the dot-product operations multiply one of the operands identified by operand identifiers 374 and 375 by another operand identified by operand identifiers 374 and 375, and one of those operands is overwritten by the results of the dot-product operations, whereas in other embodiments the dot-product of the operands identified by identifiers 374 and 375 is written to another data element in another register. Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.
Turning next to Figure 3F, in some alternative embodiments, 64 bit single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. The type of CDP instruction, for alternative embodiments of dot-product operations, may be encoded by one or more of fields 383, 384, 387, and 388. Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor can operate on 8, 16, 32, and 64 bit values. For one embodiment, the dot-product operation is performed on integer data elements. In some embodiments, a dot-product instruction may be executed conditionally, using selection field 381. For some dot-product instructions source data sizes may be encoded by field 383. In some embodiments of the dot-product instruction, zero (Z), negative (N), carry (C), and overflow (V) detection can be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.
Figure 4 is a block diagram of one embodiment of logic to perform a dot-product operation on packed data operands in accordance with the present invention. Embodiments of the present invention can be implemented to function with various types of operands such as those described above. For one implementation, dot-product operations in accordance with the present invention are implemented as a set of instructions to operate on specific data types.
For instance, a dot-product packed single-precision (DPPS) instruction is provided to determine the dot-product for 32-bit data types, including integer and floating point. Similarly, a dot-product packed double-precision (DPPD) instruction is provided to determine the dot-product for 64-bit data types, including integer and floating point. Although these instructions have different names, the general dot-product operation that they perform is similar. For simplicity, the discussions and examples below are in the context of a dot-product instruction to process data elements.
[0068] In one embodiment, the dot-product instruction identifies various information, including: an identifier of a first data operand DATA A 410 and an identifier of a second data operand DATA B 420, and an identifier for the RESULTANT 440 of the dot-product operation (which may be the same identifier as one of the first data operand identifiers in one embodiment). For the following discussions, DATA A, DATA B, and RESULTANT are generally referred to as operands or data blocks, but not restricted as such, and also include registers, register files, and memory locations. In one embodiment, each dot-product instruction (DPPS, DPPD) is decoded into one micro-operation. In an alternative embodiment, each instruction may be decoded into a varying number of micro-ops to perform the dot-product operation on the data operands. For this example, the operands 410, 420, are 128 bit wide pieces of information stored in a source register/memory having doubleword wide data elements. In one embodiment, the operands 410, 420, are held in 128 bit long SIMD registers, such as 128 bit SSEx XMM registers. For one embodiment, the RESULTANT 440 is also an XMM data register. Furthermore, RESULTANT 440 may also be the same register or memory location as one of the source operands. Depending on the particular implementation, the operands and registers can be other lengths such as 32, 64, and 256 bits, and have byte, word, or quadword sized data elements. Although the data elements of this example are doubleword size, the same concept can be extended to byte and word sized elements. In one embodiment, where the data operands are 64 bit wide, MMX registers are used in place of the XMM registers.
[0069] The first operand 410 in this example is comprised of a set of four data elements: A3, A2, A1, and A0. Each individual data element corresponds to a data element position in the resultant 440. The second operand 420 is comprised of another set of four data segments: B3, B2, B1, and B0. The data segments here are of equal length and each comprise a single doubleword (32 bits) of data. However, data elements and data element positions can possess granularities other than doublewords. If each data element was a byte (8 bits), word (16 bits), or a quadword (64 bits), the 128 bit operands would have sixteen byte wide, eight word wide, or two quadword wide data elements, respectively. Embodiments of the present invention are not restricted to particular length data operands or data segments, and can be sized appropriately for each implementation.
[0070] The operands 410, 420, can reside in a register, a memory location, a register file, or a mix thereof. The data operands 410, 420, are sent to the dot-product computation logic 430 of an execution unit in the processor along with a dot-product instruction.
By the time the dot-product instruction reaches the execution unit, the instruction should have been decoded earlier in the processor pipeline, in one embodiment. Thus the dot-product instruction can be in the form of a micro operation (uop) or some other decoded format. For one embodiment, the two data operands 410, 420, are received at dot-product computation logic 430. The dot-product computation logic 430 generates a first multiplication product of a data element of the first operand 410 and the data element in the corresponding data element position of the second operand 420, generates a second multiplication product of another such pair of corresponding data elements, and stores the sum of the first and second multiplication products into the appropriate position in the resultant 440, which may correspond to the same storage location as the first or second operand. In one embodiment, the data elements from the first and second operands are single precision (e.g., 32 bit), whereas in other embodiments, the data elements from the first and second operands are double precision (e.g., 64 bit).
For one embodiment, the data elements for all of the data positions are processed in parallel. In another embodiment, a certain portion of the data element positions can be processed together at a time. In one embodiment, the resultant 440 is comprised of two or four possible dot-product result positions, depending on whether DPPD or DPPS is performed, respectively: DOT-PRODUCTA31-0, DOT-PRODUCTA63-32, DOT-PRODUCTA95-64, DOT-PRODUCTA127-96 (for DPPS instruction results), and DOT-PRODUCTA63-0, DOT-PRODUCTA127-64 (for DPPD instruction results).
[0072] In one embodiment, the position of the dot-product result in resultant 440 depends upon a selection field associated with the dot-product instruction. For example, for DPPS instructions, the position of the dot-product result in the resultant 440 is DOT-PRODUCTA31-0 if the selection field is equal to a first value, DOT-PRODUCTA63-32 if the selection field is equal to a second value, DOT-PRODUCTA95-64 if the selection field is equal to a third value, and DOT-PRODUCTA127-96 if the selection field is equal to a fourth value. In the case of a DPPD instruction, the position of the dot-product result in resultant 440 is DOT-PRODUCTA63-0 if the selection field is a first value, and DOT-PRODUCTA127-64 if the selection field is a second value.
[0073] Figure 5a illustrates the operation of a dot-product instruction according to one embodiment of the present invention. Specifically, Figure 5a illustrates the operation of a DPPS instruction, according to one embodiment. In one embodiment, the dot-product operation of the example illustrated in Figure 5a may substantially be performed by the dot-product computation logic 430 of Fig. 4. In other embodiments, the dot-product operation of Figure 5a may be performed by other logic, including hardware, software, or some combination thereof.
In other embodiments, the operations illustrated in Figures 4, 5a, and 5b may be performed in any combination or order to produce the dot-product result. In one embodiment, Figure 5a illustrates a 128-bit source register 501a including storage locations to store up to four single precision floating point or integer values of 32 bits each, A0 - A3. Similarly illustrated in Figure 5a is a 128-bit destination register 505a including storage locations to store up to four single precision floating point or integer values of 32 bits each, B0-B3.
In one embodiment, each value, A0-A3, stored in the source register is multiplied by a corresponding value, B0-B3, stored in the corresponding position of the destination register, and each resultant value, A0*B0, A1*B1, A2*B2, A3*B3 (referred to herein as the "products"), is stored in a corresponding storage location of a first 128-bit temporary register ("TEMP1") 510a including storage locations to store up to four single precision floating point or integer values of 32 bits each.
In one embodiment, pairs of products are added together and each sum (referred to herein as "the intermediate sums") is stored into a storage location of a second 128-bit temporary register ("TEMP2") 515a and a third 128-bit temporary register ("TEMP3") 520a. In one embodiment the intermediate sums are stored into the least significant 32-bit element storage location of the second and third temporary registers. In other embodiments, they may be stored in other element storage locations of the second and third temporary registers. Furthermore, in some embodiments, the intermediate sums may be stored in the same register, such as either the second or third temporary register.
In one embodiment, the intermediate sums are added together (referred to herein as "the final sum") and stored into a storage element of a fourth 128-bit temporary register ("TEMP4") 525a. In one embodiment, the final sum is stored into a least-significant 32-bit storage element of TEMP4, whereas in other embodiments the final sum is stored into other storage elements of TEMP4. The final sum is then stored into a storage element of the destination register 505a. The exact storage element into which the final sum is to be stored may depend on variables configurable within the dot-product instruction. In one embodiment, an immediate field ("IMM8[x]") containing a number of bit storage locations may be used to determine the destination register storage element into which the final sum is to be stored. For example, in one embodiment, if the IMM8[0] field contains a first value (e.g., "1"), the final sum is stored into storage element B0 of the destination register, if the IMM8[1] field contains a first value (e.g., "1"), the final sum is stored into storage element B1, if the IMM8[2] field contains a first value (e.g., "1"), the final sum is stored into storage element B2 of the destination register, and if the IMM8[3] field contains a first value (e.g., "1"), the final sum is stored into storage element B3 of the destination register. In other embodiments, other immediate fields may be used to determine the storage element into which the final sum is stored in the destination register.
In one embodiment, immediate fields may be used to control whether each multiply and addition operation is performed in the operation illustrated in Figure 5a. For example, IMM8[4] may be used to indicate (by being set to a "0" or "1", for example) whether A0 is to be multiplied by B0 and the result stored into TEMP1. Similarly, IMM8[5] may be used to indicate (by being set to a "0" or "1", for example) whether A1 is to be multiplied by B1 and the result stored into TEMP1. Likewise, IMM8[6] may be used to indicate (by being set to a "0" or "1", for example) whether A2 is to be multiplied by B2 and the result stored into TEMP1.
Finally, IMM8[7] may be used to indicate (by being set to a "0" or "1", for example) whether A3 is to be multiplied by B3 and the result stored into TEMP1.
Figure 5b illustrates the operation of a DPPD instruction, according to one embodiment. One difference between the DPPS and DPPD instructions is that DPPD operates on double precision floating point and integer values (e.g., 64 bit values) instead of single precision values. Accordingly, there are fewer data elements to manage and therefore fewer intermediate operations and storage units (e.g., registers) involved in performing a DPPD instruction than a DPPS instruction, in one embodiment.
[0079] In one embodiment, Figure 5b illustrates a 128-bit source register 501b including storage elements to store up to two double precision floating point or integer values of 64 bits each, A0 - A1. Similarly illustrated in Figure 5b is a 128-bit destination register 505b including storage elements to store up to two double precision floating point or integer values of 64 bits each, B0-B1. In one embodiment, each value, A0-A1, stored in the source register is multiplied by a corresponding value, B0-B1, stored in the corresponding position of the destination register, and each resultant value, A0*B0, A1*B1 (referred to herein as the "products"), is stored in a corresponding storage element of a first 128-bit temporary register ("TEMP1") 510b including storage elements to store up to two double precision floating point or integer values of 64 bits each.
In one embodiment, pairs of products are added together and each sum (referred to herein as "the final sum") is stored into a storage element of a second 128-bit temporary register ("TEMP2") 515b. In one embodiment the products and final sum are stored into the least significant 64-bit element storage location of the first and second temporary registers, respectively. In other embodiments, they may be stored in other element storage locations of the first and second temporary registers.
[0081] In one embodiment, the final sum is stored into a storage element of the destination register 505b. The exact storage element into which the final sum is to be stored may depend on variables configurable within the dot-product instruction. In one embodiment, an immediate field ("IMM8[x]") containing a number of bit storage locations may be used to determine the destination register storage element into which the final sum is to be stored. For example, in one embodiment, if the IMM8[0] field contains a first value (e.g., "1"), the final sum is stored into storage element B0 of the destination register, and if the IMM8[1] field contains a first value (e.g., "1"), the final sum is stored into storage element B1. In other embodiments, other immediate fields may be used to determine the storage element into which the final sum is stored in the destination register.
In one embodiment, immediate fields may be used to control whether each multiply operation is performed in the dot-product operations illustrated in Figure 5b. For example, IMM8[4] may be used to indicate (by being set to a "0" or "1", for example) whether A0 is to be multiplied by B0 and the result stored into TEMP1. Similarly, IMM8[5] may be used to indicate (by being set to a "0" or "1", for example) whether A1 is to be multiplied by B1 and the result stored into TEMP1.
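The DPPS and DPPD dataflows of Figures 5a and 5b, together with the IMM8 control bits described above, can be summarized in a short behavioral C sketch. This is a model of the one-embodiment behavior described in the text, not the disclosed hardware; the function names and the flat-array representation of the 128-bit registers are assumptions made for illustration.

```c
#include <stdint.h>

/* Sketch of the DPPS dataflow of Figure 5a for one embodiment.
 * src and dst each model a 128-bit register as four 32-bit
 * single-precision elements (A0-A3 and B0-B3). IMM8[7:4] selects
 * which products A[i]*B[i] are formed (zero otherwise), and
 * IMM8[3:0] selects which destination elements receive the final
 * sum (zero otherwise). */
static void dpps_model(float dst[4], const float src[4], uint8_t imm8)
{
    float temp1[4]; /* the "products" of TEMP1 in Figure 5a */
    for (int i = 0; i < 4; i++)
        temp1[i] = (imm8 & (1u << (4 + i))) ? src[i] * dst[i] : 0.0f;

    /* intermediate sums (TEMP2, TEMP3) and final sum (TEMP4) */
    float temp2 = temp1[0] + temp1[1];
    float temp3 = temp1[2] + temp1[3];
    float temp4 = temp2 + temp3;

    for (int i = 0; i < 4; i++)
        dst[i] = (imm8 & (1u << i)) ? temp4 : 0.0f;
}

/* The DPPD dataflow of Figure 5b is analogous with two 64-bit
 * elements: IMM8[5:4] controls the two products and IMM8[1:0]
 * controls which destination elements receive the final sum. */
static void dppd_model(double dst[2], const double src[2], uint8_t imm8)
{
    double p0 = (imm8 & 0x10) ? src[0] * dst[0] : 0.0;
    double p1 = (imm8 & 0x20) ? src[1] * dst[1] : 0.0;
    double sum = p0 + p1;
    for (int i = 0; i < 2; i++)
        dst[i] = (imm8 & (1u << i)) ? sum : 0.0;
}
```

Calling dpps_model with imm8 = 0xF1, for instance, forms all four products and places the final sum in element B0, matching the case where IMM8[7:4] are all "1" and IMM8[0] is "1".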
In other embodiments, other control techniques for determining whether to perform the multiply operations of the dot-product may be used.
Figure 6A is a block diagram of a circuit 600a for performing a dot-product operation on single-precision integer or floating point values in accordance with one embodiment. The circuit 600a of this embodiment multiplies, via multipliers 610a-613a, corresponding single-precision elements of two registers 601a and 605a, the results of which may be selected by multiplexers 615a-618a using an immediate field, IMM8[7:4]. Alternatively, multiplexers 615a-618a may select a zero value instead of the corresponding product of the multiplication operation for each element. The results of the selection by multiplexers 615a-618a are then added together by adder 620a, and the result is stored in any of the elements of result register 630a, depending upon the value of immediate field, IMM8[3:0], which selects a corresponding sum result from adder 620a using multiplexers 625a-628a. In one embodiment, multiplexers 625a-628a may select zeros to fill an element of result register 630a if a sum result is not chosen to be stored in the result element. In other embodiments, more adders may be used to generate the sums of the various multiplication products. Furthermore, in some embodiments, intermediate storage elements may be used to store the product or sum results until they are further operated upon.
Figure 6B is a block diagram of a circuit 600b for performing a dot-product operation on double-precision integer or floating point values in accordance with one embodiment. The circuit 600b of this embodiment multiplies, via multipliers 610b, 612b, corresponding double-precision elements of two registers 601b and 605b, the results of which may be selected by multiplexers 615b, 617b using an immediate field, IMM8[7:4]. Alternatively, multiplexers 615b, 617b may select a zero value instead of the corresponding product of the multiplication operation for each element. The results of the selection by multiplexers 615b, 617b are then added together by adder 620b, and the result is stored in any of the elements of result register 630b, depending upon the value of immediate field, IMM8[3:0], which selects a corresponding sum result from adder 620b using multiplexers 625b, 627b. In one embodiment, multiplexers 625b, 627b may select zeros to fill an element of result register 630b if a sum result is not chosen to be stored in the result element. In other embodiments, more adders may be used to generate the sums of the various multiplication products. Furthermore, in some embodiments, intermediate storage elements may be used to store the product or sum results until they are further operated upon.
Figure 7A is a pseudo-code representation of operations to perform a DPPS instruction, according to one embodiment. The pseudo-code illustrated in Figure 7A indicates that a single-precision floating point or integer value stored in a source register ("SRC") in bits 31-0 is to be multiplied by a single-precision floating point or integer value stored in a destination register ("DEST") in bits 31-0 and the result stored in bits 31-0 of a temporary register ("TEMP1") only if an immediate value stored in an immediate field ("IMM8[4]") is equal to "1".
Otherwise, bit storage locations 31-0 may contain a null value, such as all zeros.
Also illustrated in Figure 7A is pseudo-code to indicate that a single-precision floating point or integer value stored in the SRC register in bits 63-32 is to be multiplied by a single-precision floating point or integer value stored in the DEST register in bits 63-32 and the result stored in bits 63-32 of the TEMP1 register only if an immediate value stored in an immediate field ("IMM8[5]") is equal to "1". Otherwise, bit storage locations 63-32 may contain a null value, such as all zeros.
[0087] Similarly illustrated in Figure 7A is pseudo-code to indicate that a single-precision floating point or integer value stored in the SRC register in bits 95-64 is to be multiplied by a single-precision floating point or integer value stored in the DEST register in bits 95-64 and the result stored in bits 95-64 of the TEMP1 register only if an immediate value stored in an immediate field ("IMM8[6]") is equal to "1". Otherwise, bit storage locations 95-64 may contain a null value, such as all zeros.
[0088] Finally, illustrated in Figure 7A is pseudo-code to indicate that a single-precision floating point or integer value stored in the SRC register in bits 127-96 is to be multiplied by a single-precision floating point or integer value stored in the DEST register in bits 127-96 and the result stored in bits 127-96 of the TEMP1 register only if an immediate value stored in an immediate field ("IMM8[7]") is equal to "1". Otherwise, bit storage locations 127-96 may contain a null value, such as all zeros.
[0089] Next, Figure 7A illustrates that bits 31-0 are added to bits 63-32 of TEMP1 and the result stored into bit storage 31-0 of a second temporary register ("TEMP2"). Similarly, bits 95-64 are added to bits 127-96 of TEMP1 and the result stored into bit storage 31-0 of a third temporary register ("TEMP3"). Finally, bits 31-0 of TEMP2 are added to bits 31-0 of TEMP3 and the result stored into bit storage 31-0 of a fourth temporary register ("TEMP4").
The data stored in the temporary registers may then be stored into the DEST register, in one embodiment. The particular location within the DEST register to store the data may depend upon other fields within the DPPS instruction, such as fields in IMM8[x]. Particularly, Figure 7A illustrates that, in one embodiment, bits 31-0 of TEMP4 are stored into DEST bit storage 31-0 if IMM8[0] is equal to "1", DEST bit storage 63-32 if IMM8[1] is equal to "1", DEST bit storage 95-64 if IMM8[2] is equal to "1", or DEST bit storage 127-96 if IMM8[3] is equal to "1". Otherwise, the corresponding DEST bit element will contain a null value, such as all zeros.
[0091] Figure 7B is a pseudo-code representation of operations to perform a DPPD instruction, according to one embodiment. The pseudo-code illustrated in Figure 7B indicates that a double-precision floating point or integer value stored in a source register ("SRC") in bits 63-0 is to be multiplied by a double-precision floating point or integer value stored in a destination register ("DEST") in bits 63-0 and the result stored in bits 63-0 of a temporary register ("TEMP1") only if an immediate value stored in an immediate field ("IMM8[4]") is equal to "1".
Otherwise, bit storage locations 63-0 may contain a null value, such as all zeros.
Also illustrated in Figure 7B is pseudo-code to indicate that a double-precision floating point or integer value stored in the SRC register in bits 127-64 is to be multiplied by a double-precision floating point or integer value stored in the DEST register in bits 127-64 and the result stored in bits 127-64 of the TEMP1 register only if an immediate value stored in an immediate field ("IMM8[5]") is equal to "1". Otherwise, bit storage locations 127-64 may contain a null value, such as all zeros.
[0093] Next, Figure 7B illustrates that bits 63-0 are added to bits 127-64 of TEMP1 and the result stored into bit storage 63-0 of a second temporary register ("TEMP2"). The data stored in the temporary register may then be stored into the DEST register, in one embodiment. The particular location within the DEST register to store the data may depend upon other fields within the DPPD instruction, such as fields in IMM8[x]. Particularly, Figure 7B illustrates that, in one embodiment, bits 63-0 of TEMP2 are stored into DEST bit storage 63-0 if IMM8[0] is equal to "1", or bits 63-0 of TEMP2 are stored in DEST bit storage 127-64 if IMM8[1] is equal to "1". Otherwise, the corresponding DEST bit element will contain a null value, such as all zeros.
[0094] The operations disclosed in Figures 7A and 7B are merely one representation of operations that may be used in one or more embodiments of the invention. Specifically, the pseudo-code illustrated in Figures 7A and 7B corresponds to operations performed according to one or more processor architectures having 128 bit registers. Other embodiments may be performed in processor architectures having any size of registers, or other type of storage area. Furthermore, other embodiments may not use the registers exactly as illustrated in Figures 7A and 7B. For example, in some embodiments, a different number of temporary registers, or none at all, may be used to store operands. Lastly, embodiments of the invention may be performed among numerous processors or processing cores using any number of registers or datatypes.
[0095] Thus, techniques for performing a dot-product operation are disclosed. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.
Some embodiments of the invention enable debugging functionality for memory devices residing on a memory module that are buffered from the memory bus by a buffer chip. Some embodiments map connector signals from a tester coupled to the high speed interface between the buffer chip and the memory bus to an interface between the buffer chip and the memory devices. During test mode, some embodiments bypass the normal operational circuitry of the buffer chip and provide a direct connection to the memory devices. Other embodiments use the existing architecture of the buffer chip to convert high speed pins into low speed pins and map them to pins that are connected to the memory devices. Other embodiments are described in the claims. |
What is claimed is:
1. A method comprising:
coupling a tester to a buffer chip that is located on a DIMM using a first set of pins on the buffer chip, the first set of pins structured to carry address, command, and data signals during a normal operation of the DIMM;
coupling a plurality of DRAM modules that are located on the DIMM to a second set of pins on the buffer chip; and
testing the plurality of DRAM modules.
2. The method of claim 1, wherein coupling the tester to the buffer chip that is located on the DIMM using the first set of pins comprises coupling the tester to a first set of pins that carries differential signals during a non-test operation mode of the buffer chip.
3. The method of claim 2, wherein testing the plurality of DRAM modules comprises:
bypassing the buffer chip.
4. The method of claim 3, wherein bypassing the buffer chip comprises:
connecting each of the first set of pins directly to a corresponding one of the second set of pins with a switching circuit.
5. The method of claim 4, wherein the switching circuit comprises one selected from the group consisting of a passgate circuit and an inverter circuit.
6. The method of claim 5, wherein bypassing the buffer chip further comprises:
disabling an internal clock signal generated by a phase-locked loop in the buffer chip.
7. The method of claim 2, wherein testing the plurality of DRAM modules comprises:
mapping the first set of pins to corresponding ones of the second set of pins using a circuit of the buffer chip that is used during the non-test operation mode.
8. The method of claim 7, wherein mapping the first set of pins to corresponding ones of the second set of pins comprises:
passing command, address, and data signals to the plurality of DRAM modules after introducing an internal delay in the buffer chip.
9. The method of claim 8, wherein introducing the internal delay in the buffer chip comprises introducing an internal delay of one DRAM clock cycle.
10. The method of claim 9, wherein introducing the internal delay of one DRAM clock cycle comprises:
clocking a first half of a data word from the tester to a DRAM module on a rising edge of a clock signal; and
clocking a second half of the data word to the DRAM module on a falling edge of the clock signal.
11. The method of claim 8, wherein introducing the internal delay in the buffer chip comprises introducing an internal delay of two DRAM clock cycles.
12. A memory device comprising:
a plurality of DRAM modules;
an edge connector, wherein the edge connector is configured to accommodate a DRAM tester; and
a buffer chip that includes a first set of pins coupled to the plurality of DRAM modules, a second set of pins that are coupled to the edge connector, and a switching circuit configured to couple one of the first set of pins directly to one of the second set of pins, thereby bypassing the other circuits in the buffer chip.
13. The device of claim 12, the switching circuit chosen from the group consisting of a passgate circuit and an inverter circuit.
14. The device of claim 12, the buffer chip further comprising:
a phase-locked loop circuit; and
a multiplexer configured to select from among at least two clock signals, wherein a first one of the at least two clock signals is an output of the phase-locked loop circuit.
15. The device of claim 14, a second one of the at least two clock signals comprising an input of the phase-locked loop circuit.
16. The device of claim 14, the second one of the at least two clock signals comprising an output of a logic circuit, wherein inputs of the logic circuit comprise an input of the phase-locked loop circuit and at least one additional clock input from the DRAM tester.
17. The device of claim 16, the logic circuit comprising XOR logic gates.
18. A system comprising:
a host that includes a processor;
a memory bus; and
a plurality of memory devices, the host and the plurality of memory devices connected to the memory bus in a point-to-point manner, each memory device having a plurality of DRAM devices and a buffer chip with a first interface between the buffer chip and the memory bus and a second interface between the buffer chip and the plurality of DRAM devices, the buffer chip configured to connect a first interface pin to a second interface pin during a test mode of operation.
19. The system of claim 18, the buffer chip comprising:
a switching circuit configured to directly connect the first interface pin to the second interface pin.
20. The system of claim 19, the switching circuit comprising:
a switching circuit selected from the group consisting of a passgate circuit and an inverter circuit.
21. The system of claim 18, the buffer chip comprising:
a phase locked loop circuit; and
a switch circuit configured to select one from the group consisting of an external reference clock and an output of the phase locked loop circuit.
22. A machine-readable medium that, when read, causes a machine to perform processes comprising:
establishing a signal path between an edge connector of a DIMM and a DRAM module located on the DIMM, wherein the signal path lies through a buffer chip, by connecting a first pin on the buffer chip and a second pin on the buffer chip, wherein the first pin and the second pin are normally configured to transfer data at different speeds.
23. The machine-readable medium of claim 22, wherein connecting the first pin on the buffer chip and the second pin on the buffer chip comprises:
operating a switch circuit that directly connects the first pin to the second pin.
24. The machine-readable medium of claim 23, wherein the switch circuit is one chosen from the group consisting of a passgate circuit and an inverter circuit.
25. The machine-readable medium of claim 22, wherein connecting the first pin on the buffer chip and the second pin on the buffer chip comprises:
inserting a delay that is equal to at least one DRAM clock cycle between the first pin and the second pin using a circuit on the buffer chip.
BACKGROUND OF THE INVENTION
1. Technical Field of the Invention
This disclosure relates generally to memory systems, components, and methods and more particularly to a method and apparatus for providing debug functionality in a fully buffered memory channel that has no direct connection between an edge connector on a DIMM and the dynamic random access memory (DRAM) devices that reside on the DIMM.
2. Description of the Related Art
FIG. 1 is a block diagram illustrating a conventional memory channel 100 that exhibits a "stub bus" topology. The memory channel includes a host 110 and four DIMMs 120-150. Each of the DIMMs 120-150 is connected to the memory bus 115 to exchange data with the host 110. Each of the DIMMs 120-150 adds a short electrical stub to the memory bus 115. For approximately the past 15 years, memory subsystems have relied on this type of stub bus topology.
Simulations have shown that for applications of 2 to 4 DIMMs per memory channel, the stub bus technology reaches a maximum bandwidth of 533-667 MT/s (mega-transactions/second), or 4.2-5.3 GB/s (gigabytes/second) for an eight byte wide DIMM. Achieving the next significant level, 800 mega-transactions/second (MT/s) and beyond, will be difficult if not impossible with the stub bus topology.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating a conventional memory channel using a "stub bus" topology.
FIG. 2 is a block diagram illustrating a memory channel with a "point-to-point" topology.
FIG. 3 is a block diagram that illustrates a data bypass circuit according to some embodiments of the invention.
FIG. 4 is a block diagram that illustrates a PLL bypass circuit according to some embodiments of the invention.
FIG. 5 is a block diagram illustrating a buffer chip of FIG. 2.
FIG. 6 is a timing diagram illustrating an example of timing for a DRAM activate, read, and write sequence according to other embodiments of the invention.
DETAILED DESCRIPTION OF THE INVENTION
In order to meet memory bandwidth requirements above 4.2-5.3 GB/s per memory channel, "point-to-point" (P2P) signaling technology has been developed. FIG. 2 is a block diagram illustrating a memory channel 200 with a P2P topology. The P2P memory channel 200 includes four DIMMs 220, 230, 240, and 250. Each of the DIMMs has eight DRAMs 260. Other P2P memory channels may have more or fewer DIMMs, but they will nonetheless still be arranged in the manner illustrated in FIG. 2.
The host 210 and DIMMs 220-250 are connected to a memory bus 215, where 215a represents the inbound data stream (to the host) and 215b represents the outbound data stream (from the host). In this case, the inbound data path to the DIMM 250 and the outbound data path from the DIMM 250 are not used, since DIMM 250 is the last in the chain.
The host 210 can include one or more microprocessors, signal processors, memory controllers, graphics processors, etc. Typically, a memory controller coordinates access to system memory, and the memory controller will be the component of host 210 connected directly to the inbound and outbound data paths 215a and 215b.
In the P2P configuration, each DIMM has a buffer chip 270. The buffer chips 270 capture signals from the inbound data stream 215a or outbound data stream 215b and re-transmit the signals to the next buffer chip 270 on a neighboring DIMM in a daisy-chain fashion.
In the case of the buffer chip 270 belonging to the DIMM 220, data is also received from and transmitted to the host 210.
The inbound and outbound data streams 215a, 215b are composed of a number of high-speed signals (not shown), where each high-speed signal is implemented by a differential pair. These point-to-point links allow high speed, simultaneous data communication in both directions.
Each buffer chip 270 also has a Phase-Locked Loop, or PLL (not shown). During normal operation, the buffer chip uses a clock output from the PLL. The clock output of the PLL is derived from a reference clock signal (not shown) that is supplied to the buffer chip 270.
In addition to the narrow, high-speed interface on the host side of the buffer chips 270 that was described above, there is also an interface (not shown) between the buffer chips 270 and the DRAM devices 260. In normal operation the signaling on the host side of the buffer chip 270 operates at a higher frequency and uses a different protocol than the DRAM side of the buffer chip 270.
During normal operation in the buffered P2P topology, signals transmitted by the host 210 travel on the outbound data stream 215b to the buffer chip 270 of DIMM 220. Some of the signals are destined for other DIMMs, and in that case they are retransmitted along the outbound data path 215b to DIMM 230, DIMM 240, DIMM 250, etc. Signals that are destined for DRAM devices 260 located on the DIMM 220 are sent to the appropriate DRAM device using the interface between the buffer chip 270 and the DRAM devices 260. A similar action is performed for signals destined for DRAM devices 260 that are located on DIMMs 230-250.
Signals originating from the DRAM devices 260 follow the reverse path. That is, the DRAM devices 260 transmit signals to the corresponding buffer chip 270. The buffer chip 270 then merges these signals with others that are returning to the host 210 along the inbound data path 215a.
In conventional memory channels, testers connected to the edge connectors of DIMMs have a direct link to the DRAM devices that reside on each of the DIMMs. On the other hand, in memory channels with a P2P topology, the presence of the buffer chip 270 eliminates this direct connection from the high-speed interface to the DRAM devices 260. Consequently, the fact that the buffered P2P memory channel 200 does not have a direct path to the DRAM devices 260 from the high-speed interface due to the intervening buffer chips 270 becomes an issue where debugging is concerned.
Embodiments of the invention provide an apparatus and method for enabling debug functionality for memory devices in a buffered P2P memory channel. The general approach of some embodiments is to map connector signals from a tester that is coupled to the high-speed interface at the edge connector of a DIMM to the other side of the buffer chip 270, where the interface between the DRAMs and the buffer chip is located. Some embodiments accomplish this by bypassing the normal operating circuitry of the buffer chip 270 to provide a direct connection between the high-speed pins and the low-speed pins. In other embodiments, the general approach is to use the existing circuitry of the buffer chip 270 to connect the edge connector of the DIMM to the DRAM signals.
FIG. 3 is a block diagram illustrating a data bypass circuit 30 according to some embodiments of the invention. The data bypass circuit 30 resides on the buffer chip 270 of FIG. 2.
In these embodiments, passgates 300 and 310 are activated when the DataBypass signal is asserted, directly connecting the pins 305 of a differential pair on the high-speed interface to the pins 325 of the DRAM interface.
The I/O transceivers 320 are the normal input/output buffers that the buffer chip 270 uses during normal operation. These I/O transceivers 320 and other circuitry of the buffer chip 270 (not shown) are bypassed when the data bypass circuit 30 is activated.
Other data bypass circuits 30 according to alternative embodiments could be implemented with inverters. While inverters would have lower capacitive loading on the inputs and better drive capabilities than the passgate implementation shown in FIG. 3, this approach would require some additional direction control multiplexing for bi-directional signals.
FIG. 4 is a block diagram that illustrates a PLL bypass circuit 40 according to some embodiments of the invention. The PLL 410 is a part of the buffer chip 270. As explained above, when the buffer chip 270 is in normal operation, the PLL 410 produces a clock signal from an external reference clock signal REF CLK. This clock signal is subsequently supplied to other components on the buffer chip 270.
However, when the data bypass circuit 30 of FIG. 3 is activated, the regular clock output of PLL 410 is not desired. As shown in FIG. 4, an XOR circuit 420 with multiple clock inputs CLKXOR1, CLKXOR2, REF CLK is selected by MUX 430 when the Bypass Mode signal is asserted. The clock inputs CLKXOR1 and CLKXOR2 are supplied to the pins by a tester that is connected to the DIMM by the edge connector. The use of multiple clock inputs CLKXOR1, CLKXOR2, REF CLK reduces the frequency that is otherwise required of a single reference clock input. The multiple clock inputs can be combined to generate a higher frequency internal clock that is used by the buffer chip 270.
The XOR circuit 420 uses Exclusive-OR logic gates (not shown) to generate the internal clock signal. These logic gates are well-known and thus will not be described in greater detail. It is also anticipated that other combinations and types of logic gates besides XOR gates could be used to perform substantially the same function as the XOR circuit 420.
In alternative embodiments, a MUX could be arranged in the PLL bypass circuit 40 to select between the clock output of the PLL and the externally supplied clock signal REF CLK. In this configuration the PLL 410 is disabled and the reference clock is used directly in the buffer chip. The same result could be accomplished using the PLL bypass circuit 40 of FIG. 4 with the clock inputs CLKXOR1 and CLKXOR2 maintained at a constant level.
The data bypass circuit 30 illustrated in FIG. 3 and the PLL bypass circuit 40 illustrated in FIG. 4 may be used concurrently to provide a direct connection between the high-speed pins and the DRAM devices, and also to disable the clock output of the PLL. Both the DataBypass signal of FIG. 3 and the Bypass Mode signal of FIG. 4 may be implemented either by writing to a register, by enabling a direct connect pin, or through use of the System Maintenance (SM) bus (not shown).
FIG. 5 is a block diagram illustrating a buffer chip 270 of FIG. 2. Reference to FIG. 5 will aid in the explanation of other embodiments of the invention, in particular, those embodiments that use the normal operating circuitry of the buffer chip 270 to provide a connection to the DRAM devices 260.
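Before turning to the details of FIG. 5, the clock combining of FIG. 4 can be illustrated with a small behavioral model in C. The sketch below assumes ideal, glitch-free square waves, and the particular phase relationship between the inputs is chosen only for the example; it is not a gate-level description of the XOR circuit 420.

```c
#include <stdio.h>

/* Behavioral sketch of the XOR combining of FIG. 4: in bypass mode
 * the internal clock is REF CLK XOR CLKXOR1 XOR CLKXOR2, so the
 * output toggles on every edge of any input. Two equal-frequency
 * clocks 90 degrees out of phase therefore combine into a clock of
 * twice the frequency, letting the tester drive slower pins than
 * the internal clock rate would otherwise require. */
static int internal_clk(int ref_clk, int clkxor1, int clkxor2)
{
    return ref_clk ^ clkxor1 ^ clkxor2;
}

int main(void)
{
    /* Quarter-period time steps: REF CLK has period 4 steps,
     * CLKXOR1 is the same frequency shifted by one step, and
     * CLKXOR2 is held constant low (a constant input passes the
     * other clocks through unchanged). */
    int ref[8]  = { 0, 0, 1, 1, 0, 0, 1, 1 };
    int xor1[8] = { 0, 1, 1, 0, 0, 1, 1, 0 };

    for (int t = 0; t < 8; t++)
        printf("t=%d internal=%d\n", t, internal_clk(ref[t], xor1[t], 0));
    /* The output toggles every step: period 2, i.e. twice REF CLK. */
    return 0;
}
```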
Referring to FIG. 5, the signals Outbound Data In and Outbound Data Out indicate where the outbound data path 215b of FIG. 2 travels through the buffer chip 270. The "10*2" notation indicates that this data path is composed of 10 differential signals. Similarly, Inbound Data Out and Inbound Data In represent the inbound data path 215a of FIG. 2, which is composed of 14 differential signals. The buffer chip 270 also has one differential input signal REF CLK, which is used as the external clock input.
The REF CLK signal is used as clock input for the registers DRAM Clock, Cmd Out, and Data Out. These three registers provide inputs for the DRAM devices 260 of FIG. 2. In normal operation of the buffer chip 270, address signals, command signals, and data signals are demultiplexed and decoded from the signal Outbound Data In and sent to either the Cmd Out or Data Out register. The DRAM Clock register provides a total of eight clock signals to the DRAM devices with CK and CK#. The Cmd Out register provides 29 address and command signals ADR/CMD, and the Data Out register provides 72 DQ signals to the DRAM along with 18 differential DQS signals. Data sent to the buffer chip 270 from the DRAMs is received at the Data In register, after which it is serialized and merged with the Inbound Data In signal to form the Inbound Data Out signal.
Of course, the buffer chip 270 illustrated in FIG. 5 is only one possible example of a buffer chip that may be used in a P2P memory channel. Other embodiments of the invention may use buffer chips that have more or fewer input and output signals than the buffer chip 270. Furthermore, each DIMM may have multiple buffer chips that jointly share the burden of distributing signals to the DRAM devices located on the DIMM. Thus, still other embodiments of the invention may use multiple buffer chips to map edge connector signals to the DRAM devices.
According to other embodiments of the invention, the general approach is to use the normal operating circuitry of the buffer chip 270 to convert high speed pins into low speed pins and map them to pins of the DRAMs 260. Thus, a conventional tester (not shown) at the edge connector of the DIMM is connected to pins on the buffer chip that in normal operation would carry high-speed differential signals. For example, a typical speed for the high-speed differential signals is 4.8 GHz. On the other hand, conventional devices used to test DRAM devices on DIMMs operate at speeds on the order of 200 MHz.
Throughout the remainder of the disclosure, the operation of the buffer chip 270 while the tester is connected to it via the DIMM edge connector will be referred to as "test mode."
While in test mode, the REF CLK input pins continue to be used, but are instead driven by the tester. This allows the use of most of the existing on-chip clock distribution network for the buffer chip 270. The reference clock serves as input for the PLL circuit 510.
Furthermore, input signals from the tester are connected to a number of the pins from Outbound Data In and Inbound Data In that would otherwise carry high speed differential signals during normal operation. Outbound Data In provides 20 (10*2) input signal paths for the tester to access the buffer chip 270 and Inbound Data In provides 28 (14*2) input signal paths. Thus, there are up to 48 input connections that can be utilized by the tester.
Similarly, Inbound Data Out may provide up to 28 (14*2) output connections for the tester.
Some of these output connections are configured as Pass/Fail outputs during the operation of the buffer chip 270 in test mode.
During test mode, command, address, and data signals are passed to the DRAM after introducing some internal delay in the buffer chip 270. The simplest way to accomplish this is to delay all inputs by one DRAM clock cycle, where a DRAM clock cycle is the period between two rising edges of the DRAM clock CK.
For example, data from the tester is 16 bits wide at a single data rate (SDR) of 200 MHz. On the way to the DRAM, the SDR is doubled to arrive at a double data rate (DDR), and the width is halved by clocking out 8 bits of data on the rising edge of the clock and the remaining 8 bits on the falling edge of the clock.
In these embodiments, DDR transactions between the buffer chip 270 and the DRAMs are burst oriented, reading or writing 4 words of data across 4 clock edges. Normally input data from the tester is replicated 9 times across the memory data bus, converting 8 bits of DDR input data to 72 bits of DDR data. A complete burst operation therefore transfers 8 bits of data across each of 4 clock edges, or 32 bits of data. On the tester side of the buffer chip 270, the same 32 bits of data are transferred, but 16 bits at a time on the rising edges of two DRAM cycles.
Alternative embodiments of the invention may use a burst transaction that reads or writes 8 words of data across 8 clock edges. Alternative embodiments of the invention may also introduce an internal delay of more than one DRAM clock cycle, for example, two DRAM clock cycles.
In test mode, the tester drives data to be written to the DRAM on a write pass and data to be compared on a read pass. The actual DRAM data and the expected data from the tester are compared in the buffer chip 270. If the actual DRAM data and the expected data differ, Pass/Fail outputs allocated from Inbound Data Out will indicate which DRAM failed. Alternative embodiments of the invention may simply pass actual DRAM data to the tester, which then performs the comparison between the actual data and the expected data.
FIG. 6 is a timing diagram illustrating a DRAM activate, read, and write sequence during test mode according to other embodiments of the invention. In FIG. 6, the signals REF CLK, CK, CK#, ADR/CMD, DQS, and DQ are the same signals as those shown in FIG. 5. Additionally, signals to and from the tester are represented by Tester ADR/CMD, TesterDataIn, and TesterDataOut. In this example, the tester drives REF CLK at 100 MHz. REF CLK is then converted by the internal PLL 510 (see FIG. 5) into the outgoing signals CK and CK# at 200 MHz.
As explained above, address and command pins are connected to the tester via the high speed differential inputs. TesterDataIn is connected to a 16 bit interface.
The timing diagram of FIG. 6 illustrates the case where an internal delay of two DRAM clock cycles is imparted by the buffer chip 270. This delay is illustrated between the TesterDataIn signal at the high speed interface and the DQ signal at the DRAM interface. The "NOP" notation for these signals indicates time periods where no operation is occurring.
Having described and illustrated the principles of the invention in several exemplary embodiments, it should be apparent that the invention can be modified in arrangement and detail without departing from such principles. We claim all modifications and variations coming within the spirit and scope of the following claims.
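As an illustrative recap of the test-mode data conversion described above, the following C sketch models how one 16-bit SDR tester word becomes two 8-bit DDR beats, each replicated nine times across the 72-bit DQ bus, and how a per-DRAM Pass/Fail indication could be derived on a read pass. The function names, byte ordering, and assignment of halves to clock edges are assumptions made for the example, not details taken from the disclosure.

```c
#include <stdint.h>

/* Model of the test-mode width/rate conversion: a 16-bit SDR word
 * from the tester is split into two 8-bit DDR beats (one per clock
 * edge), and each beat is replicated nine times to fill the 72-bit
 * DQ bus (nine 8-bit lanes, one per DRAM). */
static void tester_word_to_ddr(uint16_t tester_word,
                               uint8_t rising_beat[9],
                               uint8_t falling_beat[9])
{
    uint8_t hi = (uint8_t)(tester_word >> 8);   /* rising-edge half  */
    uint8_t lo = (uint8_t)(tester_word & 0xFF); /* falling-edge half */

    for (int i = 0; i < 9; i++) {   /* 9 x 8 = 72 DQ signals */
        rising_beat[i]  = hi;
        falling_beat[i] = lo;
    }
}

/* On a read pass, each byte lane of the actual DRAM data is
 * compared against the expected beat; a set bit in the returned
 * mask flags the corresponding DRAM as failing, analogous to the
 * Pass/Fail outputs allocated from Inbound Data Out. */
static uint16_t compare_beat(const uint8_t actual[9], uint8_t expected)
{
    uint16_t fail_mask = 0;
    for (int i = 0; i < 9; i++)
        if (actual[i] != expected)
            fail_mask |= (uint16_t)(1u << i);
    return fail_mask;
}
```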
In one embodiment, the present invention includes a method for receiving incoming data in a processor and performing a checksum operation on the incoming data in the processor pursuant to a user-level instruction for the checksum operation. For example, a cyclic redundancy checksum may be computed in the processor itself responsive to the user-level instruction. Other embodiments are described and claimed. |
What is claimed is: 1. A method comprising: receiving incoming data in a processor; and performing a checksum operation on the incoming data in the processor responsive to a user-level instruction for the checksum operation. 2. The method of claim 1, further comprising performing the checksum operation in a pipeline of the processor, wherein the processor comprises a general-purpose processor, and wherein the checksum operation comprises a cyclic redundancy check (CRC) operation. 3. The method of claim 1, further comprising performing the checksum operation via a hardware engine of the processor, wherein the processor comprises a general-purpose processor. 4. The method of claim 3, further comprising performing a polynomial division operation in the hardware engine responsive to the user-level instruction. 5. The method of claim 3, wherein the hardware engine comprises an exclusive-OR (XOR) tree coupled to a source register and a destination register. 6. The method of claim 5, further comprising: inputting the incoming data from the source register and a current value stored in at least a portion of the destination register into the XOR tree; performing the checksum operation in the XOR tree using the incoming data and the current value; and storing an output of the XOR tree in the destination register. 7. The method of claim 6, wherein the output of the XOR tree corresponds to a running remainder of the checksum operation. 8. The method of claim 7, further comprising using the running remainder as a checksum when a buffer that provides the incoming data to the source register is empty. 9. The method of claim 1, further comprising: loading the incoming data into a source register of the processor; reflecting the incoming data; and performing at least one exclusive-OR (XOR) operation with the reflected incoming data and reflected data from a destination register, and storing a result of the at least one XOR operation in the destination register in a reflected order. 10. The method of claim 1, further comprising performing the checksum operation in a logic block of the processor using the incoming data and a remainder value and without lookup table information. 11. An apparatus comprising: a first register to store source data; a second register to store result data; and an execution unit coupled to the first register and the second register to perform a cyclic redundancy check (CRC) operation with the source data and the result data and to provide at least a portion of an output of the execution unit corresponding to a running remainder of the CRC operation to the second register. 12. The apparatus of claim 11, wherein the execution unit is to perform the CRC operation responsive to a user-level instruction. 13. The apparatus of claim 11, wherein the execution unit comprises an exclusive-OR (XOR) tree logic of a general-purpose processor pipeline. 14. The apparatus of claim 13, wherein the XOR tree logic is to perform polynomial division according to a fixed polynomial. 15. The apparatus of claim 11, wherein the execution unit comprises an integer unit of a processor pipeline, the integer unit comprising a plurality of separate logic blocks each to perform the CRC operation on data of a different size. 16. The apparatus of claim 15, wherein a user-level instruction is to indicate the size of the data on which to perform the CRC operation. 17. 
An article comprising a machine-readable storage medium including instructions that if executed by a machine enable the machine to perform a method comprising: accumulating a cyclic redundancy check (CRC) value in a dedicated execution unit of a pipeline of a processor from a source operand of a first register and a destination operand of a second register; storing the accumulated CRC value in the second register; and determining if additional data is to be subjected to the CRC. 18. The article of claim 17, wherein the method further comprises incrementally accumulating the CRC value and storing the incrementally accumulated CRC value in the second register until no additional data is to be subjected to the CRC. 19. The article of claim 17, wherein the method further comprises accumulating the CRC value responsive to an instruction of an instruction set architecture of the processor for the CRC. 20. The article of claim 19, wherein the method further comprises accumulating the CRC value in one of a plurality of portions of the dedicated execution unit based on a size of the source operand, wherein the instruction is to indicate the size of the source operand. 21. A system comprising: a processor including first and second execution units to perform operations responsive to instructions of an instruction set architecture (ISA) for the processor, wherein the first execution unit includes a hardware engine to perform cyclic redundancy check (CRC) operations, the processor further including a first register to provide a source operand to the hardware engine and a second register to provide a destination operand to the hardware engine; and a dynamic random access memory (DRAM) coupled to the processor. 22. The system of claim 21, wherein the first execution unit comprises an integer unit and the second execution unit comprises a floating point unit. 23. The system of claim 21, wherein the processor includes a buffer to provide data to the first register. 24. The system of claim 23, wherein the hardware engine is to perform a CRC operation on the data until the buffer is empty responsive to one or more instructions of the ISA for the CRC operation. 25. The system of claim 24, wherein the hardware engine is to provide a running remainder of the CRC operation to the second register. 26. The system of claim 21, wherein the hardware engine includes a plurality of logic blocks each to perform a CRC operation on data of a different size. 27. The system of claim 26, wherein the hardware engine is to provide data to one of the plurality of logic blocks corresponding to a given data size to perform the CRC operation responsive to an instruction of the ISA for the CRC operation of the given data size. |
PERFORMING A CYCLIC REDUNDANCY CHECKSUM OPERATION RESPONSIVE TO A USER-LEVEL INSTRUCTIONBackground[0001] Embodiments of the present invention relate to data processing, and more particularly to determining checksums such as cyclic redundancy checks (CRCs).[0002] In data processing systems, it is desirable that data transmitted between a first location and a second location is received accurately, so that additional processing performed on that data at the second location also can be accurate. Further, to enable detection of errors in data transmission, oftentimes a data packet will be transmitted with a checksum attached. For example, a CRC sum can be generated by a transmitting source and appended to data to be transmitted. This checksum, which may be calculated according to one of many different algorithms, can then be compared to a similar checksum generated at the receiving end from the received data. If the two checksums are identical, the transmitted data is correct. If, however, the generated checksum varies from the transmitted checksum, an error is indicated. Such checksums are used throughout networking technologies to detect transmission errors.[0003] In different applications, different manners of implementing CRC information exist. For example, CRC calculations can be performed in either hardware or software. To implement a CRC calculation in hardware, typically a dedicated hardware engine is provided within a system to perform the CRC calculation. Accordingly, data to be subjected to such a CRC calculation is sent to the hardware engine for calculation of the CRC, which is then appended to the data, e.g., for transmission from the system. Various drawbacks exist to using such an offload engine, including the overhead of sending data to the engine. Furthermore, it is difficult to perform a stateless hardware offload. That is, typically additional state-based overhead data also needs to be transmitted, increasing complexity and slowing the progress of useful work.[0004] Because many systems lack such an offload engine, CRC calculations are often performed in software. To implement CRC calculations in software, typically lookup table schemes are used. However, such software calculations of CRC values are notoriously slow, compute-intensive operations. Further, the memory footprint of the lookup table can be large, impacting performance. Accordingly, these slow calculations can degrade network performance, and further consume processing resources. As an example, it can take between 5 and 15 cycles to perform a CRC calculation per byte of data. As a result, software CRC performance is too low for general use in high-speed networks.Brief Description of the Drawings[0005] FIG. 1 is a flow diagram of a method in accordance with one embodiment of the present invention.[0006] FIG. 2 is a block diagram of a processor in accordance with one embodiment of the present invention.[0007] FIG. 3 is a block diagram of a portion of a processor to perform a checksum operation in accordance with an embodiment of the present invention.[0008] FIG. 4 is a block diagram of another portion of a processor in accordance with an embodiment of the present invention.[0009] FIG. 5 is a block diagram of a system in accordance with an embodiment of the present invention.Detailed Description[0010] In various embodiments, checksum operations may be effected using an instruction set architecture (ISA) extension to compute checksum values. 
More specifically, a user-level instruction may be provided within an ISA to enable a programmer to directly perform a desired checksum operation such as a CRC operation in a general-purpose processor (e.g., a central processor unit (CPU)) via the instruction. The CRC operation may be a 32-bit CRC operation (i.e., a CRC32 operation generating a 32-bit running remainder, discussed further below), and in different embodiments may, for example, correspond to the CRC used in an Institute of Electrical and Electronics Engineers (IEEE) 802.3 Ethernet protocol (published 2002) or other protocols. [0011] In different implementations, various opcode instructions may be provided to perform CRC computations on different groupings of data. For example, in some embodiments CRC computations may be supported on groups of 8, 16, 32 and 64 bits using different opcodes, although the scope of the present invention is not so limited. In this way, CRC calculations may be rapidly performed in hardware without the need for lookup tables or the like. Furthermore, the computations may be performed using generic, architecturally visible processor registers via integer operations performed according to the different opcodes. As a result, CRCs may be computed in a processor without the need for the overhead and complexity of offload hardware, such as network offload hardware. Accordingly, greater numbers of data transmissions (e.g., in terms of input/outputs (I/Os) per second) can occur. Note that while described primarily herein in connection with CRC operations, embodiments of the present invention may be used to perform other checksum operations.[0012] Referring now to FIG. 1, shown is a flow diagram of a method in accordance with one embodiment of the present invention. Method 100 may be used to obtain a checksum using a user-level instruction implemented on processor hardware, e.g., an execution unit of a CPU. As shown in FIG. 1, method 100 may begin by performing a series of exclusive-OR (XOR) operations on data in source and destination registers (block 110). Note that the XOR operations may correspond to a polynomial arithmetic operation and more particularly to a polynomial division operation. The data in the source register may correspond, e.g., to data present in a processor pipeline that has been received by the processor or is to be transmitted therefrom. As an example, a group of data in a buffer corresponding to a desired group size (e.g., 16 bit, 32 bit or the like) may be provided to the source register, which may be a general-purpose register of the processor. Alternately, the source data may be obtained from a memory, in some embodiments. The destination register may correspond to a storage location for a running remainder obtained from the XOR operations. The destination register also may be a general-purpose register of the processor. [0013] In various embodiments, the XOR operations may be performed in dedicated hardware within a processor pipeline. For example, an execution unit of a processor, e.g., an integer execution unit may be extended with circuitry to implement a series of XOR operations. For example, this circuitry may correspond to a XOR tree to handle polynomial division by a desired polynomial. In various embodiments, a polynomial for use in the XOR operations may be hard-wired into the logic gates of the XOR tree. Furthermore, the XOR tree may be configured to implement desired pre-processing and post-processing via the XOR operations, e.g., bit reflections and the like. (A software model of this bitwise polynomial division appears in the sketch below.) 
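As a concrete illustration of the polynomial division the XOR tree performs, the following C sketch computes one 8-bit accumulation step a bit at a time. It is a software model under stated assumptions, not the hardware implementation: the polynomial 11EDC6F41H is the one named later in Tables 2-6, the constant 0x82F63B78 is its bit-reflected form, and working on the remainder least-significant-bit first folds in the bit reflection described later in the disclosure, which the XOR tree would instead evaluate in a single combinational pass.

#include <stdint.h>

/* Bit-at-a-time model of one 8-bit CRC32 accumulation. POLY_REFLECTED is
 * the bit-reflected form of the polynomial 11EDC6F41H; processing LSB-first
 * on a reflected remainder is equivalent to the reflect/shift/XOR/MOD2
 * sequence of Tables 2-6. */
#define POLY_REFLECTED 0x82F63B78u

uint32_t crc32_accumulate_byte(uint32_t running_remainder, uint8_t data)
{
    running_remainder ^= data;            /* fold the source byte into the remainder */
    for (int bit = 0; bit < 8; bit++) {
        /* One step of polynomial division modulo 2: shift, and XOR the
         * polynomial back in whenever a 1 falls out of the register. */
        if (running_remainder & 1u)
            running_remainder = (running_remainder >> 1) ^ POLY_REFLECTED;
        else
            running_remainder >>= 1;
    }
    return running_remainder;             /* new destination-register value */
}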
Furthermore, the XOR tree logic may include multiple partitions, each configured to handle operations on different data sizes.[0014] Still referring to FIG. 1, next a result, which may correspond to a running remainder obtained from the XOR operations, may be stored in the destination register (block 120). Note that the destination register may, upon initialization of a system, be set to a predetermined value, e.g., all ones, all zeros or another such value. Then during execution of checksum operations, this running remainder is continually updated with the result of the current checksum operation. More specifically, the remainder of the polynomial division implemented by the current checksum operation may be stored in the destination register.[0015] Next, it may be determined whether additional source data is present (diamond 130). For example, in some embodiments a buffer may include data that has been received by a system and is to have a checksum verified. The data may be fed in chunks into the source register to effect the checksum operation. Accordingly, it may be determined in diamond 130 if additional source data is present in this buffer. If so, the next data chunk may be provided to the source register, and control passes back to block 110, discussed above.[0016] If instead at diamond 130 it is determined that no additional source data is present, control passes to block 140. There, the result of the checksum operation may be provided as the current value (e.g., running remainder) that is stored in the destination register (block 140). As discussed above, this checksum value may be used in many different manners. For example, in the case of received data, the computed checksum may be compared to a received checksum to confirm that the data was accurately received. In a transmission situation, the checksum may be appended to data to be transmitted so that the data may be verified on a receiving end. Of course other uses of checksums, such as for hash functions or generation of numbers pursuant to a pseudo-random numbering scheme may also occur.[0017] A processor to implement checksum operations in accordance with an embodiment of the present invention may take many different forms depending on a desired architecture. Referring now to FIG. 2, shown is a block diagram of a processor in accordance with one embodiment of the present invention. As shown in FIG. 2, processor 200 includes a data path 205. Data path 205 may be controlled by front end control stages that may include a register alias table (RAT) 270, which may receive decoded instructions from a front end of the processor (not shown in FIG. 2). RAT 270 may be used to receive microoperations (μops) from the front end and rename the μops for the resources of the data path. In data path 205, the renamed μops may then be provided to a reorder buffer (ROB) 250. ROB 250 may act as a register file to store μops and corresponding source operands until the μop is ready for passing to a reservation station (RS) 230. Similarly, ROB 250 may also store corresponding results of μops that have already executed. These results may be held in ROB 250 until the μops are retired (at which point the ROB entry is freed).[0018] Reservation station 230 may be used to store μops until their corresponding source operands are present and/or until the μop is ready for execution in one of a plurality of execution units of data path 205. 
Reservation station 230 may include a plurality of dispatch ports to couple instructions and data to selected ones of execution units of data path 205. In some embodiments, multiple dispatch ports may be used in each cycle.[0019] As shown in FIG. 2, the execution units in data path 205 include an address generation unit (AGU) 220, an integer (INT) execution unit 222, a store data (STD) unit 224, a floating point (FP) execution unit 226, and a single instruction multiple data (SIMD) execution unit 228. As shown in FIG. 2, integer execution unit 222 further includes logic 221. Logic 221 may include one or more hardware engines to perform checksum operations in accordance with an embodiment of the present invention. More specifically, logic 221 may include a plurality of exclusive-OR (XOR) logic trees to implement polynomial arithmetic and related data manipulations. In various embodiments, logic 221 may include different hardware engines to implement CRC operations on differently sized data chunks. As an example, a plurality of user-level instructions of an ISA each may define a CRC operation for a particular data size. Logic 221, in some embodiments, may include a corresponding number of separate hardware engines, also referred to herein as XOR trees, to effect these different CRC operations.[0020] While not shown in FIG. 2, additional or different execution units may be present in different embodiments. After execution of a μop in one of the execution units, result data may be passed back to RS 230 and ROB 250 for storage, e.g., until retirement. Thus in one embodiment, both source and data registers for performing a CRC operation may be located in RS 230 or ROB 250. While not shown in FIG. 2, it is to be understood that additional buffers such as a memory order buffer (MOB) and other resources may be present within processor 200.[0021] It is further to be understood that the representation shown in FIG. 2 is intended for ease of discussion and in various embodiments many more stages or differently named stages may exist in a given processor. For example, a write-back stage may be coupled to the execution units to receive result data for later delivery to a memory hierarchy. Alternately, one or more other buffers such as store buffers, load buffers and the like may be coupled to RS 230. As one example, one or more retirement buffers may be coupled to RS 230 for storage of μops and associated result data until retirement of the associated instruction.[0022] Of course, other implementations are possible. Referring now to FIG. 3, shown is a block diagram of a portion of a processor to perform a checksum operation in accordance with an embodiment of the present invention. As shown in FIG. 3, a portion of a processor 300 is shown. More specifically, processor 300 includes an XOR tree 310, a first register 320 and a second register 330, all of which may be part of a processor pipeline. XOR tree 310 may be configured differently in various embodiments. For example, XOR tree 310 may be implemented using a plurality of 3-input XOR gates in a first level, outputs of which are coupled to similar XOR gates of a second level, and so forth. In such an embodiment, each level of the XOR tree may be a third as large as the previous level. Of course, other configurations are possible.[0023] As further shown in FIG. 3, processor 300 includes a buffer 340, which also may be within the processor pipeline (e.g., as a buffer, queue or the like). 
Alternately, buffer 340 may be a cache memory associated with processor 300. In the embodiment of FIG. 3, first register 320 may correspond to a source register, while second register 330 may correspond to a destination register. In various embodiments, these registers may be general-purpose registers within processor 300. Of course, processor 300 may include many other registers, logic, functional units and the like, and the portion shown in FIG. 3 is for ease of illustration.[0024] As shown in FIG. 3, to perform a checksum in accordance with an embodiment of the present invention, at least a first portion of first register 320 is provided to XOR tree 310, along with a portion of second register 330. In the embodiment shown in FIG. 3, which illustrates an 8-bit CRC accumulation, a single byte of data (B0) is provided to XOR tree 310 from first register 320, while a 4-byte portion of second register 330 is provided to XOR tree 310. This 4-byte portion may correspond to the running remainder of a CRC32 operation. Using this data, XOR tree 310 may perform data manipulations via XOR operations to generate a result that includes a remainder portion. This remainder portion may be the running remainder that is stored back in second register 330, as shown in FIG. 3. In this way, CRC operations can be efficiently performed in minimal cycle time and using minimal processor resources. In the embodiment of FIG. 3, for 8-bit accumulate operations, additional portions of first register 320 may be provided incrementally to XOR tree 310 along with the current contents of second register 330 (i.e., the 32-bit running remainder). Accordingly, to obtain a CRC checksum on 64 bits of data in first register 320, eight iterations of XOR operations in XOR tree 310 may be performed, each using a single byte of data from first register 320, along with the current running remainder in second register 330. If additional data is present in buffer 340 to be validated via a checksum, the additional data may be loaded into first register 320 so that it may then be processed in XOR tree 310.[0025] Note that different hardware may be present to handle CRC calculations of different bit widths. Accordingly, with reference back to FIG. 2, logic 221 may include different XOR tree structures to handle such CRC calculations. Referring now to FIG. 4, shown is a block diagram of another portion of a processor in accordance with an embodiment of the present invention. As shown in FIG. 4, processor 300 includes a different XOR tree 410 (e.g., in addition to XOR tree 310 of FIG. 3) that is coupled to receive data from first register 320 and second register 330. As further shown in FIG. 4, buffer 340 is present and may be used to provide data for CRC computations. Note that in the embodiment of FIG. 4, XOR tree 410 is configured to handle a 64-bit CRC accumulation. Accordingly, the entire contents of first register 320 (i.e., bytes B0-B7) may be coupled at one time to XOR tree 410 for manipulation in XOR operations with data in second register 330. The result data, the desired portion of which corresponds to a running remainder, is stored back in second register 330. While described with these particular implementations in FIGS. 
3 and 4, it is to be understood that the scope of the present invention is not so limited, and in other embodiments different hardware configurations for performing CRC operations may be present.[0026] Referring now to Table 1 below, shown is a listing of example instructions of an instruction set architecture (ISA) to support CRC operations in accordance with various embodiments of the present invention. As shown in Table 1, each instruction, which may be referenced by an opcode, is used to perform a CRC32 operation using a source register and a destination register. As shown, different flavors are possible, with each instruction to perform the CRC operation on a given size of destination operand and source operand. Thus with reference to the first line of Table 1, this instruction is used to perform a CRC32 operation on an 8-bit source operand and a 32-bit destination operand. Similarly, the second line of Table 1 is used to perform a CRC32 operation on a 16-bit source operand and a 32-bit destination operand. In similar fashion, the third line of Table 1 shows an instruction to perform a CRC32 operation on a 32-bit source operand and a 32-bit destination operand.[0027] Because these first three instructions are performed with maximum data chunks of 32 bits, note that the instructions are valid in both a 64-bit mode of operation as well as a legacy (i.e., 32-bit) mode of operation. In contrast, the fourth and fifth lines of Table 1 denote CRC operations to be performed on 8-bit and 64-bit source operands, respectively with a 64-bit destination operand. Thus these final two instructions may be performed only in a 64-bit mode of operation.

Table 1
Opcode   Instruction         Description
Code 2   CRC32 r32, r/m8     Accumulate CRC32 on r/m8
Code 1   CRC32 r32, r/m16    Accumulate CRC32 on r/m16
Code 1   CRC32 r32, r/m32    Accumulate CRC32 on r/m32
Code 2   CRC32 r64, r/m8     Accumulate CRC32 on r/m8
Code 1   CRC32 r64, r/m64    Accumulate CRC32 on r/m64

In various embodiments, these user-level instructions may be used by a programmer, e.g., as intrinsics, to implement a CRC operation in accordance with the flow diagram of FIG. 1.[0028] In general, a user-level CRC instruction may be implemented in the following manner. Starting with an initial value in a first operand (i.e., a destination operand), a CRC32 value for a second operand (i.e., a source operand) may be accumulated and the result stored back in the destination operand. In different implementations, the source operand can be a register or a memory location. The destination operand may be a 32 or 64-bit register. If the destination is a 64-bit register, then the 32-bit result may be stored in the least significant double word and 00000000H stored in the most significant double word of the register.[0029] Note that the initial value supplied in the destination operand may be a double word integer stored in a 32-bit register, or the least significant double word of a 64-bit register. To incrementally accumulate a CRC32 value, software retains the result of the previous CRC operation in the destination operand, and then executes the CRC operation again with new input data in the source operand. Accordingly, each instruction takes a running CRC value in the first operand and updates the CRC value based on the second operand. 
In this manner, a CRC can be generated over any desired amount of data by performing the operation in a loop, until all desired data is subjected to the CRC operation.[0030] In some implementations, data contained in the source operand is processed in reflected bit order. This means that the most significant bit of the source operand is treated as the least significant bit of the quotient, and so on, for all the bits of the source operand. Likewise, the result of the CRC operation can be stored in the destination register in reflected bit order. This means that the most significant bit of the resulting CRC (i.e., bit 31) is stored in the least significant bit of the destination register (bit 0), and so on, for all the bits of the CRC.[0031] While different manners of implementing these user-level instructions can be effected, Tables 2-6 below show example pseudocode representations of a hardware implementation for each of the user-level instructions of Table 1.

Table 2
CRC32 instruction for 64-bit source operand and 64-bit destination operand:
TEMP1[63-0]  <- BIT_REFLECT64(SRC[63-0])
TEMP2[31-0]  <- BIT_REFLECT32(DEST[31-0])
TEMP3[95-0]  <- TEMP1[63-0] << 32
TEMP4[95-0]  <- TEMP2[31-0] << 64
TEMP5[95-0]  <- TEMP3[95-0] XOR TEMP4[95-0]
TEMP6[31-0]  <- TEMP5[95-0] MOD2 11EDC6F41H
DEST[31-0]   <- BIT_REFLECT32(TEMP6[31-0])
DEST[63-32]  <- 00000000H

Table 3
CRC32 instruction for 32-bit source operand and 32-bit destination operand:
TEMP1[31-0]  <- BIT_REFLECT32(SRC[31-0])
TEMP2[31-0]  <- BIT_REFLECT32(DEST[31-0])
TEMP3[63-0]  <- TEMP1[31-0] << 32
TEMP4[63-0]  <- TEMP2[31-0] << 32
TEMP5[63-0]  <- TEMP3[63-0] XOR TEMP4[63-0]
TEMP6[31-0]  <- TEMP5[63-0] MOD2 11EDC6F41H
DEST[31-0]   <- BIT_REFLECT32(TEMP6[31-0])

Table 4
CRC32 instruction for 16-bit source operand and 32-bit destination operand:
TEMP1[15-0]  <- BIT_REFLECT16(SRC[15-0])
TEMP2[31-0]  <- BIT_REFLECT32(DEST[31-0])
TEMP3[47-0]  <- TEMP1[15-0] << 32
TEMP4[47-0]  <- TEMP2[31-0] << 16
TEMP5[47-0]  <- TEMP3[47-0] XOR TEMP4[47-0]
TEMP6[31-0]  <- TEMP5[47-0] MOD2 11EDC6F41H
DEST[31-0]   <- BIT_REFLECT32(TEMP6[31-0])

Table 5
CRC32 instruction for 8-bit source operand and 64-bit destination operand:
TEMP1[7-0]   <- BIT_REFLECT8(SRC[7-0])
TEMP2[31-0]  <- BIT_REFLECT32(DEST[31-0])
TEMP3[39-0]  <- TEMP1[7-0] << 32
TEMP4[39-0]  <- TEMP2[31-0] << 8
TEMP5[39-0]  <- TEMP3[39-0] XOR TEMP4[39-0]
TEMP6[31-0]  <- TEMP5[39-0] MOD2 11EDC6F41H
DEST[31-0]   <- BIT_REFLECT32(TEMP6[31-0])
DEST[63-32]  <- 00000000H

Table 6
CRC32 instruction for 8-bit source operand and 32-bit destination operand:
TEMP1[7-0]   <- BIT_REFLECT8(SRC[7-0])
TEMP2[31-0]  <- BIT_REFLECT32(DEST[31-0])
TEMP3[39-0]  <- TEMP1[7-0] << 32
TEMP4[39-0]  <- TEMP2[31-0] << 8
TEMP5[39-0]  <- TEMP3[39-0] XOR TEMP4[39-0]
TEMP6[31-0]  <- TEMP5[39-0] MOD2 11EDC6F41H
DEST[31-0]   <- BIT_REFLECT32(TEMP6[31-0])

[0032] Note that the general structure of these pseudocode snippets is the same. First, data in a source register is bit reflected (i.e., its bits are placed into a temporary register in reverse bit order). The destination register is similarly bit reflected. Next, shift operations, more particularly shift left operations (denoted << above), may be effected on both of the bit-reflected source and destination operands. The resulting values may then be subjected to an XOR operation. This operation may correspond to a polynomial division by a selected polynomial value (denoted MOD2 above, i.e., the remainder of carry-less division modulo 2). While this value may take many different forms in different embodiments, in particular implementations for performing CRC32 operations, the polynomial may correspond to 11EDC6F41H, although the scope of the present invention is not so limited. 
The remainder of this polynomial division (i.e., the remainder from the polynomial division modulo 2) is stored back into the low order bits of the destination operand in a bit-reflected order (e.g., bits 0-31 of either a 32-bit or 64-bit register). In the instance of a 64-bit register, the most significant bits (MSBs) may be loaded with zeros. While set forth with this particular implementation with respect to Tables 2-6, it is to be understood that other manners of providing a user-level CRC instruction may be performed.[0033] By performing CRC operations in a processor pipeline itself according to a user-level instruction, there is no need to send data to an offload engine. Similarly, the operation can be performed without providing state, reducing overhead. In this way, as implemented in a three-cycle path, a CRC operation may be performed at less than approximately 0.4 cycles per byte. Accordingly, performance may be improved using user-level instructions along with dedicated hardware in a processor pipeline. Furthermore, three-cycle latency may be realized with minimum real estate consumption and power consumption. Embodiments of the present invention may be used to enable processing of various storage protocols, for example, an Internet Small Computer System Interface (iSCSI) protocol at rates greater than 10 gigabits per second. Embodiments of the present invention further allow the use of data present in a processor or closely coupled thereto, reducing the need for on-cache data. In this way, data in a processor buffer may be fed to an XOR tree to enable rapid, on-the-fly CRC calculations.[0034] Embodiments may be implemented in many different system types. Referring now to FIG. 5, shown is a block diagram of a multiprocessor system in accordance with an embodiment of the present invention. As shown in FIG. 5, the multiprocessor system is a point-to-point interconnect system, and includes a first processor 470 and a second processor 480 coupled via a point-to-point interconnect 450. As shown in FIG. 5, each of processors 470 and 480 may be multicore processors, including first and second processor cores (i.e., processor cores 474a and 474b and processor cores 484a and 484b). While not shown for ease of illustration, first processor 470 and second processor 480 (and more specifically the cores therein) may include XOR tree logic within their execution units to execute user-level CRC instructions in accordance with an embodiment of the present invention. First processor 470 further includes a memory controller hub (MCH) 472 and point-to-point (P-P) interfaces 476 and 478. Similarly, second processor 480 includes an MCH 482 and P-P interfaces 486 and 488. As shown in FIG. 5, MCHs 472 and 482 couple the processors to respective memories, namely a memory 432 and a memory 434, which may be portions of main memory locally attached to the respective processors.[0035] First processor 470 and second processor 480 may be coupled to a chipset 490 via P-P interconnects 452 and 454, respectively. As shown in FIG. 5, chipset 490 includes P-P interfaces 494 and 498. Furthermore, chipset 490 includes an interface 492 to couple chipset 490 with a high performance graphics engine 438. In one embodiment, an Advanced Graphics Port (AGP) bus 439 may be used to couple graphics engine 438 to chipset 490. AGP bus 439 may conform to the Accelerated Graphics Port Interface Specification, Revision 2.0, published May 4, 1998, by Intel Corporation, Santa Clara, California. 
Alternately, a point-to-point interconnect 439 may couple these components.[0036] In turn, chipset 490 may be coupled to a first bus 416 via an interface 496. In one embodiment, first bus 416 may be a Peripheral Component Interconnect (PCI) bus, as defined by the PCI Local Bus Specification, Production Version, Revision 2.1, dated June 1995, or a bus such as the PCI Express bus or another third generation input/output (I/O) interconnect bus, although the scope of the present invention is not so limited.[0037] As shown in FIG. 5, various I/O devices 414 may be coupled to first bus 416, along with a bus bridge 418 which couples first bus 416 to a second bus 420. In one embodiment, second bus 420 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 420 including, for example, a keyboard/mouse 422, communication devices 426 and a data storage unit 428 which may include code 430, in one embodiment. Further, an audio I/O 424 may be coupled to second bus 420. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 5, a system may implement a multi-drop bus or another such architecture. [0038] Embodiments may be implemented in code and may be stored on a storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.[0039] While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention. |
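For completeness, here is how the incremental-accumulation loop of paragraph [0029] looks in source code. The sketch assumes that the user-level instructions of Table 1 correspond to the CRC32 intrinsics later available in SSE4.2 (e.g., _mm_crc32_u8 in <nmmintrin.h>); that correspondence, the function name, and the choice of initial value are illustrative assumptions, not statements from the disclosure.

#include <stddef.h>
#include <stdint.h>
#include <nmmintrin.h>  /* SSE4.2 CRC32 intrinsics; compile with -msse4.2 */

/* Accumulate a CRC over a buffer by looping the single-byte instruction:
 * the destination operand carries the running remainder from one
 * execution to the next, per paragraph [0029]. */
uint32_t crc32_buffer(const uint8_t *buf, size_t len, uint32_t initial)
{
    uint32_t crc = initial;               /* e.g., all ones, per [0014] */
    for (size_t i = 0; i < len; i++)
        crc = _mm_crc32_u8(crc, buf[i]);  /* CRC32 r32, r/m8 of Table 1 */
    return crc;                           /* running remainder = checksum */
}

The wider operand forms (r/m16, r/m32 and r/m64 in Table 1) would reduce the iteration count proportionally.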
Examples include techniques for dynamically modifying a platform form factor of a mobile device. In some examples, a system may include a split memory array having a first memory element within a docking system and a second memory element within a small form factor (SFF) mobile device. A platform form factor determination component may dynamically select between multiple platform form factors based on a determination that the SFF mobile device is coupled with the docking system. An interface logic component may access the first memory storage of the docking system during a memory (e.g., graphics) computation when the mobile device is physically and electrically/communicably coupled with the docking system to allow the SFF mobile device to have full large form factor (LFF) functionality. When the SFF mobile device is disconnected from the docking system, the interface logic component may access only the second memory storage of the SFF mobile device to provide SFF functionality. |
CLAIMS: What is claimed is: 1. A system comprising: a split memory array including a first memory element contained within a docking system and a second memory element contained within a mobile device; an interface logic component of the mobile device; and a platform form factor (PFF) determination component to dynamically select between multiple platform form factors based on a determination that the mobile device is coupled with the docking system, the interface logic component to access the first memory element of the docking system during a graphics computation based on a determination that the mobile device is coupled with the docking system. 2. The system of claim 1, the PFF determination component to select a small form factor display mode in the case the mobile device is not physically and electrically coupled with the docking system. 3. The system of claim 2, the PFF determination component to perform the graphics computation in the small form factor display mode using only the second memory element of the mobile device. 4. The system of claim 2, the PFF determination component to select a large form factor display mode in response to the mobile device being physically and electrically coupled with the docking system. 5. The system of claim 4, the PFF determination component to perform the graphics computation in the large form factor display mode using the first memory element and the second memory element. 6. The system of claim 4, the PFF determination component to change a display mode from the small form factor display mode to the large form factor display mode when the mobile device is physically and electrically coupled with the docking system, and to change a display mode of the mobile device from the large form factor display mode to the small form factor display mode when the mobile device is not physically and electrically coupled with the docking system. 7. The system of claim 1, wherein the mobile device is a small form factor mobile device. 8. The system of claim 1, further comprising a connector providing physical and electrical coupling between the mobile device and the docking system, wherein the connector includes: a first connector element coupled with a printed circuit board of the mobile device; and a second connector element extending from the docking system to matingly receive the first connector element. 9. The system of claim 8, the connector being a card edge-type connector. 10. A computer-implemented method comprising: generating one or more signals to cause a processor of a small form factor (SFF) mobile device to access a first memory storage of a docking system and a second memory storage of the mobile device; and executing a memory computation using at least one of the first memory storage and the second memory storage based on a determination that the SFF mobile device is coupled with the docking system. 11. The computer-implemented method of claim 10, further comprising: determining whether the SFF mobile device is physically coupled with the docking system; and selecting between multiple platform form factors based on a determination that the SFF mobile device is physically coupled with the docking system. 12. The computer-implemented method of claim 10, further comprising selecting a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically coupled with the docking system. 13. 
The computer-implemented method of claim 12, further comprising performing the memory computation in the small form factor display mode using only the second memory storage of the SFF mobile device. 14. The computer-implemented method of claim 12, further comprising selecting a large form factor display mode in response to the SFF mobile device being physically coupled with the docking system. 15. The computer-implemented method of claim 14, further comprising performing the memory computation in the large form factor display mode using the first memory storage and the second memory storage. 16. The computer-implemented method of claim 14, further comprising changing a display mode from the small form factor display mode to the large form factor display mode when the mobile device is physically and electrically coupled with the docking system. 17. The computer-implemented method of claim 14, further comprising changing a display mode of the mobile device from the large form factor display mode to the small form factor display mode when the mobile device is not physically and electrically coupled with the docking system. 18. An apparatus comprising: a first memory storage; an interface to couple to a mobile device, the mobile device to comprise a processor and a second memory storage; and logic, at least a portion of which is in hardware, the logic to cause the processor to access the first memory storage as part of execution of a memory computation in response to the mobile device being coupled to the interface. 19. The apparatus of claim 18, further comprising the logic to: determine whether the mobile device is physically and electrically coupled with the docking system; and select between multiple platform form factors in response to a determination that the mobile device is physically and electrically coupled with the docking system. 20. The apparatus of claim 19, further comprising the logic to select a small form factor display mode for execution of the memory computation in response to the mobile device not being physically and electrically coupled with the docking system. 21. The apparatus of claim 20, further comprising the logic to perform a graphics computation in the small form factor display mode using only the second memory storage of the mobile device. 22. The apparatus of claim 19, further comprising the logic to select a large form factor display mode for execution of the memory computation in response to the mobile device being physically and electrically coupled with the docking system. 23. The apparatus of claim 22, further comprising the logic to perform a graphics computation in the large form factor display mode using the first memory storage and the second memory storage. 24. The apparatus of claim 22, further comprising a connector providing physical and electrical coupling between the mobile device and the docking system. 25. The apparatus of claim 24, wherein the connector is a card edge-type connector. |
Techniques for Dynamically Modifying Platform Form Factors of a Mobile DeviceTECHNICAL FIELDExamples described herein generally relate to platform form factor display modes and more specifically to dynamically modifying a platform form factor of a mobile device.BACKGROUNDIn some examples, small form factor (SFF) devices such as handheld computers, personal digital assistants (PDAs) and smart phones have been used to leverage the capabilities of the Internet and provide users ubiquitous access to information. Despite the proliferation of these devices, usage of SFF mobile devices has been constrained by small screen size, and limited input and memory capabilities. As such, a significant portion of today's applications and web content is still designed for use with desktop computers. Information architecture (IA) and large form factor (LFF) devices offer full performance and full desktop PC functionality for today's applications and web content. However, IA and LFF applications are inherently unfriendly for SFF mobile devices. For example, both IA and LFF display modes require larger power delivery footprints for memory and display, as well as for other subsystems. As a result, it remains a constant challenge to balance the benefits and capabilities of LFF versus SFF.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1 illustrates an example system.FIGS. 2A-B illustrate an example first apparatus.FIG. 3 illustrates an example second apparatus.FIG. 4 illustrates an example third apparatus.FIG. 5 illustrates an example logic flow for an apparatus.FIG. 6 illustrates an example first process flow.FIG. 7 illustrates an example second process flow.FIG. 8 illustrates an example storage medium.FIG. 9 illustrates an example computing platform.DETAILED DESCRIPTIONThe present disclosure is generally directed to dynamically modifying a platform form factor of small form factor (SFF) mobile devices. For example, in an effort to provide full large form factor (LFF) functionality in a SFF mobile device, the platform form factor for the SFF mobile device can be dynamically modified. In some examples, a system may include a split memory array having a first memory element within a docking system and a second memory element within a small form factor (SFF) device. A platform form factor (PFF) determination component may dynamically select between multiple platform form factors based on a determination of whether the SFF mobile device is coupled (e.g., physically and electrically/communicably, or wirelessly) with a docking system. An interface logic component may access the first memory storage of the docking system during a graphics computation when the mobile device is physically and electrically coupled with the docking system to allow the SFF mobile device to have full LFF functionality. When the SFF mobile device is disconnected from the docking system, the interface logic component may access only the second memory storage of the SFF mobile device to provide SFF functionality. As a result, memory space and bandwidth of the SFF mobile device may be conserved. FIG. 1 illustrates an example first system 100. As shown in this figure, the example first system 100 includes a device 101, such as a SFF mobile device, and a docking system 201. As depicted, the device 101 and the docking system 201 are operably/communicably coupled. 
As further described below, the device 101 and the docking system 201 are physically and electrically coupled to allow a display 160 of the device 101 to be generated or mirrored on one or more displays 230 operably coupled with the docking system 201. With some examples, the device 101 may be implemented as a System-on-Chip (SoC) or the like. For example, the device 101 may be a SoC, and the docking system 201 may operably couple to the SoC. Examples are not limited in this context. In general, FIG. 1 illustrates the system 100 where the docking system 201 provides a convenient interface for transferring data between the device 101 and one or more additional computing devices, such as a personal computer, or peripheral devices, such as speakers and one or more displays 230, without having to reconnect and disconnect cables. The docking system 201 may also provide an interface for connecting to a power source (not shown) so that the device 101 can be powered or charged (e.g., battery). In some cases, the docking system 201 includes a housing (not shown) having a connector for physically and electrically coupling the device 101 and the docking system 201, as will be described in greater detail below. In some examples, the housing of the docking system 201 may be sized and shaped to coincide with the size and shape of a particular device shape and/or style. In other examples, the housing of the docking system 201 or the connector of the docking system 201 may be universal or generic to multiple device shapes and styles. The device 101 and the docking system 201 may be operably coupled via a communication bus 301. In general, the communication bus 301 may be any data communication bus and/or interface, such as, for example without limitation: a peripheral component interconnect express (PCIe), which can be implemented according to the Peripheral Component Interconnect (PCI) Express Base Specification, revision 3.1a, published in December 2015 ("PCI Express specification" or "PCIe specification"); the Non-Volatile Memory Express (NVMe) Specification, revision 1.2a, published in October 2015 ("NVM Express specification" or "NVMe specification"); a serial attached small computer system interface (SCSI) (SAS), which can be implemented according to the Serial Attached SCSI (SAS) Specification, revision 3.0, published in November 2013 ("SAS-3 specification"); a universal serial bus (USB), which can be implemented according to the Universal Serial Bus Specification, published April 27, 2000 ("USB 2.0 Specification") or the Universal Serial Bus 3.1 Specification revision 1.0, published July 26, 2013; a system management bus (SMBus), which can be implemented according to the System Management Bus (SMBus) Specification version 2.0, published August 3, 2000; or a serial AT attachment (SATA), which can be implemented according to the Serial ATA Revision 3.0, published June 2, 2009. In particular, the device 101 and the docking system 201 may each include an interface, for example, the host interface 120 and the docking interface 220, to operably connect to the bus 301. In particular, the interfaces 120 and 220 may enable the device 101 and the docking system 201 to send and receive information elements over the bus 301. 
Additionally, a third interface may be provided, for example, the communications interface 140. In general, the host interface 120, the docking interface 220, and the communications interface 140 may include logic and/or features to support communication between the device 101 and the docking system 201. For these examples, host interface 120, docking interface 220, and communications interface 140 may include one or more interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants) such as those associated with the SMBus specification or the PCI Express specification. Network communications may occur via use of communication protocols or standards such as those described in one or more Ethernet standards promulgated by the Institute of Electrical and Electronics Engineers (IEEE). For example, one such Ethernet standard may include Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, published in December 2012 (hereinafter "IEEE 802.3-2012"). System 100 may be part of a host computing platform that may be, for example, a server, a server array or server farm, a web server, a network server, an Internet server, a work station, a mini-computer, a main frame computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, processor-based systems, or a combination thereof. Accordingly, functions and/or specific configurations of system 100 described herein may be included or omitted in various embodiments of system 100, as suitably desired. The components and features of system 100 may be implemented using any combination of discrete circuitry, application specific integrated circuits (ASICs), logic gates and/or single chip architectures. Further, the features of system 100 may be implemented using microcontrollers, programmable logic arrays and/or microprocessors or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as "logic" or "circuit." It should be appreciated that the example device 101, the docking system 201, and the system 100 shown in the block diagram of FIG. 1 may represent one functionally descriptive example of many potential implementations. Accordingly, division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in examples. Referring more specifically to FIG. 1, the device 101 may be made up, at least in part, of a processor component 110, the host interface 120, memory storage 130, communications interface 140, input and/or output components 150, and a display 160. The memory storage 130 may include control routine 135, which may include programming, logic, and/or features to cause the device to perform various functions. 
For example, the control routine 135 may include an operating system or other programming to enable the device 101 to perform various functions including generating one or more signals to cause the processor component 110 to access the memory storage 210 of the docking system 201 and the memory storage 130 of the device 101 to perform a memory (e.g., graphics) computation when the device 101 is physically and electrically coupled with the docking system 201. The control routine 135 may further detect whether the device 101 is physically coupled with the docking system, and select between multiple platform form factors (e.g., SFF or LFF) based on whether the device 101 is physically coupled with the docking system 201. As discussed herein, logic of the memory storage 130 may be graphics logic (also referred to herein as "GFX"), including a graphics processing unit (GPU) or other types of logic that perform computation(s) relating to graphics task(s), such as operation(s) that manipulate an image, video, frame, scene, etc., as will be further discussed herein. While some embodiments are discussed with reference to graphics logic, embodiments herein are not limited to graphics-related logic and may also be applied to other types of non-graphic (e.g., general-purpose) logic. Moreover, various embodiments may be performed for any type of computing device such as a desktop computer, a mobile computer (such as a smartphone, tablet, UMPC (Ultra-Mobile Personal Computer), laptop computer, Ultrabook™ computing device, wearable device (such as a smart watch, smart glasses, etc.)), a work station, etc., which may be embodied on a SOC (System On Chip) platform in an embodiment. With some examples, the processor component 110 may include circuitry or processor logic, such as, for example, any of a variety of commercial processors. In some examples, the processor component 110 may include multiple processors, a multi-threaded processor, a multi-core processor (whether the multiple cores coexist on the same or separate dies), and/or a multi-processor architecture of some other variety by which multiple physically separate processors are in some way linked. Additionally, in some examples, the processor component 110 may include graphics processing portions, such as a graphics processing unit (GPU), and may include dedicated memory, multiple-threaded processing and/or some other parallel processing capability. The memory storage 130 may include logic, a portion of which includes arrays of integrated circuits, forming non-volatile memory to persistently store data or a combination of non-volatile memory and volatile memory. It is to be appreciated that the memory storage 130 may be based on any of a variety of technologies. In particular, the arrays of integrated circuits included in memory storage 130 may be arranged to form one or more types of memory, such as, for example, dynamic random access memory (DRAM), NAND memory, NOR memory, 3-Dimensional cross-point memory, ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, polymer memory such as ferroelectric polymer memory, ferroelectric transistor random access memory (FeTRAM or FeRAM), nanowire, phase change memory, magnetoresistive random access memory (MRAM), spin transfer torque MRAM (STT-MRAM) memory, or the like. 
In some embodiments, the memory storage 210 of the docking system 201 and the memory storage 130 of the device together may form a split memory array.

In various examples, the input and/or output components 150 may include one or more components to provide input to, or to provide output from, the device 101. For example, the input and/or output components 150 may be a keyboard, mouse, joystick, microphone, track pad, speaker, haptic feedback device, or the like. In various embodiments, the display 160 may be based on any of a variety of displays (e.g., Plasma, LCD, LED, OLED, or the like) for displaying images and may include touch functionality.

The host interface 120 may be any of a variety of interfaces to operably connect the device 101 to the docking system 201. In particular, the host interface 120 may be configured to operably connect to the docking interface 220 within the docking system 201 via the bus 301.

The docking system 201 may be made up, at least in part, of memory storage 210, docking interface 220 and the display 230. In various embodiments, the display 230 may be based on any of a variety of displays (e.g., Plasma, LCD, LED, OLED, or the like) for displaying images and may include touch functionality. In some examples, the display 230 is intended to regenerate or mirror the graphics being rendered by the display 160. In yet other examples, the display 230 will display graphics corresponding to LFF functionality, while the display 160 will simultaneously render graphics corresponding to SFF functionality.

In general, the memory storage 210 may include logic, a portion of which includes arrays of integrated circuits, forming non-volatile memory to persistently store data, or a combination of non-volatile memory and volatile memory. It is to be appreciated that the docking system 201, and particularly the memory storage 210, may be based on any of a variety of technologies. In particular, the arrays of integrated circuits included in memory storage 210 may be arranged to form one or more types of memory, such as, for example, DRAM, NAND memory, NOR memory, 3-Dimensional cross-point memory, ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, polymer memory such as ferroelectric polymer memory, ferroelectric transistor random access memory (FeTRAM or FeRAM), nanowire, phase change memory, magnetoresistive random access memory (MRAM), spin transfer torque (STT) memory, or the like. In some examples, the memory storage 210 is a graphics (GFX) memory capable of supporting full LFF or PC functionality when the device 101 is docked with the docking system 201. The memory storage 210 may provide the additional memory necessary for switching between SFF and LFF.

The docking interface 220 may be any of a variety of interfaces to operably connect the docking system 201 to the device 101. In particular, the docking interface 220 may be configured to operably connect to the host interface 120 within the device 101 via the bus 301.

The device 101 may further include a sensor(s) 142 for detecting system configuration changes, e.g., to detect whether the device 101 is docked or undocked. In some examples, the sensor 142 may alternatively be provided as part of the docking system 201, wherein communication regarding the docking status of the mobile device 101 may be delivered from the docking interface 220 to the processor component 110.
The sensor 142 may provide system configuration information to the processor component 110, which may in turn cause a modification to the platform form factor to be generated and displayed on the display 230. The sensor 142 may further provide system configuration information to the processor component 110, which may in turn activate or provide instructions to access the memory storage 210 of the docking system 201.

Turning to FIGs. 2A-B, an apparatus 200 demonstrating interoperability of the device 101 and the docking system 201 will be described in greater detail. FIG. 2A is a schematic diagram showing the device 101 and the docking system 201 in a disconnected configuration, while FIG. 2B is a side view demonstrating the device 101 and the docking system 201 in a connected configuration. In some examples, the device 101 is a SFF mobile device including a SOC 152, the memory storage 130 (e.g., DRAM), and NAND 154 coupled to a printed circuit board (PCB) 165. As illustrated, the SOC 152 may include one or more CPU cores 156, one or more GPU cores 158, an I/O interface 164, and a memory controller 164. Various components of the SOC 152 may be coupled to an interconnect or bus such as discussed herein with reference to the other figures. Also, the SOC 152 may include more or fewer components, such as those discussed herein with reference to the other figures. For example, each component of the SOC 152 may include one or more other components, e.g., as discussed with reference to the other figures herein. In one embodiment, the SOC 152 is provided on one or more Integrated Circuit (IC) dies, e.g., which are packaged into a single semiconductor device. As further shown, the SOC 152 is operably coupled to the memory storage 130 via the memory controller 164.

In some examples, NAND 154 causes the GPU core(s) 158 to access the memory storage 210 of the docking system 201 during a memory computation when the device 101 is physically and electrically coupled with the docking system 201, for example, as shown in the cross-sectional view of FIG. 2B. NAND 154 may further cause the CPU core(s) 156 and/or the GPU core(s) 158 to determine whether the device 101 is physically and electrically coupled with the docking system 201, for example based on an output from the sensor 142 (FIG. 1), and to select between multiple platform form factors based on the determination whether the device 101 is docked or disconnected. In some examples, NAND 154 may further cause the CPU core(s) 156 and/or the GPU core(s) 158 to select a SFF display mode in the case the device 101 is determined to be disconnected from the docking system 201, and to select a LFF display mode in the case the device 101 is physically and electrically coupled with the docking system 201. In some examples, NAND 154 causes the CPU core(s) 156 and/or the GPU core(s) 158 to perform the graphics computation in the LFF display mode using memory storage 130 and memory storage 210 when the device 101 is docked, and to perform the graphics computation in the SFF display mode using only memory storage 130 when the device 101 is not docked.
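Because the determination based on the output of the sensor 142 gates a display-mode change, an implementation would typically debounce the raw sensor output before acting on it. The following C sketch illustrates one such approach; gpio_read() and DOCK_SENSE_PIN are hypothetical platform primitives, not elements of the disclosure.

```c
/* Illustrative debouncing of the docking sensor's raw output before a
 * platform form factor decision is made. All identifiers here are
 * assumed for this example only. */
#include <stdbool.h>
#include <stdint.h>

#define DOCK_SENSE_PIN 42      /* hypothetical pin number */
extern int gpio_read(int pin); /* assumed: returns 1 when the contact
                                  switch is closed (device docked) */

/* Require several consecutive identical samples so a partially seated
 * connector does not toggle the platform form factor. */
bool dock_contact_closed(void)
{
    uint8_t agree = 0;
    int last = gpio_read(DOCK_SENSE_PIN);

    while (agree < 8) {
        int now = gpio_read(DOCK_SENSE_PIN);
        agree = (now == last) ? (uint8_t)(agree + 1) : 0;
        last = now;
    }
    return last == 1;
}
```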
In some examples, NAND 154 causes the CPU core(s) 156 and/or the GPU core(s) 158 to change the display mode from SFF to LFF when the device 101 is initially coupled with the docking system 201, and to change the display mode from LFF to SFF as soon as the device 101 is decoupled from the docking system 201.

In some examples, the NAND 154 takes the place of an additional DRAM component that may be present on the PCB 165 for SFF mobile devices. As a result, routing to the docking system 201 may be on a different layer than the NAND 154 routing so that physical area is not consumed, providing a net area benefit in the device 101. This routing may be accomplished via a connector 170 coupling the PCB 165 of the device 101 with the memory storage 210 (e.g., DRAM) of the docking system 201. More specifically, as shown, the connector 170 may include a first connector element 170A extending from the PCB 165 of the device 101, and a second connector element 170B extending from an external area of the docking system 201 to matingly receive the first connector element 170A. In some embodiments, the first connector element 170A may include a plurality of elongate receptacles (e.g., electrical contact surfaces) 172 for receiving a corresponding plurality of pins 174 of the second connector element 170B.

More specifically, in some examples, the connector 170 is a card edge-type connector having a housing, such as the second connector element 170B, and a plurality of contacts, such as the plurality of pins 174, retained in the housing. In some examples, the connector 170 is a high pin count card edge connector having approximately 150-200 pins 174 arranged in two rows to sandwich a module of the docking system 201 at one end thereof, and to straddle the first connector element 170A at another end thereof. As shown, the plurality of pins 174 may be formed with a receiving space between each row of pins to retain the first connector element 170A.

As further shown in FIG. 2B, in some examples, the first connector element 170A may be surrounded or buttressed by the second connector element 170B when the device 101 is docked with the docking system 201. As configured, the second connector element 170B may provide mechanical support, while the plurality of pins 174 engage the plurality of elongate receptacles 172. In some embodiments, the plurality of pins 174 form a normally-open contact switch with the elongate receptacles 172, wherein a closed circuit is formed between the first connector element 170A and the plurality of pins 174 via the elongate receptacles 172 when the first connector element 170A is brought into engagement with the second connector element 170B. In some examples, the closed contact arrangement sends a signal to the sensor 142 and/or processor component 110 (FIG. 1) indicating that the device 101 and the docking system 201 are physically and electrically engaged. In some examples, a data communication path 178 (FIG. 1) may extend between the plurality of pins 174 and the memory storage 210 and the PCB 165 to provide data communication therebetween.

FIG. 3 illustrates an example of a portion 300 of the system 100 depicted in FIG. 1. In particular, FIG. 3 depicts a block diagram of the host interface 120 in greater detail. As shown, the host interface 120 may comprise an interface logic component 125, including GFX logic 126, and a platform form factor determination component 127.
In general, the interface logic component 125 may include circuitry and/or features to facilitate communication over the bus 301. For example, where the bus 301 is a NVMe bus, the interface logic component 125 may include circuitry and/or features to communicate via an NVMe bus and particularly in compliance with any NVMe standards. For example, the interface logic component 125 may include circuitry to implement communications protocols in compliance with the NVMe Specification. In general, the platform form factor determination component 127 may include circuitry and/or features to determine and negotiate a display mode (e.g., SFF or LFF) with another interface (e.g., the docking interface 220, or the like) and to enable the interface logic component 125 to operate based on the determined or negotiated display mode. This is explained in greater detail below, for example, with respect to FIG. 5. For example, the interface logic component 125 may implement a change to the graphics to be displayed by the device 101, e.g., video/image streaming resolution, pixel resolution, frame rate, format, and/or compression levels. In particular, the GFX logic 126 may perform memory computation(s) relating to graphics task(s), such as operation(s) that manipulate an image, frame, scene, etc., e.g., as will be further discussed herein.

In some examples, host interface 120 may also include a graphics interface 128 that communicates with a display device, such as display 160 or display 230 of FIG. 1. In one example, the graphics interface 128 may communicate with the display 160/230 via an accelerated graphics port (AGP) or a PCIe interface. In one example, the display 160/230 may communicate with the graphics interface 128 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display 160/230. The display signals produced for the display 160/230 may pass through various control devices before being interpreted by and subsequently displayed on the display 160/230.

In some examples, the platform form factor determination component 127 may dynamically select between multiple platform form factors based on a determination whether the device 101 is physically and electrically coupled with the docking system 201. For example, the platform form factor determination component 127 may receive a control signal (e.g., from the control routine 135, or the like) that includes an indication that the device 101 is coupled/decoupled with the docking system 201. The platform form factor determination component 127 may further select a LFF display mode 134 in the case the mobile device 101 is docked at the docking system 201, or may select a SFF display mode 132 in the case the mobile device 101 is disconnected from the docking system 201.

Based on receiving the docking indication, the platform form factor determination component 127 may send a control signal to the docking interface 220 to negotiate a graphics computation in either the SFF display mode 132 or the LFF display mode 134. This is explained in greater detail below with reference to FIG. 5. However, in general, the platform form factor determination component 127 may send a control signal to the docking interface 220 to access the memory storage 210 of the docking system 201 to perform a LFF graphics computation in the case the mobile device 101 is connected with the docking system 201.
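As noted above, the interface logic component 125 may change video/image streaming resolution, frame rate, and compression levels when the display mode changes. One plausible C representation of such per-mode parameters is sketched below; the structure layout and the concrete values are invented for illustration only and are not taken from the disclosure.

```c
/* One plausible representation of per-mode graphics parameters that an
 * interface logic component might switch between. The field names and
 * the concrete values are illustrative assumptions. */
struct display_mode_params {
    unsigned width;       /* pixel resolution, horizontal */
    unsigned height;      /* pixel resolution, vertical */
    unsigned frame_rate;  /* frames per second */
    unsigned compression; /* 0 = none; higher = more aggressive */
};

static const struct display_mode_params sff_params = {
    .width = 1280, .height = 720, .frame_rate = 30, .compression = 2,
};

static const struct display_mode_params lff_params = {
    .width = 3840, .height = 2160, .frame_rate = 60, .compression = 0,
};
```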
In some examples, the platform form factor determination component 127 may send a control signal to the docking interface 220 to access both the memory storage 210 and the memory storage 130, as the GFX bandwidth of the LFF graphics computation may require increased memory resources from multiple locations.

In some examples, the platform form factor determination component 127 may also cause a change in the display mode from the SFF display mode 132 to the LFF display mode 134 when the mobile device 101 is physically and electrically coupled with the docking system 201. The platform form factor determination component 127 may also cause a change in the display mode from the LFF display mode 134 to the SFF display mode 132 when the mobile device 101 is no longer physically and electrically coupled with the docking system 201, e.g., in the case a user has removed the device 101 from the docking system 201.

FIG. 4 illustrates an example of a portion 400 of the system 100 depicted in FIG. 1. In particular, FIG. 4 depicts a block diagram of the docking interface 220 in greater detail. As shown, the docking interface 220 may comprise a docking interface logic component 225, which may include circuitry and/or features to facilitate communication over the bus 301. For example, the docking interface logic component 225 may include circuitry to implement communications protocols in compliance with the NVMe specification. In general, the docking interface logic component 225 may include elements to determine and negotiate a display mode (e.g., SFF or LFF) with another interface (e.g., the host interface 120, or the like) and to operate based on the determined or negotiated display mode. This is explained in greater detail below, for example, with respect to FIG. 5. For example, the docking interface logic component 225 may implement a change to the graphics to be displayed by the display 230, e.g., video/image streaming resolution, pixel resolution, frame rate, format, and/or compression levels. In some examples, a GFX logic of the docking interface logic component 225 may perform memory computation(s) relating to graphics task(s), such as operation(s) that manipulate an image, frame, scene, etc., e.g., as will be further discussed herein.

In some examples, the docking interface 220 may also include a graphics interface 228 that communicates with a display device, such as the display 160 or the display 230. In one example, the graphics interface 228 may communicate with the display 160/230 via an accelerated graphics port (AGP) or PCIe interface. In one example, the display 160/230 may communicate with the graphics interface 228 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display 160/230. The display signals produced for the display 160/230 may pass through various control devices before being interpreted by and subsequently displayed on the display 160/230. In some examples, the docking interface logic component 225 may include logic to cause a processor, such as the processor component 110 of FIG. 1, to access a first memory storage within the docking system 201 as part of execution of a memory computation in response to the device 101 being coupled to the interface.
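The dock/undock transitions described above can be pictured as a small event handler. The C sketch below is a non-authoritative illustration; enable_dock_memory() and reconfigure_display() are assumed hooks for bringing the dock's memory storage online and reprogramming the display pipeline, and are not elements of the disclosure.

```c
/* Sketch of the dock/undock transitions attributed to a platform form
 * factor determination component. All identifiers are hypothetical. */
#include <stdbool.h>

typedef enum { FORM_FACTOR_SFF, FORM_FACTOR_LFF } form_factor_t;

extern void enable_dock_memory(bool enable);        /* assumed hook */
extern void reconfigure_display(form_factor_t ff);  /* assumed hook */

static form_factor_t current_ff = FORM_FACTOR_SFF;

/* Called when the docking state reported by the sensor changes. */
void on_dock_state_change(bool docked)
{
    form_factor_t next = docked ? FORM_FACTOR_LFF : FORM_FACTOR_SFF;
    if (next == current_ff)
        return;

    if (docked)
        enable_dock_memory(true);   /* bring dock storage online first */
    reconfigure_display(next);
    if (!docked)
        enable_dock_memory(false);  /* dock storage no longer reachable */

    current_ff = next;
}
```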
In some examples, the docking interface logic component 225 may include logic to determine whether the device 101 is physically and electrically coupled with the docking system 201, and to select between multiple platform form factors in response to a determination that the device 101 is physically and electrically coupled with the docking system 201. In some examples, the docking interface logic component 225 may include logic to select a small form factor display mode for execution of the memory computation in response to the device 101 not being physically and electrically coupled with the docking system 201, and to select a large form factor display mode for execution of the memory computation in response to the device 101 being physically and electrically coupled with the docking system 201.

In some examples, the docking interface logic component 225 further includes logic to perform a graphics computation in the large form factor display mode using the first memory storage within the docking system 201 and the second memory storage located within the device 101. In some examples, the docking interface logic component 225 further includes logic to perform a graphics computation in the small form factor display mode using only the second memory storage of the device 101.

Included herein are one or more techniques and/or logic flows representative of example methodologies for performing novel aspects of the disclosed architecture. While, for purposes of simplicity of explanation, the one or more methodologies shown herein are shown and described as a series of acts, those skilled in the art will understand and appreciate that the methodologies are not limited by the order of acts. Some acts may, in accordance therewith, occur in a different order and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all acts illustrated in a methodology may be required for a novel implementation.

A technique or a logic flow may be implemented in software, firmware, and/or hardware. In software and firmware embodiments, a technique or a logic flow may be implemented by computer executable instructions stored on at least one non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. The embodiments are not limited in this context.

FIG. 5 illustrates an example technique 500 for dynamically adjusting a platform form factor of a mobile device using a split memory array. In particular, the technique 500 depicts an example logic flow to provide switching between SFF and LFF based on whether the device 101 is docked with the docking system 201. The technique 500 may begin at block 5.1, wherein a request is received to perform a graphics computation at the processor component 110. At block 5.2, the processor component 110 sends a signal requesting graphics files from a memory array 180 to execute the graphics computation. The platform form factor determination component 127 receives the signal from the processor component 110, and at block 5.3, determines whether the device 101 is physically and electrically coupled with the docking system 201.
In some examples, the closed contact arrangement of the connector 170 sends a signal to the sensor 142 indicating that the device 101 and the docking system 201 are physically and electrically engaged.

Continuing to block 5.5, in the case the platform form factor determination component 127 determines that the device 101 is not docked with the docking system 201, a signal is sent to the memory storage 130 of the memory array 180. At block 5.6, the memory storage 130 returns to the processor component 110 the graphics files necessary for execution of the graphics computation in the SFF display mode. Continuing to block 5.7, in the case the platform form factor determination component 127 determines that the device 101 is docked with the docking system 201, a signal is sent to the memory storage 210 of the memory array 180. At block 5.8, the memory storage 210 returns the graphics files necessary for the processor component 110 to execute the graphics computation in the LFF display mode. At block 5.9, the graphics computation is executed and an output of the graphics computation is rendered via the display 160 and/or the display 230.

FIG. 6 illustrates an example of a first logic flow. As shown in this figure, the first logic flow includes a logic flow 600. Logic flow 600 may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as the host interface 120 or the platform form factor determination component 127.

In this illustrated example, logic flow 600 at block 610 may determine if a mobile device is physically and electrically coupled with a docking system. For example, the platform form factor determination component 127 of the device 101 may receive an indication (e.g., from the docking system 201, the control routine 135, or the like) that the device 101 is docked with the docking system 201.

The logic flow 600 at block 620 may select between multiple platform form factors based on a determination whether the mobile device is physically and electrically coupled with the docking system. For example, the platform form factor determination component 127 of the device 101 may select between SFF display mode 132 and LFF display mode 134. In one example, the platform form factor determination component 127 selects the SFF display mode in the case the device 101 is not physically and electrically coupled with the docking system. In another example, the platform form factor determination component 127 selects the LFF display mode in the case the device 101 is determined to be physically and electrically coupled with the docking system.

The logic flow 600 at block 630 may receive a control signal to cause a processor of a mobile device to access one or more memory storage locations of a memory array split between a mobile device and a docking system based on the mobile device being physically and electrically coupled with the docking system. For example, the platform form factor determination component 127 of the device 101 may access the memory storage 130 of the device 101 and the memory storage 210 of the docking system 201 when the device 101 is docked with the docking system 201 so as to provide adequate GFX bandwidth for a LFF graphics computation. In another example, the platform form factor determination component 127 of the device 101 may access just the memory storage 130 of the device 101 when the device 101 is disconnected from the docking system 201 so as to perform a SFF graphics computation. A compact sketch of this selection appears below.
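The following C sketch condenses blocks 610 through 630 of logic flow 600 (and the corresponding storage selection of technique 500) into a single routine. All function names are hypothetical stand-ins introduced for this example, not identifiers from the disclosure.

```c
/* Condensed sketch of logic flow 600 / technique 500: choose the
 * memory storage locations and display mode for the graphics
 * computation based on the docking determination. Every identifier
 * here is an illustrative assumption. */
#include <stdbool.h>

enum display_mode { DISPLAY_MODE_SFF, DISPLAY_MODE_LFF };

extern bool device_is_docked(void);                 /* block 610 */
extern const void *fetch_graphics_files(bool use_dock_storage);
extern void execute_graphics_computation(const void *files,
                                         enum display_mode mode);

void handle_graphics_request(void)
{
    bool docked = device_is_docked();               /* block 610 */
    enum display_mode mode =                        /* block 620 */
        docked ? DISPLAY_MODE_LFF : DISPLAY_MODE_SFF;
    /* Block 630: when docked, both the device's local storage and the
     * dock's storage back the computation; otherwise only the local
     * storage is used. */
    const void *files = fetch_graphics_files(docked);
    execute_graphics_computation(files, mode);
}
```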
Without access to the memory storage 210 of the docking system 201, GFX bandwidth limits prevent the device 101 from performing a LFF graphics computation.

FIG. 7 illustrates an example of a second logic flow 700. Logic flow 700 may be representative of some or all of the operations executed by one or more logic, features, or devices described herein, such as docking interface 220 or platform form factor determination component 127.

In this illustrated example, logic flow 700 at block 710 may generate one or more signals to cause a processor to access a first memory storage as part of execution of a memory computation in response to the device 101 being coupled to an interface of the docking system 201. For example, the platform form factor determination component 127 of the device 101 may access the memory storage 130 of the device 101 and the memory storage 210 of the docking system 201 when the device 101 is docked with the docking system 201 so as to provide adequate GFX bandwidth for a LFF graphics computation.

The logic flow at block 720 may execute the memory computation in a LFF display mode after accessing both the memory storage 130 of the device 101 and the memory storage 210 of the docking system 201. In some examples, when the device 101 is docked with the docking system 201, additional GFX bandwidth is available for a LFF graphics computation.

The logic flow at block 730 may render an output of the memory computation on a display operably coupled with the docking system 201, such as an external monitor. In some examples, a visual output of a graphics computation is rendered via one or more LFF functional computer monitors connected with the docking system 201.

FIG. 8 illustrates an example of a first storage medium. As shown in this figure, the first storage medium includes a storage medium 800. The storage medium 800 may comprise an article of manufacture. In some examples, storage medium 800 may include any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic or semiconductor storage. Storage medium 800 may store various types of computer executable instructions, such as instructions to implement logic flow 600 and logic flow 700. Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.

FIG. 9 illustrates an example platform form factor determination component (PFFDC) system 900. In some examples, as shown in this figure, the system 900 may include a processor component 940, other components 950, and/or a communications interface 960. According to some examples, system 900 may be implemented in a device to be coupled to an interface, such as an SSD, a memory component, a communications component, an input component, an output component, or the like.

According to some examples, processor component 940 may execute processing operations or logic for apparatus 120, 220, 127, and/or storage medium 800. Processor component 940 may include various hardware elements, software elements, or a combination of both.
Examples of hardware elements may include devices, logic devices, components, processors, microprocessors, circuits, processor circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate arrays (FPGA), memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software elements may include software components, programs, applications, computer programs, application programs, device drivers, system programs, software development programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given example.

In some examples, other components 950 may include common computing elements or circuitry, such as one or more processors, multi-core processors, co-processors, memory units, interfaces, oscillators, timing devices, and so forth. Examples of memory units may include without limitation various types of computer readable and machine readable storage media in the form of one or more higher speed memory units, such as read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory or any other type of storage media suitable for storing information.

In some examples, communications interface 960 may include logic and/or features to support a communication interface. For these examples, communications interface 960 may include one or more communication interfaces that operate according to various communication protocols or standards to communicate over communication links or channels. Communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants) such as those associated with the PCI Express, SATA or SCSI standard or specifications.

The components and features of PFFDC system 900 may be implemented using any combination of discrete circuitry, application specific integrated circuits (ASICs), logic gates and/or single chip architectures. Further, the features of PFFDC system 900 may be implemented using microcontrollers, programmable logic arrays and/or microprocessors or any combination of the foregoing where suitably appropriate. It is noted that hardware, firmware and/or software elements may be collectively or individually referred to herein as "logic" or "circuit."

It should be appreciated that the example PFFDC system 900 shown in the block diagram of this figure may represent one functionally descriptive example of many potential implementations.
Accordingly, division, omission or inclusion of block functions depicted in the accompanying figures does not imply that the hardware components, circuits, software and/or elements for implementing these functions would necessarily be divided, omitted, or included in embodiments.

One or more aspects of at least one example may be implemented by representative instructions stored on at least one machine-readable medium which represents various logic within the processor, which when read by a machine, computing device or system causes the machine, computing device or system to fabricate logic to perform the techniques described herein. Such representations may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints, as desired for a given implementation.

Some examples may include an article of manufacture or at least one computer-readable medium. A computer-readable medium may include a non-transitory storage medium to store logic. In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. In some examples, the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, API, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.

According to some examples, a computer-readable medium may include a non-transitory storage medium to store or maintain instructions that when executed by a machine, computing device or system, cause the machine, computing device or system to perform methods and/or operations in accordance with the described examples.
The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a machine, computing device or system to perform a certain function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

Some examples may be described using the expression "in one example" or "an example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example.

Some examples may be described using the expression "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

The following examples of the present disclosure are provided.
Example 1. An exemplary apparatus may include a docking system containing a first memory storage and a mobile device having an interface to communicatively couple with the docking system, the mobile device containing a second memory storage, and logic, at least a portion of which is in hardware, the logic to cause a processor of the mobile device to access the first memory storage of the docking system during execution of a memory computation based on a determination that the mobile device is communicatively coupled with the docking system.

Example 2. The apparatus of example 1, further including the logic to determine whether the mobile device is physically coupled with the docking system, and select between multiple platform form factors based on a determination that the mobile device is physically coupled with the docking system.

Example 3. The apparatus of example 1, further including the logic to select a small form factor display mode for execution of the memory computation in response to the mobile device not being physically coupled with the docking system.

Example 4. The apparatus of example 3, further including the logic to perform a graphics computation in the small form factor display mode using only the second memory storage of the mobile device.

Example 5. The apparatus of example 3, further including the logic to select a large form factor display mode for execution of the memory computation in response to the mobile device being physically coupled with the docking system.

Example 6. The apparatus of example 5, further including the logic to perform a graphics computation in the large form factor display mode using the first memory storage and the second memory storage.

Example 7. The apparatus of example 5, further including the logic to change a display mode from the small form factor display mode to the large form factor display mode when the mobile device is physically coupled with the docking system, and to change the display mode from the large form factor display mode to the small form factor display mode when the mobile device is not physically coupled with the docking system.

Example 8. The apparatus of example 1, further including a connector providing physical and electrical coupling between the mobile device and the docking system, wherein the connector includes a first connector element coupled with a printed circuit board of the mobile device, and a second connector element extending from the docking system to matingly receive the first connector element.

Example 9. The apparatus of example 8, the connector being a card edge-type connector.

Example 10. An exemplary computer-implemented method may include generating one or more signals to cause a processor of a small form factor (SFF) mobile device to access a first memory storage of a docking system and a second memory storage of the mobile device, and executing a memory computation using at least one of the first memory storage and the second memory storage based on a determination that the SFF mobile device is coupled with the docking system.

Example 11. The computer-implemented method of example 10, further including determining whether the SFF mobile device is physically coupled with the docking system, and selecting between multiple platform form factors based on a determination that the SFF mobile device is physically coupled with the docking system.
Example 12. The computer-implemented method of example 10, further including selecting a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically coupled with the docking system.

Example 13. The computer-implemented method of example 12, further including performing the memory computation in the small form factor display mode using only the second memory storage of the SFF mobile device.

Example 14. The computer-implemented method of example 10, further including selecting a large form factor display mode in response to the SFF mobile device being physically coupled with the docking system.

Example 15. The computer-implemented method of example 14, further including performing the memory computation in the large form factor display mode using the first memory storage and the second memory storage.

Example 16. At least one machine readable medium including a plurality of instructions that in response to being executed by a system cause the system to carry out a method according to any one of examples 10 to 15.

Example 17. An apparatus including means for performing the methods of any one of examples 10 to 15.

Example 18. An exemplary system may include a split memory array including a first memory element contained within a docking system and a second memory element contained within a mobile device, an interface logic component of the mobile device, and a platform form factor (PFF) determination component to dynamically select between multiple platform form factors based on a determination that the mobile device is coupled with the docking system, the interface logic component to access the first memory element of the docking system during a graphics computation based on a determination that the mobile device is coupled with the docking system.

Example 19. The system of example 18, the PFF determination component to select a small form factor display mode in the case the mobile device is not physically and electrically coupled with the docking system.

Example 20. The system of example 19, the PFF determination component to perform the graphics computation in the small form factor display mode using only the second memory element of the mobile device.

Example 21. The system of example 19, the PFF determination component to select a large form factor display mode in response to the mobile device being physically and electrically coupled with the docking system.

Example 22. The system of example 21, the PFF determination component to perform the graphics computation in the large form factor display mode using the first memory element and the second memory element.

Example 23. The system of example 21, the PFF determination component to change a display mode from the small form factor display mode to the large form factor display mode when the mobile device is physically and electrically coupled with the docking system, and to change a display mode of the mobile device from the large form factor display mode to the small form factor display mode when the mobile device is not physically and electrically coupled with the docking system.
Example 24. An exemplary apparatus may include a first memory storage, an interface to couple to a small form factor (SFF) mobile device, the SFF mobile device to comprise a processor and a second memory storage, and logic, at least a portion of which is in hardware, the logic to cause the processor to access the first memory storage as part of execution of a memory computation in response to the mobile device being coupled to the interface.

Example 25. The apparatus of example 24, further including the logic to determine whether the SFF mobile device is physically and electrically coupled with the docking system, and select between multiple platform form factors in response to a determination that the mobile device is physically and electrically coupled with the docking system.

Example 26. The apparatus of example 24, further including the logic to select a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically and electrically coupled with the docking system, and to select a large form factor display mode for execution of the memory computation in response to the SFF mobile device being physically and electrically coupled with the docking system.

Example 27. The apparatus of example 26, further including the logic to perform a graphics computation in the large form factor display mode using the first memory storage and the second memory storage.

Example 28. The apparatus of example 26, further including the logic to perform a graphics computation in the small form factor display mode using only the second memory storage of the SFF mobile device.

Example 29. The apparatus of example 24, further including a connector providing physical and electrical coupling between the mobile device and the docking system, wherein the connector is a card edge-type connector.

Example 30. At least one non-transitory computer-readable storage medium for dynamically modifying platform form factors of a small form factor (SFF) mobile device, the at least one non-transitory computer-readable storage medium including a set of instructions that, in response to being executed on a processing component at a computing platform, cause the processing component to access a first memory storage of a docking system during execution of a memory computation based on a determination that a SFF mobile device is coupled with the docking system.

Example 31. The at least one non-transitory computer-readable storage medium of example 30, further including a set of instructions that, in response to being executed on the processing component at the computing platform, cause the processing component to determine whether the SFF mobile device is physically coupled with the docking system, and select between multiple platform form factors based on a determination that the SFF mobile device is physically coupled with the docking system.

Example 32. The at least one non-transitory computer-readable storage medium of example 30, further including a set of instructions that, in response to being executed on the processing component at the computing platform, cause the processing component to select a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically coupled with the docking system.
Example 33. The at least one non-transitory computer-readable storage medium of example 32, further including a set of instructions that, in response to being executed on the processing component at the computing platform, cause the processing component to perform a graphics computation in the small form factor display mode using only the second memory storage of the SFF mobile device.

Example 34. The at least one non-transitory computer-readable storage medium of example 32, further including a set of instructions that, in response to being executed on the processing component at the computing platform, cause the processing component to select a large form factor display mode for execution of the memory computation in response to the SFF mobile device being physically coupled with the docking system.

Example 35. The at least one non-transitory computer-readable storage medium of example 34, further including a set of instructions that, in response to being executed on the processing component at the computing platform, cause the processing component to perform a graphics computation in the large form factor display mode using the first memory storage and the second memory storage.

Example 36. The at least one non-transitory computer-readable storage medium of example 34, further including a set of instructions that, in response to being executed on the processing component at the computing platform, cause the processing component to change a display mode from the small form factor display mode to the large form factor display mode when the SFF mobile device is physically coupled with the docking system, and to change the display mode from the large form factor display mode to the small form factor display mode when the SFF mobile device is not physically coupled with the docking system.

Example 37. An exemplary platform form factor modification method may include generating one or more signals to cause a processor of a small form factor (SFF) mobile device to access a first memory storage of a docking system and a second memory storage of the mobile device to execute a memory computation based on a determination that the SFF mobile device is coupled with the docking system.

Example 38. The platform form factor modification method of example 37, further including determining whether the SFF mobile device is physically coupled with the docking system, and selecting between multiple platform form factors based on a determination that the SFF mobile device is physically coupled with the docking system.

Example 39. The platform form factor modification method of example 37, further including selecting a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically coupled with the docking system.

Example 40. The platform form factor modification method of example 39, further including performing the memory computation in the small form factor display mode using only the second memory storage of the SFF mobile device.

Example 41. The platform form factor modification method of example 37, further including selecting a large form factor display mode in response to the SFF mobile device being physically coupled with the docking system.

Example 42. The platform form factor modification method of example 41, further including performing the memory computation in the large form factor display mode using the first memory storage and the second memory storage.
Example 43. At least one machine readable medium including a plurality of instructions that in response to being executed by a system cause the system to carry out a method according to any one of examples 37-42.

Example 43. An apparatus including means for performing the methods of any one of examples 37-42.

Example 44. An exemplary platform form factor modification apparatus including a docking system containing a first memory storage and a small form factor (SFF) mobile device coupleable with the docking system, the SFF mobile device containing a second memory storage, and a platform form factor (PFF) determination component to cause a processor of the SFF mobile device to access the first memory storage of the docking system during execution of a memory computation based on a determination that the SFF mobile device is coupled with the docking system.

Example 45. The PFF modification apparatus of example 44, the PFF determination component further causing the processor to determine whether the SFF mobile device is physically coupled with the docking system, and select between multiple platform form factors based on a determination that the SFF mobile device is physically coupled with the docking system.

Example 46. The PFF modification apparatus of example 44, the PFF determination component further causing the processor to select a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically coupled with the docking system.

Example 47. The PFF modification apparatus of example 46, the PFF determination component further causing the processor to perform a graphics computation in the small form factor display mode using only the second memory storage of the SFF mobile device.

Example 48. The PFF modification apparatus of example 46, the PFF determination component further causing the processor to select a large form factor display mode for execution of the memory computation in response to the SFF mobile device being physically coupled with the docking system.

Example 49. The PFF modification apparatus of example 48, the PFF determination component further causing the processor to perform a graphics computation in the large form factor display mode using the first memory storage and the second memory storage.

Example 50. The PFF modification apparatus of example 48, the PFF determination component further causing the processor to change a display mode from the small form factor display mode to the large form factor display mode when the SFF mobile device is physically coupled with the docking system, and to change the display mode from the large form factor display mode to the small form factor display mode when the SFF mobile device is not physically coupled with the docking system.

Example 51. The PFF modification apparatus of example 44, further including a connector providing physical and electrical coupling between the mobile device and the docking system, wherein the connector includes a first connector element coupled with a printed circuit board of the mobile device, and a second connector element extending from the docking system to matingly receive the first connector element.

Example 52. The PFF modification apparatus of example 51, the connector being a card edge-type connector.
Example 53. An exemplary platform form factor modification apparatus including a first memory storage, an interface to couple to a small form factor (SFF) mobile device, the SFF mobile device to include a processor and a second memory storage, and a platform form factor (PFF) modification component to cause a processor of the SFF mobile device to access the first memory storage as part of execution of a memory computation in response to the SFF mobile device being coupled to the interface.

Example 54. The PFF modification apparatus of example 53, the PFF modification component further causing the processor to determine whether the SFF mobile device is physically and electrically coupled with the docking system, and select between multiple platform form factors in response to a determination that the mobile device is physically and electrically coupled with the docking system.

Example 55. The PFF modification apparatus of example 53, the PFF modification component further causing the processor to select a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically and electrically coupled with the docking system, and to select a large form factor display mode for execution of the memory computation in response to the SFF mobile device being physically and electrically coupled with the docking system.

Example 56. The PFF modification apparatus of example 55, the PFF modification component further causing the processor to perform a graphics computation in the large form factor display mode using the first memory storage and the second memory storage.

Example 57. The PFF modification apparatus of example 55, the PFF modification component further causing the processor to perform a graphics computation in the small form factor display mode using only the second memory storage of the SFF mobile device.

Example 58. The PFF modification apparatus of example 55, further including a connector providing physical and electrical coupling between the SFF mobile device and the docking system, wherein the connector is a card edge-type connector.

Example 59. At least one machine readable medium for platform form factor modification, the at least one machine readable medium including a plurality of instructions that in response to being executed by a processor on a computing platform, cause the processor to access the first memory storage as part of execution of a memory computation in response to the SFF mobile device being coupled to the interface.

Example 60. The at least one machine readable medium of example 59, further including a plurality of instructions that in response to being executed by a processor on a computing platform, cause the processor to determine whether the SFF mobile device is physically and electrically coupled with the docking system, and select between multiple platform form factors in response to a determination that the mobile device is physically and electrically coupled with the docking system.
The at least one machine readable medium of example 60, further including a plurality of instructions that in response to being executed by a processor on a computing platform, cause the processor to select a small form factor display mode for execution of the memory computation in response to the SFF mobile device not being physically and electrically coupled with the docking system, and to select a large form factor display mode for execution of the memory computation in response to the SFF mobile device being physically and electrically coupled with the docking system.Example 62. The at least one machine readable medium of example 60, further including a plurality of instructions that in response to being executed by a processor on a computing platform, cause the processor to perform a graphics computation in the large form factor display mode using the first memory storage and the second memory storage.Example 63. The at least one machine readable medium of example 60, further including a plurality of instructions that in response to being executed by a processor on a computing platform, cause the processor to perform a graphics computation in the small form factor display mode using only the second memory storage of the SFF mobile device.It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth, are used merely as labels, and are not intended to impose numerical requirements on their objects.Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. |
Aspects disclosed in the detailed description include providing load address predictions using address prediction tables based on load path history in processor- based systems. In one aspect, a load address prediction engine provides a load address prediction table containing multiple load address prediction table entries. Each load address prediction table entry includes a predictor tag field and a memory address field for a load instruction. The load address prediction engine generates a table index and a predictor tag based on an identifier and a load path history for a detected load instruction. The table index is used to look up a corresponding load address prediction table entry. If the predictor tag matches the predictor tag field of the load address prediction table entry corresponding to the table index, the memory address field of the load address prediction table entry is provided as a predicted memory address for the load instruction. |
What is claimed is:1. A load address prediction engine, comprising a load address prediction table configured to store a plurality of load address prediction table entries each comprising a predictor tag field and a memory address field, and configured to:receive a load instruction;generate a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction;determine whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries; andresponsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries, provide a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction.2. The load address prediction engine of claim 1, further configured to generate the table index and the predictor tag based on a branch direction history or a branch path history, or combinations thereof.3. The load address prediction engine of claim 1, further configured to:determine whether the predicted memory address for the load instruction is present in a system data cache of a processor;responsive to determining that the predicted memory address for the load instruction is present in a system data cache of the processor: retrieve data for the predicted memory address from the system data cache; andprovide the retrieved data as a data value prediction to a back-end instruction pipeline of an execution pipeline of the processor; and responsive to determining that the predicted memory address for the load instruction is not present in a system data cache of the processor:prefetch data corresponding to the predicted memory address from a system memory of the processor; andstore the prefetched data in the system data cache of the processor.4. The load address prediction engine of claim 3, wherein:each load address prediction table entry of the plurality of load address prediction table entries further comprises a confidence value field; and the load address prediction engine is configured to provide the memory address of the memory address field of the load address prediction table entry as the predicted memory address for the load instruction further responsive to the confidence value field of the load address prediction table entry exceeding a confidence threshold value field of the load address prediction engine.5. 
The load address prediction engine of claim 4, further configured to, subsequent to execution of the load instruction:responsive to determining that the predictor tag is present in the predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries:determine whether an actual memory address of the load instruction matches the predicted memory address for the load instruction; responsive to determining that an actual memory address of the load instruction matches the predicted memory address for the load instruction, increment the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that an actual memory address of the load instruction does not match the predicted memory address for the load instruction, reset the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that the predictor tag is not present in the predictor tag field of the load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries:determine whether the confidence value field of the load address prediction table entry corresponding to the table index is nonzero;responsive to determining that the confidence value field of the load address prediction table entry corresponding to the table index is non-zero, decrement the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that the confidence value field of the load address prediction table entry corresponding to the table index is zero (0), initialize the load address prediction table entry corresponding to the table index with the predictor tag and the actual memory address for the load instruction.6. The load address prediction engine of claim 3, wherein:each load address prediction table entry of the plurality of load address prediction table entries further comprises a cache way field; and the load address prediction engine is configured to determine whether the predicted memory address for the load instruction is present in the system data cache of the processor based on the cache way field of the load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries.7. The load address prediction engine of claim 1, configured to provide the memory address of the memory address field of the load address prediction table entry as the predicted memory address for the load instruction to a back-end instruction pipeline of a processor for memory disambiguation.8. The load address prediction engine of claim 1 integrated into an integrated circuit (IC).9. The load address prediction engine of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a smart phone; a tablet; a phablet; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; and an automobile.10. 
A load address prediction engine of a processor, the load address prediction engine comprising:a means for receiving a load instruction;a means for generating a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction; a means for determining whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table of the load address prediction engine; and a means for providing a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction, responsive to determining that the predictor tag is present in the predictor tag field of the load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries of the load address prediction table of the load address prediction engine.11. A method for providing load address predictions, comprising:receiving, by a load address prediction engine of a processor, a load instruction; generating a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction;determining whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table of the load address prediction engine; and responsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries, providing a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction.12. The method of claim 11, wherein generating the table index and the predictor tag is further based on a branch direction history or a branch path history, or combinations thereof.13. The method of claim 11, further comprising:determining whether the predicted memory address for the load instruction is present in a system data cache of the processor;responsive to determining that the predicted memory address for the load instruction is present in a system data cache of the processor: retrieving data for the predicted memory address from the system data cache; andproviding the retrieved data as a data value prediction to a back-end instruction pipeline of an execution pipeline of the processor; and responsive to determining that the predicted memory address for the load instruction is not present in a system data cache of the processor:prefetching data corresponding to the predicted memory address from a system memory of the processor; andstoring the prefetched data in the system data cache of the processor.14. The method of claim 13, wherein each load address prediction table entry of the plurality of load address prediction table entries further comprises a confidence value field; and providing the memory address of the memory address field of the load address prediction table entry as the predicted memory address for the load instruction is further responsive to the confidence value field of the load address prediction table entry exceeding a confidence threshold value field of the load address prediction engine.15.
The method of claim 14, further comprising, subsequent to execution of the load instruction:responsive to determining that the predictor tag is present in the predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries:determining whether an actual memory address of the load instruction matches the predicted memory address for the load instruction; responsive to determining that an actual memory address of the load instruction matches the predicted memory address for the load instruction, incrementing the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that the actual memory address of the load instruction does not match the predicted memory address for the load instruction, resetting the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that the predictor tag is not present in the predictor tag field of the load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries:determining whether a confidence value field of the load address prediction table entry corresponding to the table index is nonzero; responsive to determining that the confidence value field of the load address prediction table entry corresponding to the table index is non-zero, decrementing the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that the confidence value field of the load address prediction table entry corresponding to the table index is zero (0), initializing the load address prediction table entry corresponding to the table index for the load instruction.16. The method of claim 13, wherein:each load address prediction table entry of the plurality of load address prediction table entries further comprises a cache way field; and determining whether the predicted memory address for the load instruction is present in the system data cache of the processor is based on the cache way field of the load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries.17. The method of claim 11, comprising providing the memory address of the memory address field of the load address prediction table entry as the predicted memory address for the load instruction to a back-end instruction pipeline of the processor for memory disambiguation.18. A non-transitory computer-readable medium having stored thereon computer executable instructions which, when executed by a processor, cause the processor to: receive a load instruction;generate a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction;determine whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table; and responsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries, provide a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction.19. 
The non-transitory computer-readable medium of claim 18 having stored thereon computer executable instructions which, when executed by a processor, further cause the processor to generate the table index and the predictor tag based on a branch direction history or a branch path history, or combinations thereof.20. The non-transitory computer-readable medium of claim 18 having stored thereon computer executable instructions which, when executed by a processor, further cause the processor to:determine whether the predicted memory address for the load instruction is present in a system data cache of the processor;responsive to determining that the predicted memory address for the load instruction is present in a system data cache of the processor: retrieve data for the predicted memory address from the system data cache; andprovide the retrieved data as a data value prediction to a back-end instruction pipeline of an execution pipeline of the processor; and responsive to determining that the predicted memory address for the load instruction is not present in a system data cache of the processor:prefetch data corresponding to the predicted memory address from a system memory of the processor; andstore the prefetched data in the system data cache of the processor.21. The non-transitory computer-readable medium of claim 20 having stored thereon computer executable instructions which, when executed by a processor, further cause the processor to provide the memory address of the memory address field of the load address prediction table entry as the predicted memory address for the load instruction responsive to a confidence value field of the load address prediction table entry exceeding a confidence threshold value field of the load address prediction engine.22.
The non-transitory computer-readable medium of claim 21 having stored thereon computer executable instructions which, when executed by a processor, further cause the processor to, subsequent to execution of the load instruction:responsive to determining that the predictor tag is present in the predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries:determine whether an actual memory address of the load instruction matches the predicted memory address for the load instruction; responsive to determining that an actual memory address of the load instruction matches the predicted memory address for the load instruction, increment the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that the actual memory address of the load instruction does not match the predicted memory address for the load instruction, reset the confidence value field of the load address prediction table entry corresponding to the table index; andresponsive to determining that the predictor tag is not present in the predictor tag field of the load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries:determine whether a confidence value field of the load address prediction table entry corresponding to the table index is non-zero;responsive to determining that a confidence value field of the load address prediction table entry corresponding to the table index is non-zero, decrement the confidence value field of the load address prediction table entry corresponding to the table index; and responsive to determining that the confidence value field of the load address prediction table entry corresponding to the table index is zero (0), initialize the load address prediction table entry corresponding to the table index with the predictor tag and the actual memory address for the load instruction.23. The non-transitory computer-readable medium of claim 20 having stored thereon computer executable instructions which, when executed by a processor, further cause the processor to determine whether the predicted memory address for the load instruction is present in the system data cache of the processor based on a cache way field of the load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries.24. The non-transitory computer-readable medium of claim 18 having stored thereon computer executable instructions which, when executed by a processor, further cause the processor to provide the memory address of the memory address field of the load address prediction table entry as the predicted memory address for the load instruction to a back-end instruction pipeline of the processor for memory disambiguation.
PROVIDING LOAD ADDRESS PREDICTIONS USING ADDRESS PREDICTION TABLES BASED ON LOAD PATH HISTORY IN PROCESSOR-BASED SYSTEMS
PRIORITY APPLICATION
[0001] The present application claims priority to U.S. Patent Application Serial No. 15/087,069, filed on March 31, 2016, and entitled "PROVIDING LOAD ADDRESS PREDICTIONS USING ADDRESS PREDICTION TABLES BASED ON LOAD PATH HISTORY IN PROCESSOR-BASED SYSTEMS," which is incorporated herein by reference in its entirety.
BACKGROUND
I. Field of the Disclosure
[0002] The technology of the disclosure relates generally to improving execution latency of load instructions during execution of a computer program by processor-based systems, and, in particular, to speeding up execution of load instructions and load-dependent instructions in a processor.
II. Background
[0003] Conventional processors are capable of fetching and executing several program instructions during every processor clock cycle. To guarantee correct execution of program instructions, a processor monitors, detects, and attempts to satisfy address and data dependencies among program instructions. For example, the processor may determine that a producer-consumer relationship exists between a load instruction and a subsequent store instruction, and, thus, may seek to ensure that a result generated by the load instruction is available before permitting the store instruction to execute.
[0004] Ensuring satisfaction of data dependencies is particularly critical with respect to load instructions, as load instructions may represent a significant fraction of the total number of program instructions that are executed by the processor. However, satisfying data dependencies for load instructions may negatively impact the execution latency of such load instructions. The speed with which a load instruction may execute can often vary depending on where the sought-after data is located in the memory hierarchy (e.g., in a Level 1 (L1) cache, a Level 2 (L2) cache, and/or a system memory, as non-limiting examples) of a processor-based system. As a general principle, the closer to the processor that data is stored, the sooner the load instruction requiring the data can execute. Consequently, this variability in data access time may negatively impact the execution latency of a dependent load instruction (i.e., an instruction that consumes a data value produced by a previous load instruction), because the dependent load instruction must wait until the previous load instruction is executed.
[0005] Because of this data access latency variability, conventional processor optimizations have focused on speeding up execution of load instructions (e.g., through data prefetching) and/or speeding up execution of dependent load instructions (e.g., through data value prediction). Data prefetching involves retrieving a data value that is expected or predicted to be referenced by a load instruction into a higher cache level (e.g., an L1 cache) to enable the load instruction to execute in a more timely fashion. Data value prediction is a technique that attempts to speed up the execution of a dependent load instruction by predicting a data value that will be produced by a previous load instruction, and allows the dependent load instruction to execute using the predicted data value. Upon the subsequent execution of the load instruction, the predicted data value can be confirmed as valid, or disconfirmed as mispredicted.
If the predicted data value is determined to be mispredicted, recovery actions are performed, including flushing and re-executing the instructions using the mispredicted data value.
[0006] While the use of data prefetching and/or data value prediction, as well as other optimizations, may result in significant performance gains, it may be desirable to provide mechanisms that further improve the performance of such optimizations.
SUMMARY OF THE DISCLOSURE
[0007] Aspects disclosed in the detailed description include providing load address predictions using address prediction tables based on load path history in processor-based systems. Such processor-based systems can include superscalar processor-based systems, as a non-limiting example. In this regard, in one exemplary aspect, a load address prediction engine is provided for predicting a memory address that may be a target of a load instruction detected in a front-end instruction pipeline of an execution pipeline of a processor. The load address prediction engine includes a load address prediction table containing a plurality of load address prediction table entries. Each of the load address prediction table entries corresponds to a detected load instruction, and includes a predictor tag field and a memory address field. Upon receiving a load instruction, the load address prediction engine generates a table index and a predictor tag based on both an identifier for the load instruction (such as a program counter) and a load path history for the load instruction. The table index is used to look up a corresponding load address prediction table entry in the load address prediction table. If the predictor tag matches the predictor tag field of the load address prediction table entry corresponding to the table index, the memory address field of the load address prediction table entry is provided as a predicted memory address for the load instruction.
[0008] In this manner, the load address prediction engine may improve processor performance by providing memory address predictions for load instructions. In some aspects, the predicted memory address may be used to access a system data cache. If a cache hit results on the system data cache, a data value for the predicted memory address may be read from the system data cache and used to perform data value prediction, thus resulting in improved processor performance. Some aspects may also provide that the predicted memory address may be provided to a back-end instruction pipeline of the execution pipeline of the processor to assist in memory disambiguation. Accordingly, in this manner, the load address prediction engine may enhance the effectiveness of conventional processor optimizations. Some aspects may also provide that each load address prediction table entry in the load address prediction table includes a cache way indicator that represents a cache way in which a memory block corresponding to the predicted memory address is expected to be present within the system data cache. By providing a cache way indicator, the need to access all cache ways within the system data cache is avoided, thus reducing system power consumption.
[0009] In some aspects, each load address prediction table entry in the load address prediction table may also include a confidence value field.
The confidence value field may represent a level of confidence, relative to a confidence threshold value field provided by the load address prediction engine, that the predicted memory address is correct for the corresponding load instruction. The confidence value field may be incremented when a predicted memory address is confirmed as correct for a load instruction. The confidence value field may be decremented if a predicted memory address is determined to be incorrect for a load instruction, or if a miss on the load address prediction table occurs and the confidence value for an existing load address prediction table entry for the load instruction is high.
[0010] In another aspect, a load address prediction engine is provided. The load address prediction engine comprises a load address prediction table configured to store a plurality of load address prediction table entries, each comprising a predictor tag field and a memory address field. The load address prediction engine is configured to receive a load instruction. The load address prediction engine is further configured to generate a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction. The load address prediction engine is also configured to determine whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries. The load address prediction engine is additionally configured to, responsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of the plurality of load address prediction table entries, provide a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction.
[0011] In another aspect, a load address prediction engine of a processor is provided. The load address prediction engine comprises a means for receiving a load instruction. The load address prediction engine further comprises a means for generating a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction. The load address prediction engine also comprises a means for determining whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table of the load address prediction engine. The load address prediction engine additionally comprises a means for providing a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction, responsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table of the load address prediction engine.
[0012] In another aspect, a method for providing load address predictions is provided. The method comprises receiving, by a load address prediction engine of a processor, a load instruction. The method further comprises generating a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction.
The method also comprises determining whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table of the load address prediction engine. The method additionally comprises, responsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry of the plurality of load address prediction table entries corresponding to the table index, providing a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction.
[0013] In another aspect, a non-transitory computer-readable medium is provided, having stored thereon computer-executable instructions. When executed by a processor, the computer-executable instructions cause the processor to receive a load instruction. The computer-executable instructions further cause the processor to generate a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction. The computer-executable instructions also cause the processor to determine whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table. The computer-executable instructions additionally cause the processor to, responsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry of the plurality of load address prediction table entries corresponding to the table index, provide a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction.
BRIEF DESCRIPTION OF THE FIGURES
[0014] Figure 1 is a block diagram of an exemplary processor including a load address prediction engine for providing load address predictions;
[0015] Figure 2 is a block diagram illustrating contents of an exemplary load address prediction table of the load address prediction engine of Figure 1;
[0016] Figures 3A-3C are diagrams illustrating exemplary communications flows for the load address prediction engine of Figure 1 for detecting incoming load instructions and providing load address predictions to enable data value prediction, data value prefetching, and/or memory disambiguation;
[0017] Figures 4A-4C are flowcharts illustrating an exemplary process for detecting incoming load instructions and providing load address predictions, and for training a load address prediction table by the load address prediction engine of Figure 1; and
[0018] Figure 5 is a block diagram of an exemplary processor-based system that can include the load address prediction engine of Figure 1.
DETAILED DESCRIPTION
[0019] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
[0020] Aspects disclosed in the detailed description include providing load address predictions using address prediction tables based on load path history in processor-based systems.
A load address prediction engine is provided for predicting a memory address that may be referenced by a given load instruction detected in a front-end instruction pipeline of an execution pipeline of a processor. A table index and a predictor tag are generated by the load address prediction engine based on both an identifier for the load instruction (such as a program counter) and a load path history for the load instruction. The load address prediction engine then determines whether a load address prediction table entry corresponding to the table index in a load address prediction table contains the predictor tag. If so, a memory address field of the load address prediction table entry is provided as a predicted memory address for the load instruction. As discussed in greater detail below, the predicted memory address for the load instruction may be utilized to enhance the effectiveness of processor optimizations such as data value prediction, data value prefetching, and memory disambiguation. Some aspects may provide further performance optimizations using a confidence value field in the load address prediction table entries of the load address prediction table. In some aspects, power optimizations may also be realized through the use of an optional cache way field in the load address prediction table entries of the load address prediction table.
[0021] In this regard, Figure 1 is a block diagram of an exemplary processor 100 including a load address prediction engine 102 providing load address predictions, as disclosed herein. The processor 100 includes a memory interface 104, through which a system memory 106 may be accessed. In some aspects, the system memory 106 may comprise double data rate dynamic random access memory (DDR DRAM), as a non-limiting example. The processor 100 further includes an instruction cache 108, and a system data cache 110. The system data cache 110, in some aspects, may comprise a Level 1 (L1) data cache. The processor 100 may encompass any one of known digital logic elements, semiconductor circuits, processing cores, and/or memory structures, among other elements, or combinations thereof. Aspects described herein are not restricted to any particular arrangement of elements, and the disclosed techniques may be easily extended to various structures and layouts on semiconductor dies or packages.
[0022] The processor 100 further comprises an execution pipeline 112, which may be subdivided into a front-end instruction pipeline 114 and a back-end instruction pipeline 116. As used herein, "front-end instruction pipeline 114" may refer to pipeline stages that are conventionally located at the "beginning" of the execution pipeline 112, and that provide fetching, decoding, and/or instruction queueing functionality. In this regard, the front-end instruction pipeline 114 of Figure 1 includes one or more fetch/decode pipeline stages 118 and one or more instruction queue stages 120. As non-limiting examples, the one or more fetch/decode pipeline stages 118 may include F1, F2, and/or F3 fetch/decode stages (not shown). "Back-end instruction pipeline 116" refers herein to subsequent pipeline stages of the execution pipeline 112 for issuing instructions for execution, for carrying out the actual execution of instructions, and/or for loading and/or storing data required by or produced by instruction execution.
In the example of Figure 1, the back-end instruction pipeline 116 comprises a rename stage 122, a register access stage 124, a reservation stage 126, one or more dispatch stages 128, and one or more execution stages 130. It is to be understood that the stages 118, 120 of the front-end instruction pipeline 114 and the stages 122, 124, 126, 128, 130 of the back-end instruction pipeline 116 shown in Figure 1 are provided for illustrative purposes only, and that other aspects of the processor 100 may contain additional or fewer pipeline stages than illustrated herein.
[0023] The processor 100 additionally includes a register file 132, which provides physical storage for a plurality of registers 134(0)-134(X). In some aspects, the registers 134(0)-134(X) may comprise one or more general purpose registers (GPRs), a program counter (not shown), and/or a link register (not shown). During execution of computer programs by the processor 100, the registers 134(0)-134(X) may be mapped to one or more architectural registers 136 using a register map table 138.
[0024] In exemplary operation, the front-end instruction pipeline 114 of the execution pipeline 112 fetches program instructions (not shown) from the instruction cache 108. Program instructions may be further decoded by the one or more fetch/decode pipeline stages 118 of the front-end instruction pipeline 114, and passed to the one or more instruction queue stages 120 pending issuance to the back-end instruction pipeline 116. After the program instructions are issued to the back-end instruction pipeline 116, stages of the back-end instruction pipeline 116 (e.g., the execution stage(s) 130) then execute the issued program instructions, and retire the executed program instructions.
[0025] As noted above, one important function of the processor 100 is to prevent hazards by ensuring satisfaction of data dependencies among program instructions, particularly load instructions. Because variations in data access times for load instructions may negatively impact execution latency of such load instructions, conventional processors have provided optimizations such as data prefetching, data value prediction, and memory disambiguation in order to speed up execution of load instructions. However, it may be desirable to provide additional mechanisms that may further improve these optimizations.
[0026] In this regard, the processor 100 includes the load address prediction engine 102 to provide load address predictions for load instructions. While the load address prediction engine 102 is illustrated as an element separate from the front-end instruction pipeline 114 and the back-end instruction pipeline 116 for the sake of clarity, it is to be understood that the load address prediction engine 102 may be integrated into one or more of the stages 118, 120 of the front-end instruction pipeline 114 and/or one or more of the stages 122, 124, 126, 128, 130 of the back-end instruction pipeline 116. The load address prediction engine 102 comprises a load address prediction table 140, which contains one or more load address prediction table entries (not shown) for storing predicted memory addresses that may be the target of detected load instructions. As indicated by arrows 142 and 144, the load address prediction engine 102 is communicatively coupled to the front-end instruction pipeline 114 and the back-end instruction pipeline 116, respectively, of the execution pipeline 112.
Similarly, the load address prediction engine 102 is communicatively coupled to the system data cache 110, as indicated by bidirectional arrow 146.
[0027] In exemplary operation, the load address prediction engine 102 receives an incoming load instruction (not shown) from the front-end instruction pipeline 114. The load address prediction engine 102 generates a table index (not shown) and a predictor tag (not shown) based on an identifier (e.g., a program counter) for the load instruction. The table index and the predictor tag for the load instruction are also based on a load path history, which represents a previous sequence of load instructions that led to the current load instruction. Incorporating the load path history into the table index and the predictor tag for the load instruction provides additional history context for the load instruction, which may result in more unique values generated for a given load instruction. As a non-limiting example, the load path history may be incorporated into the table index and the predictor tag in some aspects by generating a hash of a plurality of low order bits of a program counter of the load instruction itself, along with a plurality of bits of program counters of recent instructions (e.g., one or more most recent branch instructions) preceding the load instruction. The table index and the predictor tag may then be derived from the resulting hash value.
[0028] The table index is used by the load address prediction engine 102 to access a load address prediction table entry within the load address prediction table 140. The predictor tag generated by the load address prediction engine 102 is then compared with the content of the predictor tag value of the load address prediction table entry corresponding to the table index. If the predictor tag matches the predictor tag value of the load address prediction table entry, a memory address value (not shown) is read from the load address prediction table entry and provided by the load address prediction engine 102 as a predicted memory address for the load instruction. The predicted memory address may then be used to facilitate load instruction optimizations such as data value prediction, data prefetching, and/or memory disambiguation, as non-limiting examples. Operations of exemplary aspects of the load address prediction engine 102 in facilitating load instruction optimizations are discussed in greater detail below with respect to Figures 3A-3C.
[0029] To illustrate an exemplary load address prediction table 200 that may correspond to the load address prediction table 140 of Figure 1 in some aspects, Figure 2 is provided. Elements of Figure 1 are referenced for the sake of clarity in describing Figure 2. As seen in Figure 2, the load address prediction engine 102 provides a load address prediction table 200 that includes multiple load address prediction table entries 202(0)-202(Y). Each of the load address prediction table entries 202(0)-202(Y) may be associated with a load instruction (not shown) detected by the load address prediction engine 102 in the front-end instruction pipeline 114 of Figure 1.
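For illustration only, the hash-based derivation of the table index and predictor tag described in paragraphs [0027] and [0028] might be modeled in C++ along the lines of the following sketch. The rotate-and-XOR hash, the eight-bit index, and the ten-bit tag width are assumptions made for the sketch; the disclosure does not mandate any particular hash function or field sizes.

    #include <cstdint>

    constexpr unsigned kIndexBits = 8;   // 256-entry table (assumed size)
    constexpr unsigned kTagBits   = 10;  // predictor tag width (assumed)

    // Fold the low-order bits of the load's program counter together with the
    // program counters of recent preceding instructions (the load path history).
    uint32_t HashLoadPath(uint64_t load_pc, const uint64_t* history_pcs,
                          unsigned history_len) {
        uint32_t hash = static_cast<uint32_t>(load_pc);
        for (unsigned i = 0; i < history_len; ++i) {
            // Rotate before each XOR so the same PCs reached in a different
            // order produce a different hash, preserving path information.
            hash = (hash << 5) | (hash >> 27);
            hash ^= static_cast<uint32_t>(history_pcs[i]);
        }
        return hash;
    }

    // The table index and predictor tag come from disjoint bit ranges of the
    // hash, so one hash computation yields both values.
    uint32_t TableIndex(uint32_t hash) {
        return hash & ((1u << kIndexBits) - 1);
    }
    uint32_t PredictorTag(uint32_t hash) {
        return (hash >> kIndexBits) & ((1u << kTagBits) - 1);
    }

Where a branch direction history or branch path history is also used, those bits would simply be additional inputs folded into the same hash.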
According to some aspects, in order to eliminate or reduce any aliasing issues with respect to the load address prediction table entries 202(0)-202(Y), the load address prediction table 200 may comprise a direct-mapped tagged table.
[0030] Each of the load address prediction table entries 202(0)-202(Y) includes a predictor tag field 204, which stores a predictor tag (not shown) generated for the corresponding load instruction by the load address prediction engine 102. As noted above, the contents of each predictor tag field 204 may be generated by the load address prediction engine 102 based on an identifier for the load instruction (such as a PC) in combination with a load path history for the load instruction. In some aspects, the predictor tag may further incorporate a branch direction history (not shown) and/or a branch path history (not shown) to provide further historical context for the corresponding load instruction.
[0031] Each of the load address prediction table entries 202(0)-202(Y) also includes a memory address field 206. The memory address field 206 is populated during training of the load address prediction table 200, and represents a memory address that was previously referenced by the load instruction corresponding to the load address prediction table entry 202(0)-202(Y). Upon a hit in the load address prediction table 200, the contents of the memory address field 206 may be provided by the load address prediction engine 102 as a predicted memory address for the load instruction for data value prediction, data value prefetching, and/or memory disambiguation optimization procedures.
[0032] To provide further performance optimizations, each of the load address prediction table entries 202(0)-202(Y) of the load address prediction table 200 in some aspects may also provide a confidence value field 208. The load address prediction engine 102 may further provide a confidence threshold value field 210 that is preset to indicate a minimum confidence threshold. The confidence value field 208 for each of the load address prediction table entries 202(0)-202(Y) may be compared to the confidence threshold value field 210 to determine if the load address prediction table entry 202(0)-202(Y) may be considered sufficiently reliable for load address prediction. In this manner, the confidence value field 208, together with the confidence threshold value field 210, may be used as a saturating counter to indicate a confidence level in the predicted memory address for the load instruction. As a non-limiting example, upon initialization of one of the load address prediction table entries 202(0)-202(Y) such as the load address prediction table entry 202(0), the confidence value field 208 may be set to zero (0). Upon subsequent hits, the confidence value field 208 may be incremented, but the predicted memory address indicated by the memory address field 206 may not be provided until the confidence value field 208 exceeds the confidence threshold value field 210. Conversely, if a predicted memory address is provided for a load instruction but is subsequently determined to be mispredicted, the confidence value field 208 may be decremented or reset to zero (0).
[0033] Some aspects of the load address prediction table 200 may provide additional power optimizations by including an optional cache way field 212 in each of the load address prediction table entries 202(0)-202(Y).
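Collecting the fields described in paragraphs [0030] through [0033], one plausible layout for a table entry 202 and the direct-mapped table 200 is sketched below in C++. The field widths and the 256-entry size are purely illustrative assumptions, chosen to match the eight-bit table index of the earlier sketch.

    #include <array>
    #include <cstdint>

    struct LoadAddressPredictionEntry {
        uint32_t predictor_tag;   // predictor tag field 204
        uint64_t memory_address;  // memory address field 206
        uint8_t  confidence;      // confidence value field 208 (saturating counter)
        uint8_t  cache_way;       // optional cache way field 212
    };

    // A direct-mapped, tagged table, per paragraph [0029]; the size is an
    // assumption for this sketch.
    using LoadAddressPredictionTable =
        std::array<LoadAddressPredictionEntry, 256>;

    // Assumed minimum confidence (confidence threshold value field 210).
    constexpr uint8_t kConfidenceThreshold = 3;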
The cache way field 212 may indicate a way within the system data cache 110 in which data corresponding to the memory address field 206 is located. In the event that a predicted memory address from the memory address field 206 is used for data value prediction, the cache way field 212 may be provided to more efficiently retrieve data for the predicted memory address from a specific way within the system data cache 110, rather than requiring multiple ways to be read within the system data cache 110.
[0034] It is to be understood that some aspects may provide that the load address prediction table entries 202(0)-202(Y) of the load address prediction table 200 may include other fields in addition to the fields 204, 206, 208, and 212 illustrated in Figure 2. It is to be further understood that the load address prediction table 200 in some aspects may be implemented as a cache configured according to associativity and replacement policies known in the art. In the example of Figure 2, the load address prediction table 200 is illustrated as a single data structure. However, in some aspects, the load address prediction table 200 may also comprise more than one (1) data structure or cache.
[0035] To illustrate exemplary communications flows for the load address prediction engine 102 of Figure 1 for detecting incoming load instructions and providing load address predictions, Figures 3A-3C are provided. Figure 3A illustrates exemplary communications flows for detecting a load instruction and generating a table index and a predictor tag for the load instruction. Figure 3B illustrates exemplary communications flows for performing a lookup in the load address prediction table 140 of Figure 1, and providing a data value prediction based on a hit in the load address prediction table 140. Figure 3C illustrates exemplary communications flows for performing a data value prefetch based on a miss in the system data cache 110, and/or providing a predicted memory address for memory disambiguation. It is to be understood that, for purposes of illustration, it is assumed that the load address prediction table 140 of Figures 3A-3C has already undergone training, as described in greater detail below with respect to Figure 4C. For the sake of clarity, elements of Figures 1 and 2 are referenced in describing Figures 3A-3C.
[0036] In Figure 3A, the load address prediction engine 102 receives a load instruction 300 from the front-end instruction pipeline 114, as indicated by arrow 302. The load instruction 300 includes an identifier 304, which may comprise a program counter, as a non-limiting example. The load instruction 300 also includes a reference referred to as an "actual memory address 306," which is the memory address that will eventually be computed as the address that the load instruction 300 intends to access. Because the actual memory address 306 may not be definitively determined until much later in the execution pipeline 112, the load address prediction engine 102 is used to generate a predicted memory address (not shown) in an attempt to optimize system performance and power consumption.
[0037] After receiving the load instruction 300, the load address prediction engine 102 generates a table index 308 and a predictor tag 310. The load address prediction engine 102 bases the table index 308 and the predictor tag 310 on the identifier 304 and the load path history 312 of the load instruction 300, as indicated by arrows 314, 316 and arrows 318, 320.
In some aspects, additional historical context may be incorporated into the table index 308 and the predictor tag 310 through the use of a branch direction history 322 (as indicated by arrows 324 and 326) and/or a branch path history 328 (as indicated by arrows 330 and 332). Once the table index 308 and the predictor tag 310 have been generated, operations continue with Figure 3B.
[0038] In Figure 3B, the load address prediction engine 102 uses the table index 308 as an index into the load address prediction table 140, as indicated by arrow 334. In this example, the table index 308 corresponds to the load address prediction table entry 202(0). The load address prediction engine 102 then compares the predictor tag 310 with the contents of the predictor tag field 204 of the load address prediction table entry 202(0), as indicated by arrow 336. If the predictor tag 310 does not match the contents of the predictor tag field 204 (i.e., a miss on the load address prediction table 140), then the load address prediction table 140 does not contain a predicted memory address for the load instruction 300, and processing of the load instruction 300 proceeds conventionally. As described below with respect to Figure 4C, after execution of the load instruction 300, the load address prediction table 140 may be updated based on the results of executing the load instruction 300.
[0039] If the predictor tag 310 does match the contents of the predictor tag field 204, the load address prediction engine 102 uses the memory address field 206 of the load address prediction table entry 202(0) to provide a predicted memory address 338 for the load instruction 300. As noted above, in aspects in which the load address prediction table 140 employs a confidence value field 208 in conjunction with the confidence threshold value field 210, the load address prediction engine 102 may provide the predicted memory address 338 only if the confidence value field 208 exceeds the confidence threshold value field 210.
[0040] In some aspects, the predicted memory address 338 may be used to determine whether data for the predicted memory address 338 exists in the system data cache 110, as indicated by arrow 340. If a hit occurs on the system data cache 110 for the predicted memory address 338, retrieved data 342 corresponding to the predicted memory address 338 is read from the system data cache 110. The retrieved data 342 is provided to the front-end instruction pipeline 114 as a data value prediction, as indicated by arrow 344.
[0041] Referring now to Figure 3C, if a miss occurs in the system data cache 110 for the predicted memory address 338, the load address prediction engine 102 may facilitate a data value prefetch for the predicted memory address 338. To do so, prefetch data 346 corresponding to the predicted memory address 338 may be read from the system memory 106, as indicated by arrow 348. The prefetch data 346 is then stored in conjunction with the predicted memory address 338 in the system data cache 110, as indicated by arrows 350 and 352. In this manner, the prefetch data 346 may be available in the system data cache 110 in the event of a future hit on the predicted memory address 338 in the system data cache 110.
[0042] In some aspects, the predicted memory address 338 may also be provided to the back-end instruction pipeline 116 to be used with existing mechanisms to improve memory disambiguation.
In memory disambiguation, the computed address of a load instruction, such as the load instruction 300, is checked against the computed addresses of older store instructions (not shown). If the address of the load instruction 300 matches the address of a prior store instruction, the load instruction 300 must wait for the store instruction's data to become available for use, instead of probing the system data cache 110. As the load address prediction table 140 is trained using the addresses of the load instruction 300, it may be used to help predict a load or store address before a load or store address is computed. This, in turn, may enable more efficient execution of load instructions.
[0043] Figures 4A-4C are flowcharts illustrating an exemplary process for detecting incoming load instructions and providing load address predictions and for training the load address prediction table 140 by the load address prediction engine 102 of Figure 1. For the sake of clarity, elements of Figures 1, 2, and 3A-3C are referenced in describing Figures 4A-4C. Operations begin in Figure 4A with the load address prediction engine 102 of the processor 100 receiving the load instruction 300 (e.g., from the front-end instruction pipeline 114 of the execution pipeline 112 of the processor 100) (block 400). In this regard, the load address prediction engine 102 may be referred to herein as "a means for receiving a load instruction." The load address prediction engine 102 generates the table index 308 and the predictor tag 310 based on the identifier 304 and the load path history 312 for the load instruction 300 (block 402). Accordingly, the load address prediction engine 102 may be referred to herein as "a means for generating a table index and a predictor tag based on an identifier and a load path history indicator for the load instruction." In some aspects, operations of block 402 for generating the table index 308 and the predictor tag 310 may be further based on the branch direction history 322 and/or the branch path history 328 of the load instruction 300 (block 404).
[0044] The load address prediction engine 102 then determines whether the predictor tag 310 is present in a predictor tag field 204 of a load address prediction table entry 202(0), corresponding to the table index 308, of the plurality of load address prediction table entries 202(0)-202(Y) of the load address prediction table 140 of the load address prediction engine 102 (block 406). The load address prediction engine 102 may thus be referred to herein as "a means for determining whether the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table of the load address prediction engine." If the predictor tag 310 is not present in the predictor tag field 204 of the load address prediction table entry 202(0) corresponding to the table index 308, processing of the load instruction 300 continues (block 408).
Processing then resumes at block 410 of Figure 4C.[0045] However, if the load address prediction engine 102 determines at decision block 406 that the predictor tag 310 is present in the predictor tag field 204 of the load address prediction table entry 202(0) corresponding to the table index 308, the load address prediction engine 102 in some aspects may further determine whether the confidence value field 208 of the load address prediction table entry 202(0) exceeds the confidence threshold value field 210 of the load address prediction engine 102 (block 412). If so (or if the aspect of the load address prediction engine 102 does not utilize the confidence value field 208 and the confidence threshold value field 210), processing resumes at block 414 of Figure 4B. If the load address prediction engine 102 determines at block 412 that the confidence value field 208 does not exceed the confidence threshold value field 210, processing of the load instruction 300 continues (block 408). Processing then resumes at block 410 of Figure 4C.[0046] Referring now to Figure 4B, the load address prediction engine 102 provides the memory address from the memory address field 206 of the load address prediction table entry 202(0) as the predicted memory address 338 for the load instruction 300 (block 414). In this regard, the load address prediction engine 102 may be referred to herein as "a means for providing a memory address from a memory address field of the load address prediction table entry as a predicted memory address for the load instruction, responsive to determining that the predictor tag is present in a predictor tag field of a load address prediction table entry, corresponding to the table index, of a plurality of load address prediction table entries of a load address prediction table of the load address prediction engine." Some aspects may provide that the load address prediction engine 102 next determines whether the predicted memory address 338 for the load instruction 300 is present in the system data cache 110 of the processor 100 (block 416). In aspects in which the load address prediction table entry 202(0) includes an optional cache way field 212, the determination made in decision block 416 may be based in part on the cache way field 212 of the load address prediction table entry 202(0) corresponding to the table index 308. If the predicted memory address 338 is present in the system data cache 110, the load address prediction engine 102 retrieves data 342 for the predicted memory address 338 from the system data cache 110 (block 418). The retrieved data 342 is then provided as a data value prediction to the back-end instruction pipeline 116 of the execution pipeline 112 of the processor 100 (block 420). Processing may then resume at block 422 of Figure 4B.[0047] If the load address prediction engine 102 determines at decision block 416 of Figure 4B that the predicted memory address 338 for the load instruction 300 is not present in the system data cache 110 of the processor 100, the load address prediction engine 102 may prefetch data 346 corresponding to the predicted memory address 338 from the system memory 106 of the processor 100 (block 424). The prefetched data 346 is then stored in the system data cache 110 of the processor 100 (block 426). Processing then resumes at block 422 of Figure 4B.
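The probe-then-prefetch behavior of blocks 416-426 can be sketched as follows. DataCache and system_memory are hypothetical stand-ins for the system data cache 110 and the system memory 106; this is an illustrative model, not the disclosed implementation.

from typing import Dict, Optional

class DataCache:
    # Toy address-to-data map standing in for the system data cache 110.
    def __init__(self) -> None:
        self._lines: Dict[int, bytes] = {}

    def probe(self, address: int) -> Optional[bytes]:
        return self._lines.get(address)   # None models a cache miss

    def fill(self, address: int, data: bytes) -> None:
        self._lines[address] = data

def predict_or_prefetch(cache: DataCache,
                        system_memory: Dict[int, bytes],
                        predicted_addr: int) -> Optional[bytes]:
    data = cache.probe(predicted_addr)            # block 416: probe the cache
    if data is not None:
        return data                               # block 420: data value prediction
    prefetched = system_memory.get(predicted_addr)
    if prefetched is not None:
        cache.fill(predicted_addr, prefetched)    # blocks 424-426: install for a future hit
    return None                                   # no value prediction this time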
[0048] With continuing reference to Figure 4B, the load address prediction engine 102 in some aspects may also provide the memory address 338 of the memory address field 206 of the load address prediction table entry 202(0) as the predicted memory address 338 for the load instruction 300 to the back-end instruction pipeline 116 of the execution pipeline 112 of the processor 100 for memory disambiguation (block 422). Processing then continues at block 410 of Figure 4C.[0049] Turning now to Figure 4C, the load address prediction engine 102 carries out operations for training the load address prediction table 140 after execution of the load instruction 300. The load address prediction engine 102 first determines, subsequent to execution of the load instruction 300, whether the predictor tag 310 is present in the predictor tag field 204 of the load address prediction table entry 202(0), corresponding to the table index 308, of the plurality of load address prediction table entries 202(0)-202(Y) (block 410). If so, then a load address prediction table entry 202(0) exists for the load instruction 300, and should be updated based on the results of the execution of the load instruction 300.[0050] Accordingly, the load address prediction engine 102 next determines whether the actual memory address 306 (i.e., the computed address) of the load instruction 300 matches the predicted memory address 338 for the load instruction 300 (block 428). In some aspects, the load address prediction engine 102 may also compare the way of the actual memory address 306 with the cache way field 212. If the actual memory address 306 matches the predicted memory address 338 (and, optionally, if the cache way field 212 is correct), the load address prediction engine 102 may increment the confidence value field 208 of the load address prediction table entry 202(0) corresponding to the table index 308 (block 430). However, if the actual memory address 306 does not match the predicted memory address 338, the load address prediction engine 102 resets the confidence value field 208 of the load address prediction table entry 202(0) corresponding to the table index 308 (block 432). Note that in the unlikely event that the actual memory address 306 matches the predicted memory address 338 but the cache way field 212 is incorrect, the load address prediction engine 102 updates the cache way field 212.[0051] If the load address prediction engine 102 determines at decision block 410 that the predictor tag 310 is not present in the predictor tag field 204 of the load address prediction table entry 202(0), corresponding to the table index 308, of the plurality of load address prediction table entries 202(0)-202(Y), then a load address prediction table entry 202(0) does not appear to exist for the load instruction 300. The load address prediction engine 102 next determines whether the confidence value field 208 of the load address prediction table entry 202(0) corresponding to the table index 308 is nonzero (block 434). If so, the mismatch with the predictor tag 310 may be a transient condition, so the load address prediction engine 102 decrements the confidence value field 208 of the load address prediction table entry 202(0) corresponding to the table index 308 (block 436).
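Pulling the Figure 4C training rules together (including the initialization at block 438, described immediately below), an illustrative trainer over the hypothetical PredictionEntry sketched earlier might read as follows; the saturation limit is an assumption, not a disclosed parameter.

CONFIDENCE_MAX = 7   # assumed saturation point for the confidence counter

def train(table, table_index: int, predictor_tag: int,
          actual_addr: int, predicted_addr: int, actual_way: int = -1) -> None:
    entry = table[table_index]
    if entry.predictor_tag == predictor_tag:              # block 410: tag present
        if actual_addr == predicted_addr:                 # block 428: correct prediction
            entry.confidence = min(entry.confidence + 1, CONFIDENCE_MAX)   # block 430
            if actual_way >= 0 and entry.cache_way != actual_way:
                entry.cache_way = actual_way              # repair a stale way hint
        else:
            entry.confidence = 0                          # block 432: reset on mispredict
    elif entry.confidence > 0:
        entry.confidence -= 1                             # block 436: possibly transient miss
    else:                                                 # block 438 (described next)
        entry.predictor_tag = predictor_tag
        entry.memory_address = actual_addr
        entry.confidence = 0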
If the load address prediction engine 102 determines at decision block 434 that the confidence value field 208 of the load address prediction table entry 202(0) corresponding to the table index 308 is zero (0), the load address prediction engine 102 initializes the load address prediction table entry 202(0) corresponding to the table index 308 using the predictor tag 310 and the actual memory address 306 for the load instruction 300 (block 438).[0052] Providing load address prediction using address prediction tables based on load path history in processor-based systems according to aspects disclosed herein may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet, a phablet, a server, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and an automobile.[0053] In this regard, Figure 5 illustrates an example of a processor-based system 500 that can employ the load address prediction engine (LAPE) 102 of Figure 1. In this example, the processor-based system 500 may correspond to the processor 100 of Figure 1, and includes one or more CPUs 502, each including one or more processors 504. The CPU(s) 502 may have cache memory 506 coupled to the processor(s) 504 for rapid access to temporarily stored data. The CPU(s) 502 is coupled to a system bus 508, which can intercouple devices included in the processor-based system 500. As is well known, the CPU(s) 502 communicates with these other devices by exchanging address, control, and data information over the system bus 508. For example, the CPU(s) 502 can communicate bus transaction requests to a memory controller 510 as an example of a slave device. Although not illustrated in Figure 5, multiple system buses 508 could be provided.[0054] Other devices can be connected to the system bus 508. As illustrated in Figure 5, these devices can include a memory system 512, one or more input devices 514, one or more output devices 516, one or more network interface devices 518, and one or more display controllers 520, as examples. The input device(s) 514 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The output device(s) 516 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The network interface device(s) 518 can be any devices configured to allow exchange of data to and from a network 522. The network 522 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), BLUETOOTH (BT), and the Internet. The network interface device(s) 518 can be configured to support any type of communications protocol desired. The memory system 512 can include one or more memory units 524(0)-524(N).[0055] The CPU(s) 502 may also be configured to access the display controller(s) 520 over the system bus 508 to control information sent to one or more displays 526.
The display controller(s) 520 sends information to the display(s) 526 to be displayed via one or more video processors 528, which process the information to be displayed into a format suitable for the display(s) 526. The display(s) 526 can include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, etc.[0056] The devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0057] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[0058] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0059] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure.
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein. |
Various embodiments of methods and systems for mode-based reallocation of workloads in a portable computing device ("PCD") that contains a heterogeneous, multi-processor system on a chip ("SoC") are disclosed. Because individual processing components in a heterogeneous, multi-processor SoC may exhibit different performance capabilities or strengths, and because more than one of the processing components may be capable of processing a given block of code, mode-based reallocation systems and methodologies can be leveraged to optimize quality of service ("QoS") by allocating workloads in real time, or near real time, to the processing components most capable of processing the block of code in a manner that meets the performance goals of an operational mode. Operational modes may be determined by the recognition of one or more mode-decision conditions in the PCD. |
CLAIMS What is claimed is: 1. A method for mode-based workload reallocation in a portable computing device ("PCD") having a heterogeneous, multi-processor system on a chip ("SoC"), the method comprising: determining the performance capabilities of each of a plurality of individual processing components in the heterogeneous, multi-processor SoC, wherein the performance capabilities comprise a maximum processing frequency and a quiescent supply current; recognizing one or more mode-decision conditions present in the PCD, wherein a mode-decision condition is associated with either a high performance processing ("HPP") mode or a power saving ("PS") mode; based on the one or more mode-decision conditions, selecting either the HPP mode or the PS mode; and based on the selected mode, reallocating a workload across the processing components based on the performance capabilities of each, wherein: if the selected mode is the HPP mode, reallocating is based on the maximum processing frequency; and if the selected mode is the PS mode, reallocating is based on the quiescent supply current. 2. The method of claim 1, wherein a recognized mode-decision condition comprises a user setting. 3. The method of claim 1, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a user interface response time. 4. The method of claim 3, wherein the user interface response time is greater than 100 milliseconds. 5. The method of claim 1, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a connection to a battery charger. 6. The method of claim 1, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a certain use case. 7. The method of claim 1, wherein a recognized mode-decision condition is associated with the PS mode and comprises a battery capacity. 8. The method of claim 7, wherein the battery capacity is less than ten percent of a maximum battery capacity. 9. The method of claim 1, wherein a recognized mode-decision condition is associated with the PS mode and comprises a temperature reading that exceeds a predetermined threshold. 10. The method of claim 1, wherein a recognized mode-decision condition is associated with the PS mode and comprises a total processing capacity utilization of the processing components that is below a predetermined threshold. 11. 
A computer system for mode-based workload reallocation in a portable computing device ("PCD") having a heterogeneous, multi-processor system on a chip ("SoC"), the system comprising: a monitor module configured to: recognize one or more mode-decision conditions present in the PCD, wherein a mode-decision condition is associated with either a high performance processing ("HPP") mode or a power saving ("PS") mode; and a modal allocation manager module configured to: determine the performance capabilities of each of a plurality of individual processing components in the heterogeneous, multi-processor SoC, wherein the performance capabilities comprise a maximum processing frequency and a quiescent supply current; based on the one or more mode-decision conditions, select either the HPP mode or the PS mode; and based on the selected mode, reallocate a workload across the processing components based on the performance capabilities of each, wherein: if the selected mode is the HPP mode, the workload is reallocated based on the maximum processing frequency; and if the selected mode is the PS mode, the workload is reallocated based on the quiescent supply current. 12. The computer system of claim 11, wherein a recognized mode-decision condition comprises a user setting. 13. The computer system of claim 11, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a user interface response time. 14. The computer system of claim 13, wherein the user interface response time is greater than 100 milliseconds. 15. The computer system of claim 11, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a connection to a battery charger. 16. The computer system of claim 11, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a certain use case. 17. The computer system of claim 11, wherein a recognized mode-decision condition is associated with the PS mode and comprises a battery capacity. 18. The computer system of claim 17, wherein the battery capacity is less than ten percent of a maximum battery capacity. 19. The computer system of claim 11, wherein a recognized mode-decision condition is associated with the PS mode and comprises a temperature reading that exceeds a predetermined threshold. 20. The computer system of claim 11, wherein a recognized mode-decision condition is associated with the PS mode and comprises a total processing capacity utilization of the processing components that is below a predetermined threshold. 21. 
A computer system for mode-based workload reallocation in a portable computing device ("PCD") having a heterogeneous, multi-processor system on a chip ("SoC"), the system comprising: means for determining the performance capabilities of each of a plurality of individual processing components in the heterogeneous, multi-processor SoC, wherein the performance capabilities comprise a maximum processing frequency and a quiescent supply current; means for recognizing one or more mode-decision conditions present in the PCD, wherein a mode-decision condition is associated with either a high performance processing ("HPP") mode or a power saving ("PS") mode; means for selecting either the HPP mode or the PS mode based on the one or more mode-decision conditions; and means for reallocating a workload across the processing components based on the performance capabilities of each based on the selected mode, wherein: if the selected mode is the HPP mode, reallocating is based on the maximum processing frequency; and if the selected mode is the PS mode, reallocating is based on the quiescent supply current. 22. The computer system of claim 21, wherein a recognized mode-decision condition comprises a user setting. 23. The computer system of claim 21, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a user interface response time. 24. The computer system of claim 23, wherein the user interface response time is greater than 100 milliseconds. 25. The computer system of claim 21, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a connection to a battery charger. 26. The computer system of claim 21, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a certain use case. 27. The computer system of claim 21, wherein a recognized mode-decision condition is associated with the PS mode and comprises a battery capacity. 28. The computer system of claim 27, wherein the battery capacity is less than ten percent of a maximum battery capacity. 29. The computer system of claim 21, wherein a recognized mode-decision condition is associated with the PS mode and comprises a temperature reading that exceeds a predetermined threshold. 30. The computer system of claim 21, wherein a recognized mode-decision condition is associated with the PS mode and comprises a total processing capacity utilization of the processing components that is below a predetermined threshold. 31. 
A computer program product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for mode-based workload reallocation in a portable computing device ("PCD") having a heterogeneous, multi-processor system on a chip ("SoC"), said method comprising: determining the performance capabilities of each of a plurality of individual processing components in the heterogeneous, multi-processor SoC, wherein the performance capabilities comprise a maximum processing frequency and a quiescent supply current; recognizing one or more mode-decision conditions present in the PCD, wherein a mode-decision condition is associated with either a high performance processing ("HPP") mode or a power saving ("PS") mode; based on the one or more mode-decision conditions, selecting either the HPP mode or the PS mode; and based on the selected mode, reallocating a workload across the processing components based on the performance capabilities of each, wherein: if the selected mode is the HPP mode, reallocating is based on the maximum processing frequency; and if the selected mode is the PS mode, reallocating is based on the quiescent supply current. 32. The computer program product of claim 31, wherein a recognized mode-decision condition comprises a user setting. 33. The computer program product of claim 31, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a user interface response time. 34. The computer program product of claim 33, wherein the user interface response time is greater than 100 milliseconds. 35. The computer program product of claim 31, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a connection to a battery charger. 36. The computer program product of claim 31, wherein a recognized mode-decision condition is associated with the HPP mode and comprises a certain use case. 37. The computer program product of claim 31, wherein a recognized mode-decision condition is associated with the PS mode and comprises a battery capacity. 38. The computer program product of claim 37, wherein the battery capacity is less than ten percent of a maximum battery capacity. 39. The computer program product of claim 31, wherein a recognized mode-decision condition is associated with the PS mode and comprises a temperature reading that exceeds a predetermined threshold. 40. The computer program product of claim 31, wherein a recognized mode-decision condition is associated with the PS mode and comprises a total processing capacity utilization of the processing components that is below a predetermined threshold. |
MODAL WORKLOAD SCHEDULING IN A HETEROGENEOUS MULTI-PROCESSOR SYSTEM ON A CHIP DESCRIPTION OF THE RELATED ART Portable computing devices ("PCDs") are becoming necessities for people on personal and professional levels. These devices may include cellular telephones, portable digital assistants ("PDAs"), portable game consoles, palmtop computers, and other portable electronic devices. One unique aspect of PCDs is that they typically do not have active cooling devices, like fans, which are often found in larger computing devices such as laptop and desktop computers. Consequently, thermal energy generation is often managed in a PCD through the application of various thermal management techniques that may include wilting or shutting down electronics at the expense of processing performance. Thermal management techniques are employed within a PCD in an effort to seek a balance between mitigating thermal energy generation and impacting the quality of service ("QoS") provided by the PCD. When excessive thermal energy generation is not a concern, however, the QoS may be maximized by running processing components within the PCD at a maximum frequency rating. In a PCD that has heterogeneous processing components, the various processing components are not created equal. As such, when thermal energy generation is not a concern in a heterogeneous processor, running all the processing components at a maximum frequency rating that is dictated by the slowest processing component may underutilize the actual processing capacity available in the PCD. Similarly, when conditions in a heterogeneous PCD dictate that power savings are preferable to processing speeds (such as when thermal energy generation is a concern, for example), the assumption that all the processing components are functionally equivalent at a given reduced processing speed may result in workload allocations that consume more power than necessary. Accordingly, what is needed in the art is a method and system for allocating workload in a PCD across heterogeneous processing components to meet performance goals associated with operational modes of the PCD, taking into account known performance characteristics of the individual processing components. SUMMARY OF THE DISCLOSURE [0005] Various embodiments of methods and systems for mode-based workload reallocation in a portable computing device that contains a heterogeneous, multi-processor system on a chip ("SoC") are disclosed. Because individual processing components in a heterogeneous, multi-processor SoC may exhibit different performance capabilities or strengths, and because more than one of the processing components may be capable of processing a given block of code, mode-based reallocation systems and methodologies can be leveraged to optimize quality of service ("QoS") by allocating workloads in real time, or near real time, to the processing components most capable of processing the block of code in a manner that meets the performance goals of an operational mode. [0006] One such method involves determining the performance capabilities of each of a plurality of individual processing components in the heterogeneous, multi-processor SoC. The performance capabilities may include the maximum processing frequency and the quiescent supply current exhibited by each processing component.
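As a rough sketch only, the summarized method might be organized as below. The HPP and PS modes named here are introduced in the next paragraphs, and every identifier (CoreCaps, select_mode, reallocation_order) and every trigger name is a hypothetical illustration, not the claimed implementation.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CoreCaps:
    name: str
    max_freq_mhz: float   # maximum processing frequency
    iddq_ma: float        # quiescent supply current ("IDDq")

HPP, PS = "HPP", "PS"

def select_mode(conditions: Dict[str, bool]) -> str:
    # Illustrative triggers only; reconciliation of conflicting
    # triggers is discussed with FIG. 2 below.
    if conditions.get("thermal_mitigation_active") or conditions.get("battery_low"):
        return PS
    if conditions.get("charger_connected") or conditions.get("slow_ui_response"):
        return HPP
    return PS   # assumed conservative default

def reallocation_order(cores: List[CoreCaps], mode: str) -> List[CoreCaps]:
    # Fill the engines best suited to the selected mode's performance goal first.
    if mode == HPP:
        return sorted(cores, key=lambda c: -c.max_freq_mhz)
    return sorted(cores, key=lambda c: c.iddq_ma)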
Notably, as one of ordinary skill in the art would recognize, those processing components with the relatively higher maximum processing frequencies may be best suited for processing workloads when the PCD is in a high performance processing ("HPP") mode while those processing components exhibiting the relatively lower quiescent supply currents may be best suited for processing workloads when the PCD is in a power saving ("PS") mode. [0007] Indicators of one or more mode-decision conditions in the PCD are monitored. Based on the recognized presence of any one or more of the mode-decision conditions, an operational mode associated with certain performance goals of the PCD is determined. For instance, an indication that a battery charger has been plugged into the PCD, thereby providing an essentially unlimited power source, may trigger an HPP operational mode having an associated performance goal of processing workloads at the fastest speed possible. Similarly, an indication that a battery capacity has fallen below a predetermined threshold, thereby creating a risk that the PCD may lose its power source, may trigger a PS operational mode having an associated performance goal of processing workloads with the least amount of power expenditure. [0008] Based on the operational mode and its associated performance goal(s), an active workload of the processing components may be reallocated across the processing components based on the individual performance capabilities of each. In this way, those processing components that are best positioned to process the workload in a manner that satisfies the performance goals of the operational mode are prioritized for allocation of the workload. BRIEF DESCRIPTION OF THE DRAWINGS In the drawings, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures. FIG. 1 is a graph illustrating the processing capacities and leakage rates associated with exemplary cores 0, 1, 2 and 3 in a given quad core chipset of a portable computing device ("PCD"). FIG. 2 is a chart illustrating exemplary conditions or triggers that may dictate an operational mode of a PCD. FIG. 3 is a functional block diagram illustrating an embodiment of an on-chip system for mode-based workload reallocation in a heterogeneous, multi-core PCD. FIG. 4 is a functional block diagram of an exemplary, non-limiting aspect of a PCD in the form of a wireless telephone for implementing methods and systems for mode-based workload reallocation. FIG. 5A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip illustrated in FIG. 4. FIG. 5B is a schematic diagram illustrating an exemplary software architecture of the PCD of FIG. 4 for supporting mode-based workload reallocation. FIG. 6 is a logical flowchart illustrating an embodiment of a method for mode-based workload reallocation across heterogeneous processing components in the PCD of FIG. 4. FIG. 7 is a logical flowchart illustrating an embodiment of a mode-based workload reallocation sub-routine.
DETAILED DESCRIPTION The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as exclusive, preferred or advantageous over other aspects. In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed. As used in this description, the terms "component," "database," "module," "system," "thermal energy generating component," "processing component," "processing engine," "application processor" and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution and represent exemplary means for providing the functionality and performing the certain steps in the processes or process flows described in this specification. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal). In this description, the terms "central processing unit ("CPU")," "digital signal processor ("DSP")," "chip" and "chipset" are non-limiting examples of processing components that may reside in a PCD and are used interchangeably except when otherwise indicated. Moreover, as distinguished in this description, a CPU, DSP, or a chip or chipset may be comprised of one or more distinct processing components generally referred to herein as "core(s)" and "sub-core(s)." In this description, it will be understood that the terms "thermal" and "thermal energy" may be used in association with a device or component capable of generating or dissipating energy that can be measured in units of "temperature." Consequently, it will further be understood that the term "temperature," with reference to some standard value, envisions any measurement that may be indicative of the relative warmth, or absence of heat, of a "thermal energy" generating device or component. For example, the "temperature" of two components is the same when the two components are in "thermal" equilibrium. In this description, the terms "workload," "process load," "process workload" and "block of code" are used interchangeably and are generally directed toward the processing burden, or percentage of processing burden, that is associated with, or may be assigned to, a given processing component in a given embodiment.
Further to that which is defined above, a "processing component" may be, but is not limited to, a central processing unit, a graphical processing unit, a core, a main core, a sub-core, a processing area, a hardware engine, etc. or any component residing within, or external to, an integrated circuit within a portable computing device. Moreover, to the extent that the terms "thermal load," "thermal distribution," "thermal signature," "thermal processing load" and the like are indicative of workload burdens that may be running on a processing component, one of ordinary skill in the art will acknowledge that use of these "thermal" terms in the present disclosure may be related to process load distributions, workload burdens and power consumption. In this description, the terms "thermal mitigation technique(s)," "thermal policies," "thermal management" and "thermal mitigation measure(s)" are used interchangeably. One of ordinary skill in the art will recognize that the term "DMIPS" represents the number of Dhrystone iterations required to process a given number of millions of instructions per second. In this description, the term is used as a general unit of measure to indicate relative levels of processor performance in the exemplary embodiments and will not be construed to suggest that any given embodiment falling within the scope of this disclosure must, or must not, include a processor having any specific Dhrystone rating. In this description, the terms "allocation" and "reallocation" are generally used interchangeably. Use of the term "allocation" is not limited to an initial allocation and, as such, inherently includes a reallocation. In this description, the term "portable computing device" ("PCD") is used to describe any device operating on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G") and fourth generation ("4G") wireless technology have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, a laptop computer with a wireless connection, among others. [0028] In this description, the term "performance" is generally used to reference the efficiency of one processing component compared to another and, as such, may be quantified in various units depending on the context of its use. For example, a high capacity core may exhibit better performance than a low capacity core when the context is the speed in MHz at which the cores can process a given workload. Similarly, a low capacity core may exhibit better performance than a high capacity core when the context is the quiescent supply currents ("IDDq"), i.e., the current draw in mA, associated with the cores when processing a given workload. [0029] Managing processing performance for QoS optimization in a PCD that has a heterogeneous processing component(s) can be accomplished by leveraging the diverse performance characteristics of the individual processing engines that are available for workload allocation.
With regard to the diverse performance characteristics of various processing engines that may be included in a heterogeneous processing component, one of ordinary skill in the art will recognize that performance differences may be attributable to any number of reasons including, but not limited to, differing levels of silicon, design variations, etc. Moreover, one of ordinary skill in the art will recognize that the performance characteristics associated with any given processing component may vary in relation with the operating temperature of that processing component, the power supplied to that processing component, etc. [0030] For instance, consider an exemplary heterogeneous multi-core processor which may include a number of different processing cores generally ranging in performance capacities from low to high (notably, one of ordinary skill in the art will recognize that an exemplary heterogeneous multi-processor system on a chip ("SoC") which may include a number of different processing components, each containing one or more cores, may also be considered). As would be understood by one of ordinary skill in the art, a low capacity to medium capacity processing core within the heterogeneous processor will exhibit a lower power leakage rate at a given workload capacity, and consequently a lower rate of thermal energy generation, than a processing core having a relatively high performance capacity. The higher capacity core may be capable of processing a given number of DMIPS in a shorter amount of time than a lower capacity core. For these reasons, one of ordinary skill in the art will recognize that a high capacity core may be more desirable for a workload allocation when the PCD is in a "high performance" mode whereas a low capacity core, with its lower current leakage rating, may be more desirable for a workload allocation when the PCD is in a "power saving" mode. [0031] Recognizing that certain cores in a heterogeneous processor are better suited to process a given workload than other cores when the PCD is in certain modes of operation, a mode-based workload reallocation algorithm can be leveraged to reallocate workloads to the processing core or cores which offer the best performance in the context of the given mode. For example, certain conditions in a PCD may dictate that the PCD is in a high performance mode where performance is measured in units of processing speed. Consequently, by recognizing that the PCD is in a high performance mode, a mode-based workload reallocation algorithm may dictate that workloads be processed by those certain cores in the heterogeneous processor that exhibit the highest processing speeds. Conversely, if conditions within the PCD dictate that the PCD is in a power saving mode where performance is measured in units associated with current leakage, a mode-based workload reallocation algorithm may dictate that workloads be processed by those certain cores in the heterogeneous processor that exhibit the lowest IDDq rating. [0032] As a non-limiting example, a particular block of code may be processed by either of a central processing unit ("CPU") or a graphical processing unit ("GPU") within an exemplary PCD. Advantageously, instead of predetermining that the particular block of code will be processed by one of the CPU or GPU, an exemplary embodiment may select which of the processing components will be assigned the task of processing the block of code based on the recognition of conditions within the PCD associated with a given mode.
That is, based on the operational mode of the PCD, the processor best equipped to efficiently process the block of code is assigned the workload. Notably, it will be understood that subsequent processor selections for reallocation of subsequent workloads may be made in real time, or near real time, as the operational mode of the PCD changes. In this way, a modal allocation manager ("MAM") module may leverage performance characteristics associated with individual cores in a heterogeneous processor to optimize QoS by selecting processing cores based on the performance priorities associated with operational modes of the PCD. [0033] FIG. 1 is a graph illustrating the processing capacities and leakage rates associated with exemplary cores 0, 1, 2 and 3 in a given quad core chipset of a PCD. Notably, although certain features and aspects of the various embodiments are described herein relative to a quad core chipset, one of ordinary skill in the art will recognize that embodiments may be applied in any multi-core chip. In the exemplary illustration, Core 0 represents the core having the highest processing capacity (Core 0 max freq.) and, as such, would be the most desirable core for workload allocation when the PCD is in a "high performance" mode. Conversely, Core 3 represents the core having the lowest current leakage rating (Core 3 leakage) and, as such, would be the most desirable core for workload allocation when the PCD is in a "power saving" mode. The cores may reside within any processing engine capable of processing a given block of code including, but not limited to, a CPU, GPU, DSP, programmable array, etc. [0034] As can be seen from the FIG. 1 illustration, each of the cores exhibits unique performance characteristics in terms of processing speeds and power consumption. Core 0 is capable of processing workloads at a relatively high processing speed (Core 0 max freq.), yet it also has a relatively high IDDq (Core 0 leakage). Core 1 is capable of processing workloads at a speed higher than cores 2 and 3 but is not nearly as fast as Core 0. Thus, Core 1 is the second most efficient of the cores in terms of processing speed. The IDDq rating of Core 1 (Core 1 leakage) also makes it the second most efficient of the cores in terms of leakage rate. Core 2 exhibits a relatively slow processing speed (Core 2 max freq.) and a relatively high IDDq rating (exceeded only by that of Core 0). And Core 3 exhibits the slowest processing speed of the cores, but advantageously also consumes the least amount of power of all the cores (Core 3 leakage). [0035] Advantageously, the core-to-core variations in maximum processing frequencies and quiescent leakage rates can be leveraged by a MAM module to select processing components best positioned to efficiently process a given block of code when the PCD is in a given operational mode. For example, when the PCD is in a power saving mode, a MAM module may allocate or reallocate workloads first to Core 3, then to Core 1, then to Core 2 and finally to Core 0 so that current leakage is minimized. Similarly, when the PCD is in a high performance mode, a MAM module may allocate or reallocate workloads first to Core 0, then to Core 1, then to Core 2 and finally to Core 3 as needed in order to maximize the speed at which the workloads are processed.
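A worked example with invented numbers (not measured data) reproduces the FIG. 1 orderings: sorting by quiescent current yields the power-saving order Core 3, Core 1, Core 2, Core 0, while sorting by maximum frequency yields the high-performance order Core 0 through Core 3.

cores = [
    {"name": "Core 0", "max_freq_mhz": 2400, "iddq_ma": 40},  # fastest, highest leakage
    {"name": "Core 1", "max_freq_mhz": 1800, "iddq_ma": 15},
    {"name": "Core 2", "max_freq_mhz": 1200, "iddq_ma": 25},
    {"name": "Core 3", "max_freq_mhz": 1000, "iddq_ma": 10},  # slowest, lowest leakage
]

ps_order = sorted(cores, key=lambda c: c["iddq_ma"])           # power saving mode
hpp_order = sorted(cores, key=lambda c: -c["max_freq_mhz"])    # high performance mode

print([c["name"] for c in ps_order])    # ['Core 3', 'Core 1', 'Core 2', 'Core 0']
print([c["name"] for c in hpp_order])   # ['Core 0', 'Core 1', 'Core 2', 'Core 3']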
[0036] One of ordinary skill in the art will recognize that the various scenarios for workload scheduling outlined above do not represent an exhaustive number of scenarios in which a comparative analysis of performance characteristics may be beneficial for workload allocation in a heterogeneous multi-core processor and/or a heterogeneous multi-processor SoC. As such, it will be understood that any workload allocation component or module that is operable to compare the performance characteristics of two or more processing cores in a heterogeneous multi-core processor or heterogeneous multi-processor SoC, as the case may be, to determine a workload allocation or reallocation is envisioned. A comparative analysis of processing component performance characteristics according to various embodiments can be used to allocate workloads among a plurality of processing components based on the identification of the most efficient processing component available based on the operational mode. [0037] FIG. 2 is a chart illustrating exemplary conditions or triggers that may dictate an operational mode of a PCD. Based on recognition of one or more of the triggers, a MAM module may determine the operational mode and subsequently allocate or reallocate workloads to processing cores based on the performance goals associated with the given operational mode. [0038] For example, connection of a battery charger to the PCD may trigger a MAM module to designate the operational mode as a high performance processing ("HPP") mode. Accordingly, workloads may be allocated to those one or more processing components having the highest processing frequencies, such as Core 0 of FIG. 1. As another example, recognition that battery capacity is low in the PCD may cause the MAM module to designate the operational mode as a power saving ("PS") mode. Consequently, because the performance goals associated with a power saving mode include conserving power, workloads may be reallocated away from high frequency cores to lower frequency cores that exhibit more efficient power consumption characteristics, such as Core 3 of FIG. 1. [0039] Notably, it is envisioned that some embodiments of a MAM module may recognize the presence of multiple mode-decision conditions. To the extent that the recognized conditions point to different operational modes, certain embodiments may prioritize or otherwise reconcile the conditions in order to determine the best operational mode. For example, suppose that a user of a PCD preset the mode to an HPP mode and also plugged in the battery charger, but at the same time a thermal policy manager ("TPM") module is actively engaged in application of thermal mitigation measures. In such a scenario, a MAM module may prioritize the ongoing thermal mitigation over the user setting and charger availability, thereby determining that the operational mode should be a PS mode. [0040] Other exemplary mode-decision conditions illustrated in FIG. 2 as possible triggers for an HPP mode include detection of a performance benchmark, a core utilization greater than some threshold (e.g., > 90%), a user interface response time greater than some threshold (e.g., > 100 msec), recognition of a docked state, and a use case with a high processing speed demand (e.g., a gaming use case). Notably, the HPP mode-decision conditions outlined in the FIG.
2 graph are not offered as an exhaustive list of the triggers that may be used to point a MAM module to an HPP mode and, as such, one of ordinary skill in the art will recognize that other triggers or conditions within a PCD may be used to indicate that workloads should be allocated or reallocated to processing components with high frequency processing capabilities. Moreover, one of ordinary skill in the art will recognize that HPP mode-decision conditions may be associated with scenarios that require more processing capacity in order to optimize QoS and/or scenarios where power availability is abundant. [0041] Other exemplary mode-decision conditions illustrated in FIG. 2 as possible triggers for a PS mode include recognition of a battery capacity below a certain threshold (e.g., < 10% remaining), a user setting to a PS mode, application of one or more thermal mitigation techniques, detection of a relatively high on-chip temperature reading, low processing capacity use case (e.g., wake-up from standby mode, OS background tasks, workload requires less than the maximum frequency associated with the slowest processing component, all cores are running at a relatively low frequency to process the active workload, etc.). Notably, the PS mode-decision conditions outlined in the FIG. 2 graph are not offered as an exhaustive list of the triggers that may be used to point a MAM module to a PS mode and, as such, one of ordinary skill in the art will recognize that other triggers or conditions within a PCD may be used to indicate that workloads should be allocated or reallocated to processing components with low power consumption characteristics. Moreover, one of ordinary skill in the art will recognize that PS mode-decision conditions may be associated with scenarios that do not require high processing capacity in order to optimize QoS and/or scenarios where power availability is limited. [0042] FIG. 3 is a functional block diagram illustrating an embodiment of an on-chip system 102 for mode-based workload reallocation in a heterogeneous, multi-core PCD 100. As explained above relative to the FIGs. 1 and 2 illustrations, the workload reallocation across the processing components 222, 224, 226, 228 may be based on determination of an operational mode. Depending on the performance goals of a given operational mode, a modal allocation manager ("MAM") module 207 may cause workloads to be reallocated among the various processing components 222, 224, 226, 228 such that the performance goals associated with a given operational mode are achieved. Notably, as one of ordinary skill in the art will recognize, the processing component(s) 110 is depicted as a group of heterogeneous processing engines 222, 224, 226, 228 for illustrative purposes only and may represent a single processing component having multiple, heterogeneous cores 222, 224, 226, 228 or multiple, heterogeneous processors 222, 224, 226, 228, each of which may or may not comprise multiple cores and/or sub-cores. As such, the reference to processing engines 222, 224, 226 and 228 herein as "cores" will be understood as exemplary in nature and will not limit the scope of the disclosure. [0043] The on-chip system 102 may monitor temperature sensors 157, for example, which are individually associated with cores 222, 224, 226, 228, using a monitor module 114 that is in communication with a thermal policy manager ("TPM") module 101 and a modal allocation manager ("MAM") module 207.
As described above, temperature measurements may represent conditions upon which a mode decision may be made by a MAM module 207. Further, although not explicitly depicted in the FIG. 3 illustration, it will be understood that the monitor module 114 may also monitor other components or conditions within a PCD that may be used as triggers for switching from one operational mode to another. [0044] The TPM module 101 may receive temperature measurements from the monitor module 114 and use the measurements to determine and apply thermal management policies. The thermal management policies applied by the TPM module 101 may manage thermal energy generation by reallocation of workloads from one processing component to another, wilting or variation of processor clock speeds, etc. Notably, through application of thermal management policies, the TPM module 101 may reduce or alleviate excessive generation of thermal energy at the cost of QoS. [0045] It is envisioned that in some embodiments workload allocations dictated by a TPM module 101 may essentially "trump" workload reallocations driven by the MAM module 207. Returning to the example offered above, suppose that a user of a PCD 100 preset the mode to an HPP mode and also plugged in the battery charger, but at the same time the TPM module 101 is actively engaged in application of thermal mitigation measures. In such a scenario, the MAM module 207 may prioritize the ongoing thermal mitigation over the user setting and charger availability, thereby determining that the operational mode should be a PS mode instead of the HPP mode associated with the triggers. Alternatively, under the same exemplary scenario other embodiments of a MAM module 207 may simply defer workload allocation to the TPM module 101 regardless of the mode-decision conditions. [0046] As the mode-decision conditions change or become apparent, the monitor module 114 recognizes the conditions and transmits data indicating the conditions to the MAM module 207. The presence of one or more of the various mode-decision conditions may trigger the MAM module 207 to reference a core characteristics ("CC") data store 24 to query performance characteristics for one or more of the cores 222, 224, 226, 228. Subsequently, the MAM module 207 may select the core 222, 224, 226, 228 best equipped at the time of query to efficiently process a given block of code according to the performance goals of an operational mode associated with the recognized mode-decision conditions. For example, if the performance goal of a PS mode is to minimize current leakage, then the MAM module 207 would allocate the block of code to the particular core 222, 224, 226, 228 queried to have the most efficient IDDq rating. Similarly, if the performance goal of an HPP mode is to process workloads at the fastest speed possible, then the MAM module 207 would allocate the block of code to the particular available core 222, 224, 226, 228 queried to have the highest processing frequency. Notably, for blocks of code that require more than one processing component, it is envisioned that embodiments will allocate the workload to the combination of available processors most capable of meeting the performance goals of the particular operational mode. [0047] Returning to the FIG. 3 illustration, the content of the CC data store 24 may be empirically collected on each of the cores 222, 224, 226, 228, according to bench tests and platform characterizations understood by those with ordinary skill in the art.
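To make the query-and-select step concrete, the following sketch (all names hypothetical; the characterization numbers are invented) models the CC data store 24 as a factory-populated table and lets an active TPM mitigation "trump" the requested mode, as in the scenario above.

CC_DATA_STORE = {
    # core id: (max frequency in MHz, IDDq in mA) -- illustrative factory data
    "core_222": (2400, 40),
    "core_224": (1800, 15),
    "core_226": (1200, 25),
    "core_228": (1000, 10),
}

def select_core(mode: str, available: list, tpm_mitigating: bool) -> str:
    # An ongoing thermal mitigation overrides the requested mode.
    effective_mode = "PS" if tpm_mitigating else mode
    if effective_mode == "HPP":
        return max(available, key=lambda c: CC_DATA_STORE[c][0])  # highest max frequency
    return min(available, key=lambda c: CC_DATA_STORE[c][1])      # lowest IDDq

# Example: the user preset HPP and plugged in the charger, but the TPM is mitigating.
print(select_core("HPP", list(CC_DATA_STORE), tpm_mitigating=True))  # core_228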
Essentially, performance characteristics including maximum operating frequencies and IDDq leakage rates may be measured for each of the processing components 222, 224, 226, 228 "at the factory" and stored in CC data store 24. From the data, the MAM module 207 may determine which of the cores 222, 224, 226, 228 are best equipped to process a given workload according to the performance goals of a given operational mode. As would be understood by one of ordinary skill in the art, the CC data store 24 may exist in hardware and/or software form depending on the particular embodiment. Moreover, a CC data store 24 in hardware may be fused inside silicon whereas a CC data store 24 in software form may be stored in firmware, as would be understood by one of ordinary skill in the art. [0048] FIG. 4 is a functional block diagram of an exemplary, non-limiting aspect of a PCD 100 in the form of a wireless telephone for implementing methods and systems for mode-based workload reallocation. As shown, the PCD 100 includes an on-chip system 102 that includes a heterogeneous multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together. The CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230 as understood by one of ordinary skill in the art. Further, instead of a CPU 110, a digital signal processor ("DSP") may also be employed as understood by one of ordinary skill in the art. Moreover, as is understood in the art of heterogeneous multi-core processors, each of the cores 222, 224, 230 may process workloads at different maximum operating frequencies and exhibit different IDDq leakage rates. [0049] In general, the TPM module(s) 101 may be responsible for monitoring and applying thermal policies that include one or more thermal mitigation techniques. Application of the thermal mitigation techniques may help a PCD 100 manage thermal conditions and/or thermal loads and avoid experiencing adverse thermal conditions, such as, for example, reaching critical temperatures, while maintaining a high level of functionality. The modal allocation manager ("MAM") module(s) 207 may receive the same or similar temperature data as the TPM module(s) 101, as well as other condition indicators, and leverage the data to define an operational mode. Based on the operational mode, the MAM module(s) 207 may allocate or reallocate workloads according to performance characteristics associated with individual cores 222, 224, 230. In this way, the MAM module(s) 207 may cause workloads to be processed by those one or more cores which are most capable of processing the workload in a manner that meets the performance goals associated with the given operational mode. [0050] FIG. 4 also shows that the PCD 100 may include a monitor module 114. The monitor module 114 communicates with multiple operational sensors (e.g., thermal sensors 157) and components distributed throughout the on-chip system 102 and with the CPU 110 of the PCD 100 as well as with the TPM module 101 and/or MAM module 207. Notably, the monitor module 114 may also communicate with and/or monitor off-chip components such as, but not limited to, power supply 188, touchscreen 132, RF switch 170, etc. The MAM module 207 may work with the monitor module 114 to identify mode-decision conditions that may trigger a switch of operational modes and affect workload allocation and/or reallocation. As illustrated in FIG. 4, a display controller 128 and a touch screen controller 130 are coupled to the CPU 110.
A touch screen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touch screen controller 130. PCD 100 may further include a video decoder 134, e.g., a phase-alternating line ("PAL") decoder, a sequential couleur avec memoire ("SECAM") decoder, a national television system(s) committee ("NTSC") decoder or any other type of video decoder 134. The video decoder 134 is coupled to the multi-core central processing unit ("CPU") 110. A video amplifier 136 is coupled to the video decoder 134 and the touch screen display 132. A video port 138 is coupled to the video amplifier 136. As depicted in FIG. 4, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Also, a USB port 142 is coupled to the USB controller 140. A memory 112 and a subscriber identity module (SIM) card 146 may also be coupled to the CPU 110. Further, as shown in FIG. 4, a digital camera 148 may be coupled to the CPU 110. In an exemplary aspect, the digital camera 148 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera. As further illustrated in FIG. 4, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Moreover, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152. FIG. 4 shows that a microphone amplifier 158 may be also coupled to the stereo audio CODEC 150. Additionally, a microphone 160 may be coupled to the microphone amplifier 158. In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. Also, an FM antenna 164 is coupled to the FM radio tuner 162. Further, stereo headphones 166 may be coupled to the stereo audio CODEC 150. FIG. 4 further indicates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 4, a keypad 174 may be coupled to the analog signal processor 126. Also, a mono headset with a microphone 176 may be coupled to the analog signal processor 126. Further, a vibrator device 178 may be coupled to the analog signal processor 126. FIG. 4 also shows that a power supply 188, for example a battery, is coupled to the on-chip system 102 via a power management integrated circuit ("PMIC") 180. In a particular aspect, the power supply 188 includes a rechargeable DC battery or a DC power supply that is derived from an alternating current ("AC") to DC transformer that is connected to an AC power source. [0055] The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A and 157B as well as one or more external, off-chip thermal sensors 157C. The on-chip thermal sensors 157A, 157B may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits. The off-chip thermal sensors 157C may comprise one or more thermistors. The thermal sensors 157 may produce a voltage drop that is converted to digital signals with an analog-to-digital converter ("ADC") controller 103 (See FIG. 5A). However, other types of thermal sensors 157 may be employed without departing from the scope of the invention.
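As an illustration of the sensing path just described, in which thermal sensors 157 produce a voltage drop that the ADC controller 103 digitizes for the monitor module 114, the following minimal Python sketch converts raw ADC codes into temperatures and flags threshold crossings. The ADC resolution, reference voltage, PTAT slope, and trip threshold are hypothetical placeholders rather than values from the specification.

```python
ADC_BITS = 12         # hypothetical converter resolution
V_REF = 1.8           # hypothetical ADC reference voltage (volts)
DEG_PER_VOLT = 100.0  # hypothetical PTAT sensor slope
TRIP_DEG_C = 80.0     # hypothetical thermal-mitigation trigger

def adc_code_to_celsius(code: int) -> float:
    """Convert a raw ADC code (digitized voltage drop from a thermal
    sensor 157) to degrees Celsius via an assumed linear transfer."""
    volts = code * V_REF / (2 ** ADC_BITS - 1)
    return volts * DEG_PER_VOLT

def poll_thermal_sensors(sensor_codes):
    """Monitor-module-style pass over on-chip (157A) and off-chip (157C)
    sensor channels; the over_trip flag models a mode-decision condition
    that the TPM module 101 or MAM module 207 could act on."""
    readings = {}
    for sensor_id, code in sensor_codes.items():
        temp_c = adc_code_to_celsius(code)
        readings[sensor_id] = {"deg_c": round(temp_c, 1),
                               "over_trip": temp_c >= TRIP_DEG_C}
    return readings

# Example: two on-chip PTAT channels and one off-chip thermistor channel.
print(poll_thermal_sensors({"157A1": 2010, "157A2": 2450, "157C1": 1720}))
```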
[0056] The thermal sensors 157, in addition to being controlled and monitored by an ADC controller 103, may also be controlled and monitored by one or more TPM module(s) 101, monitor module(s) 114 and/or MAM module(s) 207. The TPM module(s) 101, monitor module(s) 114 and/or MAM module(s) 207 may comprise software which is executed by the CPU 110. However, the TPM module(s) 101, monitor module(s) 114 and/or MAM module(s) 207 may also be formed from hardware and/or firmware without departing from the scope of the invention. The TPM module(s) 101 may be responsible for monitoring and applying thermal policies that include one or more thermal mitigation techniques that may help a PCD 100 avoid critical temperatures while maintaining a high level of functionality. The MAM module(s) 207 may be responsible for querying processor performance characteristics and, based on recognition of an operational mode, assigning blocks of code to processors most capable of efficiently processing the code. [0057] Returning to FIG. 4, the touch screen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, thermal sensors 157C, PMIC 180 and the power supply 188 are external to the on-chip system 102. However, it should be understood that the monitor module 114 may also receive one or more indications or signals from one or more of these external devices by way of the analog signal processor 126 and the CPU 110 to aid in the real time management of the resources operable on the PCD 100. In a particular aspect, one or more of the method steps described herein may be implemented by executable instructions and parameters stored in the memory 112 that form the one or more TPM module(s) 101 and/or MAM module(s) 207. These instructions that form the TPM module(s) 101 and/or MAM module(s) 207 may be executed by the CPU 110, the analog signal processor 126, the GPU 182, or another processor, in addition to the ADC controller 103 to perform the methods described herein. Further, the processors 110, 126, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein. FIG. 5A is a functional block diagram illustrating an exemplary spatial arrangement of hardware for the chip 102 illustrated in FIG. 4. According to this exemplary embodiment, the applications CPU 110 is positioned on the far left side region of the chip 102 while the modem CPU 168, 126 is positioned on a far right side region of the chip 102. The applications CPU 110 may comprise a heterogeneous multi-core processor that includes a zeroth core 222, a first core 224, and an Nth core 230. The applications CPU 110 may be executing a TPM module 101A and/or MAM module(s) 207A (when embodied in software) or it may include a TPM module 101A and/or MAM module(s) 207A (when embodied in hardware). The application CPU 110 is further illustrated to include operating system ("O/S") module 208 and a monitor module 114. The applications CPU 110 may be coupled to one or more phase locked loops ("PLLs") 209A, 209B, which are positioned adjacent to the applications CPU 110 and in the left side region of the chip 102.
Adjacent to the PLLs 209A, 209B and below the applications CPU 110 may comprise an analog-to-digital ("ADC") controller 103 that may include its own thermal policy manager 101B and/or MAM module(s) 207B that works in conjunction with the main modules 101A, 207A of the applications CPU 110. The thermal policy manager 101B of the ADC controller 103 may be responsible for monitoring and tracking multiple thermal sensors 157 that may be provided "on-chip" 102 and "off-chip" 102. The on-chip or internal thermal sensors 157A may be positioned at various locations. As a non-limiting example, a first internal thermal sensor 157A1 may be positioned in a top center region of the chip 102 between the applications CPU 110 and the modem CPU 168,126 and adjacent to internal memory 112. A second internal thermal sensor 157A2 may be positioned below the modem CPU 168, 126 on a right side region of the chip 102. This second internal thermal sensor 157A2 may also be positioned between an advanced reduced instruction set computer ("RISC") instruction set machine ("ARM") 177 and a first graphics processor 135A. A digital-to-analog controller ("DAC") 173 may be positioned between the second internal thermal sensor 157A2 and the modem CPU 168, 126. A third internal thermal sensor 157A3 may be positioned between a second graphics processor 135B and a third graphics processor 135C in a far right region of the chip 102. A fourth internal thermal sensor 157A4 may be positioned in a far right region of the chip 102 and beneath a fourth graphics processor 135D. And a fifth internal thermal sensor 157A5 may be positioned in a far left region of the chip 102 and adjacent to the PLLs 209 and ADC controller 103. One or more external thermal sensors 157C may also be coupled to the ADC controller 103. The first external thermal sensor 157C1 may be positioned off-chip and adjacent to a top right quadrant of the chip 102 that may include the modem CPU 168, 126, the ARM 177, and DAC 173. A second external thermal sensor 157C2 may be positioned off-chip and adjacent to a lower right quadrant of the chip 102 that may include the third and fourth graphics processors 135C, 135D. One of ordinary skill in the art will recognize that various other spatial arrangements of the hardware illustrated in FIG. 5A may be provided without departing from the scope of the invention. FIG. 5A illustrates one exemplary spatial arrangement and how the main TPM and MAM modules 101A, 207A and ADC controller 103 with its TPM and MAM modules 101B, 207B may recognize thermal conditions that are a function of the exemplary spatial arrangement illustrated in FIG. 5A, determine an operational mode and allocate workloads to manage thermal conditions and/or meet performance goals associated with the operational mode. FIG. 5B is a schematic diagram illustrating an exemplary software architecture 200 of the PCD 100 of FIG. 4 and FIG. 5A for supporting mode-based workload reallocation. Any number of algorithms may form or be part of a mode-based workload reallocation methodology that may be applied by the MAM module 207 when certain mode-decision conditions in the PCD 100 are recognized. As illustrated in FIG. 5B, the CPU or digital signal processor 110 is coupled to the memory 112 via a bus 211. The CPU 110, as noted above, is a multiple-core, heterogeneous processor having N core processors. That is, the CPU 110 includes a first core 222, a second core 224, and an Nth core 230.
As is known to one of ordinary skill in the art, each of the first core 222, the second core 224 and the Nth core 230 are available for supporting a dedicated application or program and, as part of a heterogeneous core, may exhibit different maximum processing frequencies and different IDDq current leakage levels. Alternatively, one or more applications or programs can be distributed for processing across two or more of the available heterogeneous cores. The CPU 110 may receive commands from the TPM module(s) 101 and/or MAM module(s) 207 that may comprise software and/or hardware. If embodied as software, the TPM module 101 and/or MAM module 207 comprises instructions that are executed by the CPU 110 that issues commands to other application programs being executed by the CPU 110 and other processors. The first core 222, the second core 224 through to the Nth core 230 of the CPU 110 may be integrated on a single integrated circuit die, or they may be integrated or coupled on separate dies in a multiple-circuit package. Designers may couple the first core 222, the second core 224 through to the Nth core 230 via one or more shared caches and they may implement message or instruction passing via network topologies such as bus, ring, mesh and crossbar topologies. Bus 211 may include multiple communication paths via one or more wired or wireless connections, as is known in the art. The bus 211 may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the bus 211 may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. When the logic used by the PCD 100 is implemented in software, as is shown in FIG. 5B, it should be noted that one or more of startup logic 250, management logic 260, modal workload allocation interface logic 270, applications in application store 280 and portions of the file system 290 may be stored on any computer-readable medium for use by or in connection with any computer-related system or method. In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that can contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" can be any means that can store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CDROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory. In an alternative embodiment, where one or more of the startup logic 250, management logic 260 and perhaps the modal workload allocation interface logic 270 are implemented in hardware, the various logic may be implemented with any or a combination of the following technologies, which are each well known in the art: a discrete logic circuit(s) having logic gates for implementing logic functions upon data signals, an application specific integrated circuit (ASIC) having appropriate combinational logic gates, a programmable gate array(s) (PGA), a field programmable gate array (FPGA), etc. The memory 112 is a non-volatile data storage device such as a flash memory or a solid-state memory device. Although depicted as a single device, the memory 112 may be a distributed memory device with separate data stores coupled to the digital signal processor 110 (or additional processor cores). The startup logic 250 includes one or more executable instructions for selectively identifying, loading, and executing a select program for determining operational modes and selecting one or more of the available cores such as the first core 222, the second core 224 through to the Nth core 230 for workload allocation based on the operational mode. The management logic 260 includes one or more executable instructions for terminating a mode-based workload allocation program, as well as selectively identifying, loading, and executing a more suitable replacement program. The management logic 260 is arranged to perform these functions at run time or while the PCD 100 is powered and in use by an operator of the device. A replacement program can be found in the program store 296 of the embedded file system 290. [0077] The replacement program, when executed by one or more of the core processors in the digital signal processor, may operate in accordance with one or more signals provided by the TPM module 101, MAM module 207 and monitor module 114. In this regard, the modules 114 may provide one or more indicators of events, processes, applications, resource status conditions, elapsed time, temperature, etc., in response to control signals originating from the TPM 101 or MAM module 207. [0078] The interface logic 270 includes one or more executable instructions for presenting, managing and interacting with external inputs to observe, configure, or otherwise update information stored in the embedded file system 290. In one embodiment, the interface logic 270 may operate in conjunction with manufacturer inputs received via the USB port 142. These inputs may include one or more programs to be deleted from or added to the program store 296.
Alternatively, the inputs may include edits or changes to one or more of the programs in the program store 296. Moreover, the inputs may identify one or more changes to, or entire replacements of one or both of the startup logic 250 and the management logic 260. By way of example, the inputs may include a change to the management logic 260 that instructs the MAM module 207 to recognize an operational mode as an HPP mode when the video codec 134 is active. [0079] The interface logic 270 enables a manufacturer to controllably configure and adjust an end user's experience under defined operating conditions on the PCD 100. When the memory 112 is a flash memory, one or more of the startup logic 250, the management logic 260, the interface logic 270, the application programs in the application store 280 or information in the embedded file system 290 can be edited, replaced, or otherwise modified. In some embodiments, the interface logic 270 may permit an end user or operator of the PCD 100 to search, locate, modify or replace the startup logic 250, the management logic 260, applications in the application store 280 and information in the embedded file system 290. The operator may use the resulting interface to make changes that will be implemented upon the next startup of the PCD 100. Alternatively, the operator may use the resulting interface to make changes that are implemented during run time. The embedded file system 290 includes a hierarchically arranged core characteristic data store 24. In this regard, the file system 290 may include a reserved section of its total file system capacity for the storage of information associated with the performance characteristics of the various cores 222, 224, 226, 228. FIG. 6 is a logical flowchart illustrating an embodiment of a method 600 for mode-based workload reallocation across heterogeneous processing components in a PCD 100. In the FIG. 6 embodiment, the performance characteristics of each individual processing component, such as cores 222, 224, 226, 228, are characterized at block 605 and stored in CC data store 24. Notably, as described above, the various processing components in a multi-core, heterogeneous SoC are unique in their individual performance characteristics. That is, certain processing components may exhibit higher processing frequencies than other processing components within the same SoC. Moreover, certain other processing components may exhibit lower power leakage rates than other processing components. Advantageously, a MAM module 207 running and implementing a mode-based reallocation algorithm may leverage the inherent differences in the performance characteristics of the heterogeneous processing components to allocate or reallocate workloads to the particular processing component(s) best equipped to process a workload consistent with operational goals (such as power saving or high speed processing). Once the performance characteristics of the various processing cores 222, 224, 226, 228 are determined, the cores may be ranked at block 610 and identified for their individual performance strengths. For instance, referring back to FIGs. 1 and 3, core 226 may be identified as the core with the fastest processing frequency, such as core 0 of FIG. 1. Similarly, core 222 may be identified as the core with the lowest leakage rate, such as core 3 of FIG. 1. In this way, each of the cores may be ranked relative to its peers in terms of performance characteristics.
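To make the block 605 and block 610 characterization-and-ranking steps concrete, the minimal sketch below ranks a set of cores from a CC-data-store-like table and picks the best available core for the current operational mode: highest maximum frequency for an HPP mode, lowest IDDq leakage for a PS mode. The per-core numbers are invented placeholders standing in for the "at the factory" measurements described earlier.

```python
CC_DATA_STORE = {
    # core id: (maximum processing frequency in MHz, IDDq leakage in mA);
    # all values are hypothetical stand-ins for factory characterization data
    "core0": (2400, 9.0),
    "core1": (2400, 5.0),
    "core2": (1800, 7.0),
    "core3": (1500, 2.5),
}

def rank_by_frequency(cc):
    """Rank order used for an HPP mode: highest maximum frequency first."""
    return sorted(cc, key=lambda core: cc[core][0], reverse=True)

def rank_by_leakage(cc):
    """Rank order used for a PS mode: lowest IDDq leakage first."""
    return sorted(cc, key=lambda core: cc[core][1])

def allocate(mode, available, cc=CC_DATA_STORE):
    """Pick the available core best matching the mode's performance goal."""
    order = rank_by_frequency(cc) if mode == "HPP" else rank_by_leakage(cc)
    for core in order:
        if core in available:
            return core
    raise RuntimeError("no core available for allocation")

print(allocate("HPP", {"core1", "core2", "core3"}))  # core1 (fastest available)
print(allocate("PS", {"core0", "core2", "core3"}))   # core3 (lowest leakage)
```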
At block 615, the MAM module 207 in conjunction with the monitor module 114 tracks the active workload allocation across the heterogeneous cores 222, 224, 226, 228. At block 620, the monitor module 114 polls the various mode-decision conditions such as, but not limited to, the conditions outlined in FIG. 2. Based on the polling of the mode-decision conditions at block 620, the recognized conditions are reconciled by the monitor module 114 and/or the MAM module 207 based on priority. Subsequently, at decision block 630, the reconciled mode-decision conditions are leveraged to determine an operational mode for the PCD 100. The operational mode, in turn, may trigger the MAM module 207 to reallocate workloads across the heterogeneous cores 222, 224, 226, 228 at sub-routine 635. As described above, the reallocation of workloads by the MAM module 207 is based on the rankings of performance characteristics determined at blocks 605 and 610. After workload reallocation, the process returns to block 615 and the active workload is monitored until a subsequent reallocation is necessitated by a change in the active workload or a change in the operational mode. Turning to FIG. 7, the mode-based workload reallocation sub-routine 635 begins after decision block 630. If decision block 630 determines that PCD 100 is in a high performance processing mode, then the "HPP" branch is followed. If, however, the decision block 630 determines that PCD 100 is in a power saving mode, then the "PS" branch is followed. Following the HPP branch after decision block 630, the sub-routine 635 moves to block 640. At block 640, the cores determined at blocks 605 and 610 to exhibit the highest processing frequency capabilities are identified. For example, briefly referring back to the FIG. 1 illustration, the rank order of the cores by highest processing frequency performance would be cores 0 and 1 followed by cores 2 and then 3. Next, at block 645 the active workloads on the processing cores 222, 224, 226, 228 are reallocated per directions from the MAM module 207 such that the cores with the highest maximum processing frequencies are assigned the workload tasks. The process returns to block 615 of FIG. 6. Following the PS branch after decision block 630, the sub-routine 635 moves to block 650. At block 650, the cores determined at blocks 605 and 610 to exhibit the lowest power leakage characteristics are identified. For example, briefly referring back to the FIG. 1 illustration, the rank order of the cores by lowest power leakage performance would be cores 3 and 1 followed by cores 2 and then 0. Next, at block 655 the active workloads on the processing cores 222, 224, 226, 228 are reallocated per directions from the MAM module 207 such that the cores with the lowest power leakage are assigned the workload tasks. The process returns to block 615 of FIG. 6. Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or parallel (substantially simultaneously with) other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "then", "next", etc.
are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method. Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty based on the flow charts and associated description in this specification, for example. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer implemented processes is explained in more detail in the above description and in conjunction with the drawings, which may illustrate various process flows. In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims. |
An etchant including C2HxFy, where x is an integer from two to five, inclusive, where y is an integer from one to four, inclusive, and where x plus y equals six. The etchant etches doped silicon dioxide with selectivity over both undoped silicon dioxide and silicon nitride. Thus, undoped silicon dioxide and silicon nitride may be employed as etch stops in dry etch processes which utilize the C2HxFy-containing etchant. C2HxFy may be employed as either a primary etchant or as an additive to another etchant or etchant mixture. |
What is claimed is:
1. A process for selectively etching a structure comprising doped silicon dioxide, the process comprising: exposing the structure to an etchant comprising C2HxFy, where x is an integer from 3 to 5, inclusive, y is an integer from 1 to 3, inclusive, and x+y=6; and removing the structure down to an etch stop adjacent the structure and comprising undoped silicon dioxide, said removing being effected without substantially removing said etch stop.
2. The process of claim 1, further comprising disposing a protective structure over the structure comprising doped silicon dioxide so as to protect at least selected regions of the structure comprising doped silicon dioxide.
3. The process of claim 2, further comprising patterning said protective structure so as to expose said selected regions of the structure comprising doped silicon dioxide.
4. The process of claim 2, wherein said protective structure comprises a photoimageable material.
5. The process of claim 4, further comprising patterning said protective structure by photolithography.
6. The process of claim 2, wherein said protective structure comprises a non-photoimageable material.
7. The process of claim 1, wherein said removing is effected by reactive ion etching, plasma etching, high density plasma etching, point plasma etching, magnetic ion etching, magnetically enhanced reactive ion etching, plasma enhanced reactive ion etching, or electron cyclotron resonance.
8. A method for patterning doped silicon dioxide, comprising dry etching at least one exposed region of the doped silicon dioxide with an etchant comprising C2HxFy, where x is an integer from 3 to 5, inclusive, y is an integer from 1 to 3, inclusive, where x+y=6, said etchant being formulated to etch doped silicon dioxide at a faster rate than undoped silicon dioxide and than silicon nitride.
9. The method of claim 8, further comprising forming a mask over the doped silicon dioxide.
10. The method of claim 9, wherein said dry etching is effected at at least one selected area of the doped silicon dioxide exposed through said mask.
11. The method of claim 8, further comprising providing an etch stop adjacent the doped silicon dioxide.
12. The method of claim 11, wherein said dry etching is effected until at least a portion of said etch stop is exposed.
13. The method of claim 11, wherein said providing said etch stop comprises providing an etch stop comprising undoped silicon dioxide or silicon nitride.
14. A method for employing an etch stop comprising undoped silicon dioxide for a doped silicon dioxide dry etch process, comprising etching a structure including doped silicon dioxide with an etchant comprising at least C2HxFy, where x is an integer from 3 to 5, inclusive, y is an integer from 1 to 3, inclusive, and x+y=6.
15. A process for defining a semiconductor device structure, comprising: forming a layer comprising doped silicon dioxide over an etch stop comprising undoped silicon dioxide or silicon nitride; forming a mask over said layer; exposing selected regions of said layer through said mask; and dry etching at least said selected regions with an etchant comprising C2HxFy, where x is an integer from 3 to 5, inclusive, y is an integer from 1 to 3, inclusive, and x+y=6.
16. The process of claim 15, wherein said dry etching comprises dry etching said layer comprising doped silicon dioxide at a faster rate than said etch stop is etched.
17. The process of claim 15, wherein said dry etching comprises dry etching said layer with selectivity over said etch stop.
18.
A process for forming a structure of a semiconductor device, comprising: forming a structure comprising doped silicon dioxide adjacent another structure comprising at least one of undoped silicon dioxide and silicon nitride; and patterning said structure with an etchant comprising at least C2HxFy, where x is an integer from 3 to 5, inclusive, y is an integer from 1 to 3, inclusive, and x+y=6, without substantially removing material of said another structure.
19. The process of claim 18, further comprising forming a mask over at least a portion of said structure.
20. The process of claim 19, further comprising exposing at least a portion of said structure through said mask.
21. The process of claim 20, wherein said patterning is effected through said mask.
22. The process of claim 18, wherein said patterning is effected with said etchant being selective for doped silicon dioxide over undoped silicon dioxide and over silicon nitride.
23. The process of claim 18, wherein said patterning effected with said etchant comprises patterning doped silicon dioxide at a faster rate than undoped silicon dioxide and than silicon nitride. |
CROSS-REFERENCE TO RELATED APPLICATION
This application is a continuation of application Ser. No. 09/102,152, filed Jun. 22, 1998, now U.S. Pat. No. 6,117,791, issued Sep. 12, 2000.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to processes for selectively etching doped silicon dioxide that overlies silicon nitride or undoped silicon dioxide. Particularly, the process of the present invention includes an etchant mixture which includes the use of an ethane gas having the general formula C2HxFy, where x is an integer from two to five, inclusive, y is an integer from one to four, inclusive, and x plus y equals 6. The present invention also relates to etchant mixtures which include a component having the general formula C2HxFy, where x is an integer from two to five, inclusive, y is an integer from one to four, inclusive, and x plus y equals 6.
2. Background of Related Art
The fabrication of multi-layered structures upon semiconductor devices typically involves the patterning of doped silicon dioxide layers, including, without limitation, layers of phosphosilicate glass (PSG), borosilicate glass (BSG) and borophosphosilicate glass (BPSG). Such materials are typically employed as passivation layers on semiconductor devices. Etching techniques are typically employed to pattern many types of semiconductor device structures, including the formation of contacts through passivation layers. Etch stop layers are typically formed on underlying structures in order to terminate the etch process once the desired patterning of the passivation layer, or etch substrate, has occurred. Silicon nitride (Si3N4) is typically utilized as an etch stop during the patterning of silicon dioxide.
Typically, etching techniques include the depositing, masking and patterning of protective layers, such as photoresists, which act as templates, or protective masks, in order to define structures from a passivation layer by etching techniques. Wet etch or dry etch techniques may be employed to define semiconductor device structures from doped silicon dioxide passivation layers.
An exemplary wet etch process is disclosed in U.S. Pat. No. 5,300,463 (the "'463 patent"), issued to David A. Cathey et al. The wet etch process of the '463 patent, which employs hydrofluoric acid (HF) as an etchant, is selective for doped silicon dioxide over undoped silicon dioxide. Despite its specificity, that technique is somewhat undesirable from the standpoint that it suffers from many of the shortcomings that are typically associated with wet etch processes. Specifically, the technique of the '463 patent is an isotropic etch. Consequently, the structures defined thereby have different dimensions than those of the target area of the etch substrate that is exposed through the protective mask. Moreover, as those of skill in the art are aware, since wet etch techniques are typically isotropic, if the thickness of the film being etched is approximately equivalent to the minimum desired pattern dimension, the undercutting that is typically caused by isotropic etching becomes intolerable. Similarly, with the ever-decreasing size of structures that are carried on the active surfaces of semiconductor devices, etching must be very accurate and maintained within very precise tolerances in order to preserve the alignment of such minute structures and to optimize the electrical characteristics of such structures.
Such precision cannot be obtained while defining structures on semiconductor devices with many conventional wet etch processes. Thus, the lack of precision and isotropic nature of typical wet etching processes are inconsistent with the overall goal of etch processes in forming structures on state-of-the-art semiconductor devices: reproducing the features defined by the protective mask with a high degree of fidelity.
In contrast, many dry etch techniques, including, without limitation, glow-discharge sputtering, ion milling, reactive ion etching (RIE), reactive ion beam etching (RIBE) and high-density plasma etching, are capable of etching in a substantially anisotropic fashion, meaning that the target area of an etch substrate is etched primarily in a substantially vertical direction relative to the exposed, or active, surface of the etch substrate. Thus, such dry etch techniques are capable of defining structures with substantially upright sidewalls from the etch substrate. Consequently, such dry etch techniques are capable of accurately reproducing the features of a protective mask. Thus, due to ever-decreasing dimensions of structures on semiconductor devices, dry etching is often desirable for defining structures upon semiconductor device active surfaces.
Many techniques that employ plasmas to dry etch silicon dioxide layers, however, lack the specificity of comparable wet etch techniques since fluorocarbons, such as CF4 and CHF3, are typically employed in plasma dry etches of silicon dioxide layers. The radio-frequency (RF) plasmas that are typically utilized with many silicon dioxide dry etch processes generate activated species, such as fluoride ions and fluorine free radicals, from such fluorocarbon etchants. While these activated species attack the silicon dioxide layer in order to etch the same, the activated fluorine radicals and fluoride ions of many dry etch techniques may also attack other materials, such as silicon and silicon nitride. Consequently, in addition to etching the desired layer, many dry etch techniques that employ plasmas also undesirably etch the etch stop layers and other structures of the semiconductor device that are exposed or which become exposed during the etching process.
Etch stop materials employed in dry etch techniques are typically etched at a lower rate than the associated, usually underlying, etch substrate. Since the dry etchant etches the etch stop layer at a slower rate than the outer layer, the etch stop layer acts to protect structures therebeneath from the dry etch process, even as the etch stop itself is being consumed.
Since the gate structures of many semiconductor devices include a silicon nitride (Si3N4) cap, selectivity between silicon dioxide (SiO2) and silicon nitride is desirable in order to etch contacts through passivation layers. Many of the so-called silicon dioxide-selective plasma dry etch techniques, however, have a SiO2 to Si3N4 selectivity ratio, or etch rate of SiO2 to etch rate of Si3N4, of less than about 3:1.
U.S. Pat. No. 5,286,344 (the "'344 patent"), issued to Guy Blalock et al. on Feb. 15, 1994, discloses a dry etch process which has much better selectivity for silicon dioxide over silicon nitride than many other conventional silicon dioxide dry etch techniques. Specifically, CH2F2, which is employed as an additive to a primary etchant such as CF4 or CHF3, imparts the dry etchant mixture with improved selectivity for silicon dioxide over silicon nitride.
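The practical weight of the selectivity ratio defined above (etch rate of SiO2 divided by etch rate of Si3N4) can be shown with a little arithmetic. In the sketch below, the etch rates, oxide thickness, and 20% overetch margin are hypothetical assumptions; the sketch simply contrasts a roughly 3:1 process with one in the 30:1 class discussed in this section.

```python
def selectivity_ratio(oxide_rate, nitride_rate):
    """SiO2 etch rate divided by Si3N4 etch rate, per the definition above."""
    return oxide_rate / nitride_rate

def etch_stop_loss(oxide_nm, oxide_rate, nitride_rate, overetch_frac=0.20):
    """Nitride consumed during the overetch step, assuming (simplistically)
    that the etch stop is only attacked once the oxide has cleared."""
    overetch_time = (oxide_nm / oxide_rate) * overetch_frac
    return nitride_rate * overetch_time

OXIDE_RATE = 500.0  # nm/min, hypothetical doped-oxide etch rate
for nitride_rate in (167.0, 16.7):  # roughly 3:1 versus roughly 30:1
    print(round(selectivity_ratio(OXIDE_RATE, nitride_rate)),
          round(etch_stop_loss(800.0, OXIDE_RATE, nitride_rate), 1))
# the ~3:1 case costs about 53 nm of the nitride stop; the ~30:1 case about 5 nm
```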
The high energy ions that are required to etch both silicon dioxide and silicon nitride act by dissociating a chemical bond at the respective oxide or nitride surface. The dissociation energy that is required to etch silicon nitride, however, is less than that required to etch silicon dioxide. The use of CH2F2 in the dry etchant causes polymer deposition on the silicon nitride surface that offsets the lower dissociation energy of silicon nitride relative to silicon dioxide, as compared with conventional dry etchants which lack additives such as CH2F2. Thus, the etchant of the '344 patent etches silicon dioxide over an etch stop of silicon nitride with a selectivity of greater than 30:1. As with other conventional silicon dioxide dry etch techniques, however, the only material that is disclosed as a useful etch stop in the '344 patent is silicon nitride. Thus, the utility of the dry etch process that is disclosed in the '344 patent is limited to defining semiconductor device structures which include a silicon nitride dielectric layer, such as, for example, contacts over silicon nitride-capped gates. Moreover, the relative flow rates of each of the dry etchant components disclosed in the '344 patent are limited to narrow ranges in order to achieve the desired level of selectivity. Similarly, many other conventional dry etch processes require the use of very specific dry etchant components. Thus, the process windows of many conventional dry etch systems are narrow.
Although silicon nitride is widely employed as an etch stop material, the use of silicon nitride etch stops is, however, somewhat undesirable from the standpoint that the deposition of silicon nitride upon a semiconductor device active surface by low pressure chemical vapor deposition (LPCVD) processes may also form a thick nitride layer on the back surface of the semiconductor device. Such thick nitride layers must be subsequently removed, which increases fabrication time and costs, as well as the potential for damaging the semiconductor device during the fabrication thereof.
Moreover, the fluorine radicals and fluoride ions that are generated by conventional dry etches which employ plasmas non-selectively attack, or etch, both doped and undoped silicon dioxide. Thus, such silicon dioxide dry etch techniques are incapable of distinguishing between doped and undoped silicon dioxide. Consequently, when conventional dry etch techniques are employed, the use of alternatives to silicon nitride in state-of-the-art semiconductor devices is restricted.
Accordingly, the inventors have recognized a need for a selective doped silicon dioxide dry etch process for which both silicon nitride and undoped silicon dioxide act as etch stops, and etchants which are specific for doped silicon dioxide over both undoped silicon dioxide and silicon nitride. Etchant mixtures are also needed wherein relative concentrations of each of the components of such etchant mixtures may be varied in order to facilitate the use of such mixtures in a broad range of doped silicon dioxide etching applications.
BRIEF SUMMARY OF THE INVENTION
The present invention includes a dry etch process and etchants that address the foregoing needs and overcome the disadvantages manifested by conventional dry etch processes. The etchants of the present invention include C2HxFy, where x is an integer from two to five, inclusive, y is an integer from one to four, inclusive, and x plus y equals 6.
Specifically, the C2HxFy component of the present invention may be selected from the group consisting of C2H2F4, C2H3F3, C2H4F2, and C2H5F. The C2HxFy component may be used as either a primary etchant or as a component of an etchant mixture. When employed as a primary etchant, C2HxFy etches doped silicon dioxide at a slow rate relative to the etch rates of many conventional silicon dioxide dry etch techniques, but selectively etches doped silicon dioxide over undoped silicon dioxide.
When used as an additive to other silicon dioxide etchants, C2HxFy imparts the etchant mixture with selectivity for doped silicon dioxide over undoped silicon dioxide, while permitting the doped silicon dioxide etch to proceed at a comparable rate relative to many conventional doped silicon dioxide dry etch techniques. The amount of C2HxFy used in the etchant mixture may be varied, depending upon the particular species of C2HxFy used, the desired level of doped to undoped silicon dioxide selectivity (i.e., selectivity ratio), the desired level of silicon dioxide to silicon nitride selectivity, the desired etch rate, and other factors.
The dry etch process of the present invention employs an etchant of the present invention (i.e., an etchant which includes C2HxFy), and is selective for doped silicon dioxide over both undoped silicon dioxide and silicon nitride. Thus, the dry etch process of the present invention may be effectively employed for anisotropically etching a doped silicon dioxide layer down to an underlying etch stop of either undoped silicon dioxide or silicon nitride.
Other advantages of the present invention will become apparent to those of ordinary skill in the relevant art through a consideration of the appended drawings and the ensuing description.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1 through 4 are cross-sectional schematic representations which illustrate the process of the present invention and exemplary structures that may be formed thereby.
DETAILED DESCRIPTION OF THE INVENTION
The present invention includes an etchant that is selective for doped silicon dioxide over both undoped silicon dioxide and silicon nitride. As those of skill in the art are aware, "doped" silicon dioxide typically includes a dopant such as boron or phosphorus, whereas undoped silicon dioxide is substantially free of dopants and other impurities. Exemplary types of doped silicon dioxide include, without limitation, borosilicate glass (BSG), phosphosilicate glass (PSG) and borophosphosilicate glass (BPSG). The present invention also includes a dry etch process which utilizes the inventive etchant.
The doped silicon dioxide etchant of the present invention, which is also merely referred to as an etchant for simplicity, includes an ethane component of the general formula C2HxFy, which is also referred to as the C2HxFy component or C2HxFy for simplicity, where x is an integer from two to five, inclusive, y is an integer from one to four, inclusive, and x plus y equals 6. Specifically, the C2HxFy component of the present invention is desirably selected from the group consisting of C2H2F4, C2H3F3, C2H4F2, and C2H5F.
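The constraint on the ethane component can be checked mechanically: enumerating x from two to five and y from one to four subject to x plus y equaling six recovers exactly the four species named above, as the short sketch below shows.

```python
# Enumerate the general formula C2HxFy under the stated constraints.
species = [
    f"C2H{x}F{y}"
    for x in range(2, 6)  # x: an integer from two to five, inclusive
    for y in range(1, 5)  # y: an integer from one to four, inclusive
    if x + y == 6
]
print(species)  # ['C2H2F4', 'C2H3F3', 'C2H4F2', 'C2H5F']
```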
The doped silicon dioxide etchant may also include combinations of various types of C2HxFy.
As the C2HxFy component of a doped silicon dioxide etchant is RF activated, the hydrogen ions and activated hydrogen species react with the fluorine-containing ions and activated fluorine-containing species (e.g., F* and CF*), removing the activated fluorine-containing species from the surface of the wafer prior to the occurrence of any substantial amount of etching of an etch stop layer of either undoped silicon dioxide or silicon nitride. The hydrogen content of the C2HxFy additives imparts etchants including the same with specificity for doped silicon dioxide over undoped silicon dioxide.
In a first embodiment of the doped silicon dioxide etchant of the present invention, C2HxFy is a primary etchant. When used as the primary etchant, C2HxFy is selective for doped silicon dioxide over undoped silicon dioxide. Stated another way, C2HxFy etches doped silicon dioxide at a higher rate than it etches undoped silicon dioxide. As the primary etchant, C2HxFy etches doped silicon dioxide at a relatively slow rate compared to the etch rates of many conventional silicon dioxide dry etchants. Thus, additives which will increase the etch rate may be used in combination with C2HxFy. Such additives include, but are not limited to, CF4, CHF3, and other halogenated carbon materials which have been used as primary etchants in conventional doped silicon dioxide dry etch techniques.
Similarly, additives that increase an etchant's selectivity for silicon dioxide over silicon nitride (i.e., reduce the rate at which silicon nitride is etched) may also be used as additives to etchants which include C2HxFy as the primary etchant. U.S. Pat. No. 5,286,344 (the "'344 patent"), issued to Guy Blalock et al. on Feb. 15, 1994, the disclosure of which is hereby incorporated by reference in its entirety, discloses some exemplary additives that may enhance the selectivity of C2HxFy in this manner. The additives of the '344 patent are fluorocarbons in which the number of hydrogen atoms is equal to or greater than the number of fluorine atoms, such as CH2F2 and CH3F.
Other additives may also be used with silicon dioxide etchants that include C2HxFy as the primary etchant in order to alter other characteristics of such etchants, including, without limitation, the selectivity of such etchants for doped silicon dioxide over undoped silicon dioxide and the selectivity for certain types of doped silicon dioxide over other types of doped silicon dioxide.
In another embodiment of the doped silicon dioxide etchant of the present invention, C2HxFy is employed as an additive to one or more primary etchants. C2HxFy may be used as an additive to etchants which include a fluorocarbon primary etchant, such as CF4, CHF3, or other fluorocarbons which etch silicon dioxide at a higher rate than they etch silicon nitride (i.e., are selective for silicon dioxide over silicon nitride). According to the '344 patent, CF4 and CHF3 are exemplary primary etchants with which C2HxFy may be utilized as an additive.
When used as an additive to a silicon dioxide etchant, such as CF4 or CHF3, C2HxFy imparts the silicon dioxide etchant with selectivity for doped silicon dioxide over undoped silicon dioxide while permitting the doped silicon dioxide etch to proceed at a substantially normal rate.
The amount of C2HxFy that is used in an etchant mixture, relative to the amounts of other etchants and any carrier gas, may be varied in order to tailor the characteristics thereof and to achieve the desired etching results. The various characteristics of the etchant mixture which may be varied by altering the concentration of C2HxFy in the mixture include, but are not limited to, selectivity for doped silicon dioxide over undoped silicon dioxide, selectivity for silicon dioxide over silicon nitride, and the doped silicon dioxide etch rate.
An exemplary dry etchant that is selective for doped silicon dioxide over both undoped silicon dioxide and silicon nitride includes about 40% of the additive C2H2F4 (i.e., the C2HxFy component), about 30% of the primary etchant CHF3, and about 30% of CH2F2, an additive which improves the selectivity of the primary etchants for silicon dioxide over silicon nitride, each of the percentages based on the relative flow rates of each gas into the etcher.
Alternatively, the amounts of the C2HxFy component may be varied considerably. Etchants which include any amount of an additive of the general formula C2HxFy, where x is an integer from two to five, inclusive, where y is an integer from one to four, inclusive, and where x plus y equals six, are within the scope of the present invention. Exemplary etchants may include five percent, ten percent, twenty percent, sixty-five percent, or ninety percent of the C2HxFy additive or any combination of C2HxFy additives.
Similarly, it is also foreseen that C2HxFy may be employed as an additive to silicon dioxide dry etchants which include other components. For example, C2HxFy could be used along with an etchant which includes either CF4 or CHF3 or both of them as primary etchants and a carrier gas, such as argon or nitrogen. Alternatively, the C2HxFy-containing dry etchant may include one or more other additives that alter the various characteristics of the dry etchant, such as the etch rate, the degree of selectivity, and the type of selectivity. For example, as disclosed in the '344 patent, the use of CH2F2 as an additive enhances the selectivity of the dry etchant for silicon dioxide over silicon nitride. Combinations of the additives of the general formula C2HxFy may also be employed as components in a doped silicon dioxide dry etchant.
A preferred embodiment of the dry etch process of the present invention employs an etchant of the present invention (i.e., an etchant which includes C2HxFy), and is selective for doped silicon dioxide over both undoped silicon dioxide and silicon nitride. Thus, the dry etch process includes the etching of a doped silicon dioxide layer down to an etch stop of either undoped silicon dioxide or silicon nitride.
Referring to FIGS. 1 to 4, the etch process of the present invention, which utilizes the inventive etchant, is illustrated. FIG. 1 depicts an exemplary multi-layer structure 10, which is also referred to as a semiconductor device structure, that may be fabricated in part in accordance with the process of the present invention.
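As a brief illustrative aside on the exemplary mixture given above: because the percentages are defined by relative gas flow rates into the etcher, converting between a composition and per-gas flow setpoints is simple proportion, as sketched below. The 120 sccm total flow is a hypothetical placeholder; only the 40/30/30 split comes from the text.

```python
# Illustrative only: the 40/30/30 split is from the text; the 120 sccm
# total flow into the etcher is an assumed setpoint.
composition_pct = {"C2H2F4": 40.0, "CHF3": 30.0, "CH2F2": 30.0}

def flow_setpoints(composition, total_sccm):
    """Per-gas flow rates for a given total flow into the etch chamber."""
    return {gas: pct / 100.0 * total_sccm for gas, pct in composition.items()}

def percentages(flows):
    """Recover the composition percentages from a set of gas flows."""
    total = sum(flows.values())
    return {gas: 100.0 * flow / total for gas, flow in flows.items()}

flows = flow_setpoints(composition_pct, 120.0)
print(flows)               # {'C2H2F4': 48.0, 'CHF3': 36.0, 'CH2F2': 36.0}
print(percentages(flows))  # recovers the 40/30/30 split
```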
Multi-layer structure 10 includes a semiconductor substrate 12 (e.g., a silicon wafer, silicon-on-insulator (SOI), silicon-on-sapphire (SOS), silicon-on-glass (SOG), etc.), a field oxide layer 14 that is disposed on an active surface 13 of the semiconductor substrate and an active device region 16, polysilicon lines 18 disposed on the active device region, side wall spacers 20 positioned on each side of the polysilicon lines, an intermediate structural layer 22 disposed over each of the foregoing elements, and a passivation layer 24 disposed over the intermediate structure layer. Passivation layer 24 is fabricated from doped silicon dioxide, such as BPSG, PSG or BSG. Intermediate structural layer 22 may be fabricated from either silicon nitride or undoped silicon dioxide.
FIG. 2 depicts masking of multi-layer structure 10 prior to defining a structure through passivation layer 24. A mask 26, which is also referred to as a protective layer, is layered and patterned over passivation layer 24. Mask 26 may be formed from a material such as a photoresist, or photoimageable material. Exemplary positive photoresists that are useful as mask 26 may include a novolac resin, a diazonaphthaquinone, and a solvent, such as n-butyl acetate or xylene. Exemplary negative photoresists that are useful as mask 26 may include a cyclized synthetic rubber resin, bis-arylazide, and an aromatic solvent. Such a mask 26 may be applied to, or coated onto, multi-layer structure 10 and patterned by techniques that are known to those in the art, such as spin coating and photomask processing and patterning techniques. Alternatively, mask 26 may comprise an aerosol spray pattern of electrostatically chargeable hardenable liquid material, such as a polymer, which is not etched or is etched at a much slower rate than the underlying passivation layer 24. An exemplary method for spray-patterning such electrostatically chargeable hardenable liquid materials is described in U.S. Pat. No. 5,695,658 (the "'658 patent"), which issued to James J. Alwan on Dec. 9, 1997, the disclosure of which is hereby incorporated by reference. Both photoresist materials (positive and negative) and non-photoimageable materials may be employed as mask 26 in accordance with the '658 patent. The utilization of masks 26 which comprise other non-photoimageable materials and the processes for applying and patterning them are also within the scope of the method of the present invention. The patterning of mask 26 defines openings 28, which are also referred to as apertures or contact apertures, therethrough, through which predetermined structures will be defined in the underlying passivation layer 24 during a subsequent etch step. Mask 26 comprises a material that is resistant to the etchant of the present invention (i.e., the etchant does not etch mask 26 or etches the mask at a relatively slow rate compared to the rate at which the etch substrate is etched). Thus, the areas of passivation layer 24 which underlie mask 26 are protected from the etchant during the subsequent etch step.
Turning now to FIG. 3, an etch step is depicted, wherein an etchant 30, which is introduced into an etch chamber (not shown) either with or without a carrier gas, attacks the areas of passivation layer 24 that are exposed through openings 28 of mask 26.
Dry etch processes that are known to those of skill in the art, including, without limitation, high density plasma etching, reactive ion etching (RIE), magnetic ion etching (MIE), magnetically enhanced reactive ion etching (MERIE), plasma etching (PE), point plasma etching, plasma enhanced reactive ion etching (PERIE), and electron cyclotron resonance (ECR), may be employed with the etchant of the present invention and are within the scope of the process of the present invention. Etchant 30, which comprises a C2HxFy-containing etchant of the present invention, etches an aperture through passivation layer 24 in a substantially vertical fashion until intermediate structural layer 22 is exposed. Intermediate structural layer 22, which is fabricated from either undoped silicon dioxide or silicon nitride, acts as an etch stop layer. Thus, etchant 30 etches intermediate structural layer 22 at a slower rate than the rate at which passivation layer 24 is etched. After the exposed areas of passivation layer 24 have been etched, mask 26 may be removed by processes that are known in the art, such as washing or etching techniques.
FIG. 4 illustrates a contact opening 32, which is also referred to as a contact, that has been formed through passivation layer 24 by the etch process of the present invention. Contact opening 32 includes side walls 34 that are substantially vertical relative to active surface 13 of semiconductor substrate 12. Contact openings 32 of the multi-layer structure 10 expose at least a portion of the intermediate structural layer 22 that lies above each of polysilicon lines 18, which may be logic circuits, such as word lines. Intermediate structural layer 22 defines a cap 36 over each polysilicon line 18. Thus, cap 36 may be fabricated from either undoped silicon dioxide or silicon nitride.
Although the foregoing description contains many specifics, these should not be construed as limiting the scope of the present invention, but merely as providing illustrations of some of the presently preferred embodiments. Similarly, other embodiments of the invention may be devised which do not depart from the spirit or scope of the present invention. The scope of this invention is, therefore, indicated and limited only by the appended claims and their legal equivalents, rather than by the foregoing description. All additions, deletions and modifications to the invention as disclosed herein which fall within the meaning and scope of the claims are to be embraced within their scope. |
Design and operation of a processing device (100) are configurable to optimize wake-up time and peak power cost during restoration of a machine state from non-volatile storage. The processing device includes a plurality of non-volatile logic element arrays (110) configured to store a machine state represented by a plurality of volatile storage elements (120) of the processing device (100). A stored machine state is read out from the plurality of non-volatile logic element arrays (110) to the plurality of volatile storage elements (120). During manufacturing, a number of rows and a number of bits per row in the non-volatile logic element arrays (110) are chosen based on a target wake-up time and a peak power cost. In another approach, writing data to or reading data from the plurality of non-volatile arrays (110) can be done in parallel, sequentially, or in any combination thereof to optimize operation characteristics. |
CLAIMS What is Claimed is: 1. A method for customizing wake time and peak power cost during a restoration of a computing device volatile storage system state from a non-volatile array backup, the method comprising: manufacturing a processing device having a plurality of non-volatile logic element arrays configured to store a machine state represented by a plurality of volatile storage elements of the processing device and wherein the processing device is configured to enable reading out a stored machine state from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements; wherein a number of rows and a number of bits per row in individual ones of the plurality of non-volatile logic element arrays are based on a target wake up time based on a time used to read data one row at a time from one of the plurality of non-volatile logic element arrays and a peak power cost based on a peak power used to read a row of a given length of bits at a same time from the one of the plurality of non-volatile logic element arrays. 2. The method of claim 1 further comprising analyzing simulations of a design of the non-volatile logic element arrays to determine peak and average power per array. 3. The method of claim 1 further comprising analyzing simulations of the computing device volatile storage system running application code for peak and average power consumption. 4. The method of claim 1 further comprising: analyzing simulations of a design of the non-volatile logic element arrays to determine first analysis results including peak and average power per array; analyzing simulations of the computing device volatile storage system running application code to determine second analysis results including peak and average power consumption for the computing device volatile storage system; comparing at least aspects of the first analysis results and at least aspects of the second analysis results with the target wake up time and the peak power cost to determine a target design for the plurality of non-volatile logic element arrays. 5. The method of claim 4 wherein the comparing results with the target wake up time and the peak power cost further comprises analyzing a capacity of planned power distribution for the computing device volatile storage system and the plurality of non-volatile logic element arrays. 6. A computing device apparatus providing non-volatile logic based computing, the apparatus comprising: a plurality of non-volatile logic element arrays; a plurality of volatile storage elements; at least one non-volatile logic controller configured to control the plurality of non-volatile logic element arrays to store a machine state represented by the plurality of volatile storage elements and to read out a stored machine state from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements; at least one multiplexer connected to variably connect individual ones of the volatile storage elements to one or more corresponding individual ones of the non-volatile logic element arrays; wherein the at least one non-volatile logic controller is configured to variably control at least one of storage of data to or reading of data from the plurality of non-volatile arrays in parallel, sequentially, or in any combination thereof based on input signals. 7. The computing device apparatus of claim 6 wherein the at least one non-volatile logic controller is configured to receive the input signals through a user interface. 8.
The computing device apparatus of claim 6 wherein the at least one non-volatile logic controller is configured to receive the input signals from a separate computing element. 9. The computing device apparatus of claim 8 wherein the at least one non-volatile logic controller is configured to receive the input signals from an application executed by the separate computing element. 10. The computing device apparatus of claim 9 wherein the separate computing element is configured to execute the application to determine a reading sequence for the plurality of non-volatile arrays based at least in part on a determination of power and computing resource requirements for the computing device apparatus. 11. A method comprising: controlling a plurality of non-volatile logic element arrays to store a machine state represented by a plurality of volatile storage elements and to read out a stored machine state from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements; at least one non-volatile logic controller variably controlling at least one of storage of data to or reading of data from the plurality of non-volatile arrays in parallel, sequentially, or in any combination thereof based on input signals. 12. The method of claim 11 further comprising receiving the input signals through a user interface. 13. The method of claim 11 further comprising the at least one non-volatile logic controller receiving the input signals from a separate computing element. 14. The method of claim 13 further comprising the at least one non-volatile logic controller receiving the input signals from an application executed by the separate computing element. 15. The method of claim 14 further comprising the separate computing element executing the application to determine a reading sequence for the plurality of non-volatile arrays based at least in part on a determination of power and computing resource requirements for the computing device apparatus. |
CUSTOMIZABLE BACKUP AND RESTORE FROM NONVOLATILE LOGIC ARRAY [0001] This generally relates to nonvolatile memory cells and their use in a system, and in particular, in combination with logic arrays to provide nonvolatile logic modules. BACKGROUND [0002] Many portable electronic devices such as cellular phones, digital cameras/camcorders, personal digital assistants, laptop computers and video games operate on batteries. During periods of inactivity the device may not perform processing operations and may be placed in a power-down or standby power mode to conserve power. Power provided to a portion of the logic within the electronic device may be turned off in a low power standby power mode. However, the presence of leakage current during the standby power mode represents a challenge for designing portable, battery operated devices. Data retention circuits such as flip-flops and/or latches within the device may be used to store state information for later use prior to the device entering the standby power mode. The data retention latch, which may also be referred to as a shadow latch or a balloon latch, is typically powered by a separate 'always on' power supply. [0003] A known technique for reducing leakage current during periods of inactivity utilizes multi-threshold CMOS (MTCMOS) technology to implement the shadow latch. In this approach, the shadow latch utilizes thick gate oxide transistors and/or high threshold voltage (Vt) transistors to reduce the leakage current in standby power mode. The shadow latch is typically detached from the rest of the circuit during normal operation (e.g., during an active power mode) to maintain system performance. To retain data in a "master-slave" flip-flop topology, a third latch, e.g., the shadow latch, may be added to the master latch and the slave latch for the data retention. In other cases, the slave latch may be configured to operate as the retention latch during low power operation. However, some power is still required to retain the saved state. For example, see US Patent 7,639,056, "Ultra Low Area Overhead Retention Flip-Flop for Power-Down Applications", which is incorporated by reference herein. [0004] System on Chip (SoC) is a concept that has been around for a long time; the basic approach is to integrate more and more functionality into a given device. This integration can take the form of either hardware or solution software. Performance gains are traditionally achieved by increased clock rates and more advanced process nodes. Many SoC designs pair a microprocessor core, or multiple cores, with various peripheral devices and memory circuits. [0005] Energy harvesting, also known as power harvesting or energy scavenging, is the process by which energy is derived from external sources, captured, and stored for small, wireless autonomous devices, such as those used in wearable electronics and wireless sensor networks. Harvested energy may be derived from various sources, such as solar power, thermal energy, wind energy, salinity gradients, and kinetic energy. However, typical energy harvesters provide a very small amount of power for low-energy electronics. The energy source for energy harvesters is present as ambient background and is available for use. For example, temperature gradients exist from the operation of a combustion engine, and in urban areas, there is a large amount of electromagnetic energy in the environment because of radio and television broadcasting. BRIEF DESCRIPTION OF THE DRAWINGS [0006] FIG.
1 is a functional block diagram of a portion of an example system on chip (SoC) as configured in accordance with various embodiments of the invention; [0007] FIG. 2 is a more detailed block diagram of one flip-flop cloud used in the SoC of FIG. 1; [0008] FIG. 3 is a plot illustrating polarization hysteresis exhibited by a ferroelectric capacitor; [0009] FIGS. 4-7 are schematic and timing diagrams illustrating an example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention; [0010] FIGS. 8-9 are schematic and timing diagrams illustrating another example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention; [0011] FIG. 10 is a block diagram illustrating an example NVL array used in the SoC of FIG. 1; [0012] FIGS. 11A and 11B are more detailed schematics of input/output circuits used in the NVL array of FIG. 10; [0013] FIG. 12A is a timing diagram illustrating an example offset voltage test during a read cycle as configured in accordance with various embodiments of the invention; [0014] FIG. 12B illustrates a histogram generated during an example sweep of offset voltage as configured in accordance with various embodiments of the invention; [0015] FIG. 13 is a schematic illustrating parity generation in the NVL array of FIG. 10; [0016] FIG. 14 is a block diagram illustrating example power domains within an NVL array as configured in accordance with various embodiments of the invention; [0017] FIG. 15 is a schematic of an example level converter for use in the NVL array as configured in accordance with various embodiments of the invention; [0018] FIG. 16 is a timing diagram illustrating an example operation of level shifting using a sense amp within a ferroelectric bitcell as configured in accordance with various embodiments of the invention; [0019] FIG. 17 is a block diagram of an example power detection arrangement as configured in accordance with various embodiments of the invention; [0020] FIG. 18 is a flow chart illustrating an example operation of a method for customizing wake time and peak power cost in accordance with various embodiments of the invention; [0021] FIG. 19 is a flow chart illustrating an example operation of a method for customizing read out of NVL arrays in accordance with various embodiments of the invention; and [0022] FIG. 20 is a block diagram of another example SoC that includes NVL arrays as configured in accordance with various embodiments of the invention. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0023] While prior art systems made use of retention latches to retain the state of flip-flops in logic modules during low power operation, some power is still required to retain state. In contrast, nonvolatile elements can retain the state of flip-flops in logic modules while power is completely removed. Such logic elements will be referred to herein as Non-Volatile Logic (NVL). A micro-control unit (MCU) implemented with NVL within an SoC (system on a chip) may have the ability to stop, power down, and power up with no loss in functionality. A system reset/reboot is not required to resume operation after power has been completely removed.
This capability is ideal for emerging energy harvesting applications, such as Near Field Communication (NFC), radio frequency identification (RFID) applications, and embedded control and monitoring systems, for example, where the time and power cost of the reset/reboot process can consume much of the available energy, leaving little or no energy for useful computation, sensing, or control functions. Though this description discusses an SoC containing a programmable MCU for sequencing the SoC state machines, one of ordinary skill in the art can see that NVL can be applied to state machines hard coded into ordinary logic gates or ROM, PLA, or PLD based control systems. [0024] In one approach, an SoC includes one or more blocks of nonvolatile logic. For example, a non-volatile logic (NVL) based SoC may back up its working state (all flip-flops) upon receiving a power interrupt, have zero leakage in sleep mode, and need less than 400 ns to restore the system state upon power-up. [0025] Without NVL, a chip would either have to keep all flip-flops powered in at least a low power retention state that requires a continual power source even in standby mode or waste energy and time rebooting after power-up. For energy harvesting applications, NVL is useful because there is no constant power source required to preserve the state of flip-flops (FFs), and even when the intermittent power source is available, boot-up code alone may consume all the harvested energy. For handheld devices with limited cooling and battery capacity, zero-leakage ICs (integrated circuits) with "instant-on" capability are ideal. [0026] Ferroelectric random access memory (FRAM) is a non-volatile memory technology with similar behavior to DRAM (dynamic random access memory). Each individual bit can be accessed, but unlike EEPROM (electrically erasable programmable read only memory) or Flash, FRAM does not require a special sequence to write data nor does it require a charge pump to achieve the required higher programming voltages. Each ferroelectric memory cell contains one or more ferroelectric capacitors (FeCap). Individual ferroelectric capacitors may be used as non-volatile elements in the NVL circuits described herein. [0027] FIG. 1 is a functional block diagram illustrating a portion of a computing device, in this case an example system on chip (SoC) 100 providing non-volatile logic based computing features. While the term SoC is used herein to refer to an integrated circuit that contains one or more system elements, the teachings of this disclosure can be applied to various types of integrated circuits that contain functional logic modules such as latches, integrated clock gating cells, and flip-flop circuit elements (FF) that provide non-volatile state retention. Embedding non-volatile storage elements outside the controlled environment of a large array presents reliability and fabrication challenges. An NVL bitcell based NVL array is typically designed for maximum read signal margin and in-situ margin testability, as is needed for any NV-memory technology. However, adding testability features to individual NVL FFs may be prohibitive in terms of area overhead. [0028] To amortize the test feature costs and improve manufacturability, and with reference to the example of FIGS. 1 and 2, a plurality of non-volatile logic element arrays or NVL arrays 110 are disposed with a plurality of volatile storage elements 220.
At least one nonvolatile logic controller 106 is configured to control the plurality of NVL arrays 110 to store a machine state represented by the plurality of volatile storage elements 220 and to read out a stored machine state from the plurality of NVL arrays 110 to the plurality of volatile storage elements 220. For instance, the at least one non-volatile logic controller 106 is configured to generate a control sequence for saving the machine state to or retrieving the machine state from the plurality of NVL arrays 110. A multiplexer 212 is connected to variably connect individual ones of the volatile storage elements 220 to one or more corresponding individual ones of the NVL arrays 110. [0029] In the illustrated example, the computing device apparatus is arranged on a single chip, here an SoC 100 implemented using 256b mini-arrays 110, which will be referred to herein as NVL arrays, of FeCap (ferroelectric capacitor) based bitcells dispersed throughout the logic cloud to save the state of the various flip-flops 120 when power is removed. Each cloud 102-104 of FFs 120 includes an associated NVL array 110. Such dispersal results in individual ones of the NVL arrays 110 being arranged physically close to, and connected to receive data from, corresponding individual ones of the volatile storage elements 220. A central NVL controller 106 controls all the arrays and their communication with FFs 120. While three FF clouds 102-104 are illustrated here, SoC 100 may have additional, or fewer, FF clouds, all controlled by NVL controller 106. The SoC 100 can be partitioned into more than one NVL domain in which there is a dedicated NVL controller for managing the NVL arrays 110 and FFs 120 in each of the separate NVL domains. The existing NVL array embodiment uses 256 bit mini-arrays, but the arrays may have a greater or lesser number of bits as needed, as illustrated in the sketch below. [0030] SoC 100 is implemented using modified retention flip-flops 120 including circuitry configured to enable write back of data from individual ones of the plurality of nonvolatile logic element arrays to the individual ones of the plurality of flip-flop circuits. There are various known ways to implement a retention flip-flop. For example, a data input may be latched by a first latch. A second latch coupled to the first latch may receive the data input for retention while the first latch is inoperative in a standby power mode. The first latch receives power from a first power line that is switched off during the standby power mode. The second latch receives power from a second power line that remains on during the standby mode. A controller receives a clock input and a retention signal and provides a clock output to the first latch and the second latch. A change in the retention signal is indicative of a transition to the standby power mode. The controller continues to hold the clock output at a predefined voltage level and the second latch continues to receive power from the second power line in the standby power mode, thereby retaining the data input. Such a retention latch is described in more detail in US Patent 7,639,056, "Ultra Low Area Overhead Retention Flip-Flop for Power-Down Applications". [0031] FIG. 2 illustrates an example retention flop architecture that does not require that the clock be held in a particular state during retention. In such a "clock free" NVL flop design, the clock value is a "don't care" during retention.
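To make the cloud-to-array organization of paragraph [0029] concrete, the following minimal sketch models how a cloud of flip-flops maps onto an n-row by m-bit NVL mini-array, using the example figures from this description (an 8 x 32 array with one parity bit per row serving up to 248 FFs). It is illustrative only and not part of the patent; the names NVLArrayGeometry and arrays_needed are hypothetical.

# Illustrative sketch (hypothetical names, not from the patent): how a
# flip-flop cloud maps onto an NVL mini-array of the example geometry.
from dataclasses import dataclass
import math

@dataclass
class NVLArrayGeometry:
    rows: int = 8
    bits_per_row: int = 32    # total bits in one row of the mini-array
    parity_bits: int = 1      # one bit per row reserved for parity

    @property
    def data_bits_per_row(self) -> int:
        return self.bits_per_row - self.parity_bits

    @property
    def ff_capacity(self) -> int:
        # Maximum number of flip-flops one mini-array can back up.
        return self.rows * self.data_bits_per_row

def arrays_needed(ff_count: int, geom: NVLArrayGeometry) -> int:
    # Number of mini-arrays required to back up a cloud of ff_count FFs.
    return math.ceil(ff_count / geom.ff_capacity)

geom = NVLArrayGeometry()
assert geom.ff_capacity == 248      # 8 rows x 31 data bits per row
print(arrays_needed(248, geom))     # -> 1: one mini-array per 248-FF cloud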
[0032] In SoC 100, modified retention FFs 120 include simple input and control modifications to allow the state of each FF to be saved in an associated FeCap bit cell in NVL array 110, for example, when the system is being transitioned to a power off state. When the system is restored, the saved state is transferred from NVL array 110 back to each FF 120. Power savings and data integrity can be improved through implementation of particular power configurations. In one such approach, individual retention flip-flop circuits include a primary logic circuit portion (master stage or latch) powered by a first power domain (such as VDDL in the below described example) and a slave stage circuit portion powered by a second power domain (such as VDDR in the below described example). In this approach, the first power domain is configured to be powered down and the second power domain is active during write back of data from the plurality of NVL arrays to the plurality of volatile storage elements. The plurality of non-volatile logic elements are configured to be powered by a third power domain (such as VDDN in the below described example) that is configured to be powered down during regular operation of the computing device apparatus. [0033] With this configuration, a plurality of power domains can be implemented that are independently powered up or powered down in a manner that can be specifically designed to fit a given implementation. Thus, in another aspect, the computing apparatus includes a first power domain configured to supply power to switched logic elements of the computing device apparatus and a second power domain configured to supply power to logic elements configured to control signals for storing data to or reading data from the plurality of non-volatile logic element arrays. Where the plurality of volatile storage elements comprise retention flip-flops, the second power domain is configured to provide power to a slave stage of individual ones of the retention flip-flops. A third power domain supplies power for the plurality of non-volatile logic element arrays. In addition to the power domains, NVL arrays can be defined as domains relating to particular functions. For example, a first set of at least one of the plurality of non-volatile logic element arrays can be associated with a first function of the computing device apparatus and a second set of at least one of the plurality of non-volatile logic element arrays can be associated with a second function of the computing device apparatus. Operation of the first set of at least one of the plurality of non-volatile logic element arrays is independent of operation of the second set of at least one of the plurality of non-volatile logic element arrays. So configured, flexibility in the control and handling of the separate NVL array domains or sets allows more granular control of the computing device's overall function. [0034] This more specific control can be applied to the power domains as well. In one example, the first power domain is divided into a first portion configured to supply power to switched logic elements associated with the first function and a second portion configured to supply power to switched logic elements associated with the second function. The first portion and the second portion of the first power domain are individually configured to be powered up or down independently of other portions of the first power domain.
Similarly, the third power domain can be divided into a first portion configured to supply power to non-volatile logic element arrays associated with the first function and a second portion configured to supply power to non-volatile logic element arrays associated with the second function. As with the first power domain, the first portion and the second portion of the third power domain are individually configured to be powered up or down independently of other portions of the third power domain. [0035] So configured, if individual functions are not used for a given device, flip-flops and NVL arrays associated with the unused functions can be respectively powered down and operated separately from the other flip-flops and NVL arrays. Such flexibility in power and operation management allows one to tailor the functionality of a computing device with respect to power usage and function. This can be further illustrated in the following example design having a CPU, three SPI interfaces, three UART interfaces, three I2C interfaces, and only one logic power domain (VDDL). The logic power domain is distinguished from the retention or NVL power domains (VDDR and VDDN respectively), although these teachings can be applied to those power domains as well. Although this example device has only one logic power domain, a given application for the device might only use one of the three SPI units, one of the three UARTs, and one of the three I2C peripherals. To allow applications to optimize the NVL application wake-up and sleep times and energy costs, the VDDL power domain can be partitioned into 10 separate NVL domains (one CPU, three SPI, three UART, three I2C, totalling 10 NVL domains), each of which can be enabled/disabled independently of the others. So, the customer could enable NVL capability for the CPU, one SPI, one UART, and one I2C for their specific application while disabling the others. In addition, this partitioning also allows flexibility in time as well as energy, and the different NVL domains can save and restore state at different points in time; a minimal sketch of such a partition follows this passage. [0036] To add further flexibility, NVL domains can overlap with power domains. Referring to the above example, four power domains can be defined: one each for CPU, SPI, UART, and I2C (each peripheral power domain has three functional units) while defining three NVL domains within each peripheral domain and one for the CPU (a total of 10 NVL domains again). In this case, individual power domains turn on or off in addition to controlling the NVL domains inside each power domain for added flexibility in power savings and wakeup/sleep timing. [0037] Moreover, individual ones of the first power domain, the second power domain, and the third power domain are configured to be powered down or up independently of other ones of the first power domain, the second power domain, and the third power domain. For instance, integral power gates can be configured to be controlled to power down the individual ones of the first power domain, the second power domain, and the third power domain. As described in Table 1 below, the third power domain is configured to be powered down during regular operation of the computing device apparatus, and the first power domain is configured to be powered down during a write back of data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements. A fourth power domain can be configured to supply power to real time clocks and wake-up interrupt logic.
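As a concrete illustration of the ten-domain partition described in paragraph [0035], the sketch below shows NVL domains being enabled independently. It is a minimal illustration, not from the patent; the dictionary layout and domain names are hypothetical.

# Illustrative sketch (hypothetical names): one logic power domain (VDDL)
# partitioned into ten independently enabled NVL domains.
nvl_domains = {"CPU": True}
nvl_domains.update({f"SPI{i}": False for i in range(3)})
nvl_domains.update({f"UART{i}": False for i in range(3)})
nvl_domains.update({f"I2C{i}": False for i in range(3)})

# An application that uses only one of each peripheral enables just those
# NVL domains; the disabled domains neither save nor restore state, which
# shortens wake-up/sleep sequences and saves energy.
for unit in ("SPI0", "UART0", "I2C0"):
    nvl_domains[unit] = True

print([name for name, on in nvl_domains.items() if on])
# -> ['CPU', 'SPI0', 'UART0', 'I2C0']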
[0038] Such approaches can be further understood with reference to the illustrated example SoC 100, where NVL arrays 110 and controller 106 are operated on an NVL power domain referred to as VDDN and are switched off during regular operation. All logic, memory blocks 107 such as ROM (read only memory) and SRAM (static random access memory), and the master stages of FFs are on a logic power domain referred to as VDDL. FRAM (ferroelectric random access memory) arrays are directly connected to a dedicated global supply rail (VDDZ) maintained at a higher fixed voltage needed for FRAM (i.e., VDDL <= VDDZ, where VDDZ is a fixed supply and VDDL can be varied as long as VDDL remains at a lower potential than VDDZ). Note that FRAM arrays as shown in 103 typically contain integrated power switches that allow the FRAM arrays to be powered down as needed, though it can easily be seen that FRAM arrays without internal power switches can be utilized in conjunction with power switches that are external to the FRAM array. The slave stages of retention FFs are on a retention power domain referred to as the VDDR domain to enable regular retention in a stand-by mode of operation. Table 1 summarizes power domain operation during normal operation, system backup to NVL arrays, sleep mode, system restoration from NVL arrays, and the return to normal operation. Table 1 also specifies the domains used during a standby idle mode that may be initiated under control of system software in order to enter a reduced power state using the volatile retention function of the retention flip-flops. A set of switches indicated at 108 are used to control the various power domains. There may be multiple switches 108 that may be distributed throughout SoC 100 and controlled by software executed by a processor on SoC 100 and/or by a hardware controller (not shown) within SoC 100. There may be additional domains in addition to the three illustrated here, as will be described later.

Table 1 - system power modes

[0039] State info could be saved in a large centralized FRAM array, but doing so would require more time to enter sleep mode, a longer wakeup time, excessive routing, and higher power costs caused by the lack of parallel access to system FFs. [0040] FIG. 2 is a more detailed block diagram of one FF cloud 102 used in SoC 100. In this embodiment, each FF cloud includes up to 248 flip-flops and each NVL array is organized as an 8 x 32 bit array, but one bit is used for parity in this embodiment. However, in other embodiments, the number of flip-flops and the organization of the NVL array may have a different configuration, such as 4 x m, 16 x m, etc., where m is chosen to match the size of the FF cloud. In some embodiments, all of the NVL arrays in the various clouds may be the same size, while in other approaches there may be different size NVL arrays in the same SoC. [0041] Block 220 is a more detailed schematic of each retention FF 120. Several of the signals have an inverted version indicated by the suffix "B" (referring to "bar" or /), such as RET and RETB, CLK and CLKB, etc. Each retention FF includes a master latch 221 and a slave latch 222. Slave latch 222 is formed by inverter 223 and inverter 224. Inverter 224 includes a set of transistors controlled by the retention signal (RET, RETB) that are used to retain the FF state during low power sleep periods, during which power domain VDDR remains on while power domain VDDL is turned off, as described above and in Table 1. [0042] NVL array 110 is logically connected with the 248 FFs it serves in cloud 102.
Generally speaking, to enable data transfer from an NVL array to the FFs, individual FFs include circuitry configured to enable write back of data from individual ones of the plurality of NVL arrays 110. In the illustrated example, two additional ports are provided on the slave latch 222 of each FF as shown in block 220. A data input port (gate 225) is configured to insert data ND from one of the NVL arrays 110 into an associated volatile storage element 220. The data input port is configured to insert the data ND by allowing passage of a stored data related signal from the one of the NVL arrays to a slave stage of the associated flip-flop circuit in response to receiving an update signal NU from the at least one non-volatile logic controller 106 on a data input enable port to trigger the data input port. Inverter 223 is configured to be disabled in response to receiving the inverted NVL update signal NUZ to avoid an electrical conflict between the tri-state inverter 223 and the NVL data port input tri-state inverter 225. [0043] More specifically, in the illustrated example, the inv-inv feedback pair (223 and 224) forms the latch itself. These inverters make a very stable configuration for holding the data state and will fight any attempts to change the latch state unless at least one of the inverters is disabled to prevent electrical conflict when trying to overwrite the current state with the next state via one of the data ports. The illustrated NVL FF 220 includes two data ports that access the slave latch 222, as compared to one data port for a regular flop. One port transfers data from the master stage 221 to the slave stage 222 via the CMOS pass gate controlled by the clock. When using this port to update the slave stage 222, the inverter 224 driving onto the output node of the pass gate controlled by CLK is disabled to avoid an electrical conflict, while the inverter 223 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state in preparation for holding the data when the clock goes low (for a posedge FF). [0044] For the same reason, the inverter 223 is disabled when the ND data port is activated by NU transitioning to the active high state, to avoid an electrical conflict on the ND port. The second inverter 224 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state to be latched when NU goes low. In this example, the NU port does not in any way impact the other data port controlled by the clock. On a dual port FF, having both ports active at the same time is an illegal control condition, and the resulting port conflict means the resulting next state will be indeterminate. To avoid a port conflict, the system holds the clock in the inactive state if the slave state is updated while in functional mode. In retention mode, the RET signal along with supporting circuits inside the FF are used to prevent electrical conflicts independent of the state of CLK (see the inverter controlled by RETB in the master stage). [0045] As illustrated, these additional elements are disposed in the slave stage 222 of the associated FF. The additional transistors, however, are not on the critical path of the FF and have only 1.8% and 6.9% impact on normal FF performance and power (simulation data), respectively, in this particular implementation.
When data from the NVL array is valid on the ND (NVL-Data) port, the NU (NVL-Update) control input is pulsed high for a cycle to write to the FF. The thirty-one bit data output of an NVL array fans out to the ND ports of eight thirty-one bit FF groups. [0046] To save flip-flop state, a multiplexer is configured to pass states from a plurality of the individual ones of the plurality of volatile storage elements 220 for essentially simultaneous storage in an individual one of the plurality of NVL arrays 110. For instance, the multiplexer may be configured to connect to N groups of M volatile storage elements of the plurality of volatile storage elements per group and to an N by M size NVL array of the plurality of NVL arrays. In this configuration, the multiplexer connects one of the N groups to the N by M size NVL array to store data from the M volatile storage elements into a row of the N by M size NVL array at one time. In the illustrated example, the Q outputs of 248 FFs are connected to the 31b parallel data input of NVL array 110 through a 31b wide 8-1 mux 212. To minimize FF loading, the mux may be broken down into smaller muxes based on the layout of the FF cloud and placed close to the FFs they serve. Again, the NVL controller synchronizes writing to the NVL array and the select signals MUX_SEL<2:0> of the 8-1 mux 212. [0047] When the FFs are operating in a retention mode, a clock CLK of the computing device is a "don't care": it is irrelevant for the volatile storage elements with respect to updating the slave stage state whenever the NU signal is active, whereby the non-volatile logic controller is configured to control and effect storage of data from individual ones of the volatile storage elements into individual ones of the non-volatile storage elements. In other words, the clock CLK control is not needed during NVL data recovery in retention mode, but the clock CLK should be controlled at the system level once the system state is restored, right before the transition between retention mode and functional mode. In another approach, the NVL state can be recovered to the volatile storage elements when the system is in a functional mode. In this situation, where the VDDL power is active, the clock CLK is held in the inactive state for the volatile storage elements during the data restoration from the NVL array, whereby the non-volatile logic controller is configured to control and effect transfer of data from individual ones of the non-volatile storage elements into individual ones of the volatile storage elements. For example, a system clock CLK is typically held low for positive edge FF based logic and held high for negative edge FF based logic. [0048] Generally speaking, to move from regular operation into system backup mode, the first step is to stop the system clock(s) in an inactive state to freeze the machine state so that it does not change while the backup is in progress. The clocks are held in the inactive state until the backup is complete. After backup is complete, all power domains are powered down and the state of the clock becomes a don't care in sleep mode by definition. [0049] When restoring the state from NVL arrays, the FFs are placed in a retention state (see Table 2 below) in which the clock continues to be a don't care as long as the RET signal is active (the clock can be a don't care by virtue of special transistors added to each retention FF, controlled by the RET signal). While restoring NVL state, the flops remain in retention mode, so the clock remains a don't care.
Once the NVL state is recovered, the machine logic that controls the system clocks is also restored to the state it was in at the time of the backup. For this example, that means all of the controls (including the volatile storage elements or FFs) that placed the system clocks into inactive states have been restored, so the system clocks will remain in the inactive state upon completion of NVL data recovery. Now the RET signal can be deactivated, and the system will sit quiescent with clocks deactivated until the NVL controller signals to the power management controller that the restoration is complete, in response to which the power management controller will enable the clocks again. [0050] To restore flip-flop state during restoration, NVL controller 106 reads an NVL row in NVL array 110 and then pulses the NU signal for the appropriate flip-flop group. During system restore, retention signal RET is held high and the slave latch is written from ND with power domain VDDL unpowered; at this point the state of the system clock CLK is a don't care. FFs are placed in the retention state with VDDL = 0V and VDDR = VDD in order to suppress excess power consumption related to the spurious data switching that occurs as each group of 31 FFs is updated during NVL array read operations. Suitably modified non-retention flops can be used in NVL based SoCs at the expense of higher power consumption during NVL data recovery operations. [0051] System clock CLK should start from low once VDDL comes up, and thereafter normal synchronous operation continues with the updated information in the FFs. Data transfer between the NVL arrays and their respective FFs can be done in serial or parallel or any combination thereof to trade off peak current and backup/restore time. Because direct access to the FFs is provided under the control of at least one non-volatile logic controller that is separate from the central processing unit of the computing device apparatus, intervention from a microcontroller processing unit (CPU) is not required for NVL operations; therefore the implementation is SoC/CPU architecture agnostic. Table 2 summarizes operation of the NVL flip-flops.

Table 2 - NVL Flip Flop truth table

[0052] Because the at least one non-volatile logic controller is configured to variably control data transfer to or reading from the plurality of non-volatile arrays in parallel, sequentially, or in any combination thereof based on input signals, system designers have additional options with respect to tailoring system operation specifications to particular needs. For instance, because no computation can occur on an MCU SoC while the system enters or wakes from a low power state, minimizing the wakeup and go-to-sleep times is advantageous. On the other hand, non-volatile state retention is power intensive because significant energy is needed to save and restore state to or from non-volatile elements such as ferroelectric capacitors. The power required to save and restore system state can exceed the capacity of the power delivery system and cause problems such as electromigration induced power grid degradation, battery life reduction due to excessive peak current draw, or generation of high levels of noise on the power supply system that can degrade signal integrity on die. Thus, allowing a system designer to balance these two concerns is desirable; the sketch below illustrates the tradeoff.
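The following minimal sketch quantifies that tradeoff for restore scheduling. It is not from the patent; the per-row time and power constants are arbitrary placeholders, and restore_profile is a hypothetical helper that only shows the shape of the computation.

# Illustrative sketch (placeholder numbers): restore time versus peak power
# when reading multiple NVL arrays sequentially, in parallel, or in groups.
import math

T_ROW_NS = 3.0     # assumed time to read one row from one array (ns)
P_ROW_MW = 2.0     # assumed peak power while reading one row (mW)

def restore_profile(n_arrays: int, rows: int, group_size: int):
    # Restore n_arrays, reading group_size arrays concurrently.
    # group_size=1 is fully sequential; group_size=n_arrays is fully parallel.
    groups = math.ceil(n_arrays / group_size)
    total_time_ns = groups * rows * T_ROW_NS   # rows are read one at a time
    peak_power_mw = group_size * P_ROW_MW      # concurrent row reads add up
    return total_time_ns, peak_power_mw

for g in (1, 4, 16):
    t, p = restore_profile(n_arrays=16, rows=8, group_size=g)
    print(f"group={g:2d}: restore={t:6.1f} ns, peak={p:5.1f} mW")

# Fully parallel is fastest but draws the highest peak current; a designer
# picks the largest group size the power delivery network can tolerate.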
[0053] In one such approach, the at least one non-volatile logic controller 106 is configured to receive the input signals through a user interface 125, such as those known to those of skill in the art. In another approach, the at least one non-volatile logic controller is configured to receive the input signals from a separate computing element 130 that may be executing an application. In one such approach, the separate computing element is configured to execute the application to determine a reading sequence for the plurality of non-volatile arrays based at least in part on a determination of power and computing resource requirements for the computing device apparatus. So configured, a system user can manipulate the system state store and retrieve procedure to fit a given design. [0054] FIG. 3 is a plot illustrating the polarization hysteresis exhibited by a ferroelectric capacitor. The general operation of ferroelectric bit cells is known. When most materials are polarized, the polarization induced, P, is almost exactly proportional to the applied external electric field E; so the polarization is a linear function, referred to as dielectric polarization. In addition to being nonlinear, ferroelectric materials demonstrate a spontaneous nonzero polarization, as illustrated in FIG. 3, when the applied field E is zero. The distinguishing feature of ferroelectrics is that the spontaneous polarization can be reversed by an applied electric field; the polarization is dependent not only on the current electric field but also on its history, yielding a hysteresis loop. The term "ferroelectric" is used to indicate the analogy to ferromagnetic materials, which have spontaneous magnetization and also exhibit hysteresis loops. [0055] The dielectric constant of a ferroelectric capacitor is typically much higher than that of a linear dielectric because of the effects of semi-permanent electric dipoles formed in the crystal structure of the ferroelectric material. When an external electric field is applied across a ferroelectric dielectric, the dipoles tend to align themselves with the field direction, produced by small shifts in the positions of atoms that result in shifts in the distributions of electronic charge in the crystal structure. After the charge is removed, the dipoles retain their polarization state. Binary "0"s and "1"s are stored as one of two possible electric polarizations in each data storage cell. For example, in the figure a "1" may be encoded using the negative remnant polarization 302, and a "0" may be encoded using the positive remnant polarization 304, or vice versa. [0056] Ferroelectric random access memories have been implemented in several configurations. A one transistor, one capacitor (1T-1C) storage cell design in an FeRAM array is similar in construction to the storage cell in widely used DRAM in that both cell types include one capacitor and one access transistor. In a DRAM cell capacitor, a linear dielectric is used, whereas in an FeRAM cell capacitor the dielectric structure includes ferroelectric material, typically lead zirconate titanate (PZT). Due to the overhead of accessing a DRAM type array, a 1T-1C cell is less desirable for use in small arrays such as NVL array 110. [0057] A four capacitor, six transistor (4C-6T) cell is a common type of cell that is easier to use in small arrays. An improved four capacitor cell will now be described. [0058] FIG.
4 is a schematic illustrating one embodiment of a ferroelectric nonvolatile bitcell 400 that includes four capacitors and twelve transistors (4C-12T). The four FeCaps are arranged as two pairs in a differential arrangement. FeCaps C1 and C2 are connected in series to form node Q 404, while FeCaps C1' and C2' are connected in series to form node QB 405, where a data bit is written into node Q and stored in FeCaps C1 and C2 via bit line BL, and an inverse of the data bit is written into node QB and stored in FeCaps C1' and C2' via inverse bit line BLB. Sense amp 410 is coupled to node Q and to node QB and is configured to sense a difference in voltage appearing on nodes Q, QB when the bitcell is read. The four transistors in sense amp 410 are configured as two cross coupled inverters to form a latch. Pass gate 402 is configured to couple node Q to bit line BL and pass gate 403 is configured to couple node QB to bit line BLB. Each pass gate 402, 403 is implemented using a PMOS device and an NMOS device connected in parallel. This arrangement reduces the voltage drop across the pass gate during a write operation so that nodes Q, QB are presented with a higher voltage during writes and thereby a higher polarization is imparted to the FeCaps. Plate line 1 (PL1) is coupled to FeCaps C1 and C1' and plate line 2 (PL2) is coupled to FeCaps C2 and C2'. The plate lines are used to provide biasing to the FeCaps during reading and writing operations. Alternatively, in another embodiment the CMOS pass gates can be replaced with NMOS pass gates that use a pass gate enable that has a voltage higher than VDDL. The magnitude of the higher voltage must be larger than the usual NMOS Vt in order to pass an undegraded signal from the bitcell Q/QB nodes to/from the bit lines BL/BLB (i.e., the pass gate control voltage must be > VDDL + Vt). [0059] Typically, there will be an array of bit cells 400. There may then be multiple columns of similar bitcells to form an n row by m column array. For example, in SoC 100, the NVL arrays are 8 x 32; however, as discussed earlier, different configurations may be implemented. [0060] FIGS. 5 and 6 are timing diagrams illustrating read and write waveforms for reading a data value of logical 0 and writing a data value of logical 0, respectively. Reading and writing to the NVL array is a multi-cycle procedure that may be controlled by the NVL controller and synchronized by the NVL clock. In another embodiment, the waveforms may be sequenced by fixed or programmable delays starting from a trigger signal, for example. During regular operation, a typical 4C-6T bitcell is susceptible to time dependent dielectric breakdown (TDDB) due to a constant DC bias across the FeCaps on the side storing a "1". In a differential bitcell, since an inverted version of the data value is also stored, one side or the other will always be storing a "1". [0061] To avoid TDDB, plate line PL1, plate line PL2, node Q and node QB are held at a quiescent low value when the cell is not being accessed, as indicated during time periods s0 in FIGS. 5, 6. Power disconnect transistors MP 411 and MN 412 allow sense amp 410 to be disconnected from power during time periods s0 in response to sense amp enable signals SAEN and SAENB. Clamp transistor MC 406 is coupled to node Q and clamp transistor MC 407 is coupled to node QB.
Clamp transistors 406, 407 are configured to clamp the Q and QB nodes to a voltage that is approximately equal to the low logic voltage on the plate lines in response to clear signal CLR during non-access time periods s0, which in this embodiment is 0 volts (the ground potential). In this manner, during times when the bit cell is not being accessed for reading or writing, no voltage is applied across the FeCaps and therefore TDDB is essentially eliminated. The clamp transistors also serve to prevent any stray charge buildup on nodes Q and QB due to parasitic leakage currents. Buildup of stray charge may cause the voltage on Q or QB to rise above 0 V, leading to a voltage differential across the FeCaps between Q or QB and PL1 and PL2. This can lead to unintended depolarization of the FeCap remnant polarization and could potentially corrupt the logic values stored in the FeCaps. [0062] In this embodiment, Vdd is 1.5 volts and the ground reference plane has a value of 0 volts. A logic high has a value of approximately 1.5 volts, while a logic low has a value of approximately 0 volts. Other embodiments that use logic levels that are different from ground for logic 0 (low) and Vdd for logic 1 (high) would clamp nodes Q, QB to a voltage corresponding to the quiescent plate line voltage so that there is effectively no voltage across the FeCaps when the bitcell is not being accessed. [0063] In another embodiment, two clamp transistors may be used. Each of these two transistors is used to clamp the voltage across each FeCap to be no greater than one transistor Vt (threshold voltage). Each transistor is used to short out the FeCaps. In this case, for the first transistor, one terminal connects to Q and the other connects to PL1, while for the second transistor, one terminal connects to Q and the other connects to PL2. The transistors can be either NMOS or PMOS, but NMOS is more likely to be used. [0064] Typically, a bit cell in which the two transistor solution is used does not consume significantly more area than the one transistor solution. The single transistor solution assumes that PL1 and PL2 will remain at the same ground potential as the local VSS connection to the single clamp transistor, which is normally a good assumption. However, noise or other problems may occur (especially during power up) that might cause PL1 or PL2 to glitch or have a DC offset between the PL1/PL2 driver output and VSS for brief periods; therefore, the two transistor design may provide a more robust solution. [0065] To read bitcell 400, plate line PL1 is switched from low to high while keeping plate line PL2 low, as indicated in time period s2. This induces voltages on nodes Q, QB whose values depend on the capacitor ratios between C1-C2 and C1'-C2' respectively. The induced voltage in turn depends on the remnant polarization of each FeCap that was formed during the last data write operation to the FeCaps in the bit cell. The remnant polarization in effect "changes" the effective capacitance value of each FeCap, which is how FeCaps provide nonvolatile storage. For example, when a logic 0 was written to bitcell 400, the remnant polarization of C2 causes it to have a lower effective capacitance value, while the remnant polarization of C1 causes it to have a higher effective capacitance value. Thus, when a voltage is applied across C1-C2 by switching plate line PL1 high while holding plate line PL2 low, the resultant voltage on node Q conforms to equation (1).
V(Q) = V(PL1) × C1 / (C1 + C2) (1)

A similar equation holds for node QB, but with the roles of the remnant polarization of C1' and C2' reversed, so that the resultant voltages on nodes Q and QB provide a differential representation of the data value stored in bit cell 400, as illustrated at 502, 503 in FIG. 5. [0066] The local sense amp 410 is then enabled during time period s3. After sensing the differential values 502, 503, sense amp 410 produces a full rail signal 504, 505. The resulting full rail signal is transferred to the bit lines BL, BLB during time period s4 by asserting the transfer gate enable signals PASS, PASSB to enable transfer gates 402, 403 and thereby transfer the full rail signals to an output latch, responsive to latch enable signal LAT_EN, that is located in the periphery of NVL array 110, for example. [0067] FIG. 6 is a timing diagram illustrating writing a logic 0 to bit cell 400. The write operation begins by raising both plate lines to Vdd during time period s1. This is called the primary storage method. The signal transitions on PL1 and PL2 are capacitively coupled onto nodes Q and QB, effectively pulling both storage nodes almost all the way to VDD (1.5 V). Data is provided on the bit lines BL, BLB and the transfer gates 402, 403 are enabled by the pass signal PASS during time periods s2-s4 to transfer the data bit and its inverse value from the bit lines to nodes Q, QB. Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time periods s3, s4 to provide additional drive after the write data drivers have forced an adequate differential on Q/QB during time period s2. However, to avoid a short from the sense amp to the 1.2 V driver supply, the write data drivers are turned off at the end of time period s2, before the sense amp is turned on during time periods s3, s4. In an alternative embodiment, called the secondary store method, write operations hold PL2 at 0 V or ground throughout the data write operation. This can save power during data write operations, but reduces the resulting read signal margin by 50% as C2 and C2' no longer hold data via remnant polarization and only provide a linear capacitive load to the C1 and C1' FeCaps. [0068] Key states, such as PL1 high to SAEN high during s2 and the SAEN high pulse during s3 during read, and the FeCap DC bias states s3-s4 during write, can selectively be made multi-cycle to provide higher robustness without slowing down the NVL clock. [0069] For FeCap based circuits, reading data from the FeCaps may partially depolarize the capacitors. For this reason, reading data from FeCaps is considered destructive in nature; i.e., reading the data may destroy the contents of the FeCaps or, at a minimum, reduce the integrity of the data. For this reason, if the data contained in the FeCaps is expected to remain valid after a read operation has occurred, the data must be written back into the FeCaps. [0070] In certain applications, specific NVL arrays may be designated to store specific information that will not change over a period of time. For example, certain system states can be saved as a default return state where returning to that state is preferable to a full reboot of the device. The reboot and configuration process for a state of the art ultra low power SoC can take 1000 - 10000 clock cycles or more to reach the point where control is handed over to the main application code thread. This boot time becomes critical for energy harvesting applications in which power is intermittent, unreliable, and limited in quantity.
The time and energy cost of rebooting can consume most or all of the energy available for computation, preventing programmable devices such as MCUs from being used in energy harvesting applications. An example application would be energy harvesting light switches. The energy harvested from the press of the button on the light switch represents the entire energy available to complete the following tasks: 1) determine the desired function (on/off or dimming level), 2) format the request into a command packet, and 3) wake up a radio and send the packet over an RF link to the lighting system. Known custom ASIC chips with hard coded state machines are often used for this application due to the tight energy constraints, which makes the system inflexible and expensive to change because new ASIC chips have to be designed and fabricated whenever any change is desired. A programmable MCU SoC would be a much better fit, except that the power cost of the boot process consumes most of the available energy, leaving no budget for executing the required application code. [0071] To address this concern, in one approach, at least one of the plurality of non-volatile logic element arrays is configured to store a boot state representing a state of the computing device apparatus after a given amount of a boot process is completed. The at least one non-volatile logic controller in this approach is configured to control restoration of data representing the boot state from the at least one of the plurality of non-volatile logic element arrays to corresponding ones of the plurality of volatile storage elements in response to detecting a previous system reset or power loss event for the computing device apparatus. To conserve power over a typical read/write operation for the NVL arrays, the at least one non-volatile logic controller can be configured to execute a round-trip data restoration operation that automatically writes back data to an individual non-volatile logic element after reading data from the individual non-volatile logic element, without completing separate read and write operations. [0072] An example execution of a round-trip data restoration is illustrated in FIG. 7, which illustrates a writeback operation on bitcell 400, where the bitcell is read and then written back with the same value. As illustrated, reading of data from the individual non-volatile logic element is initiated at a first time S1 by switching a first plate line PL1 high to induce a voltage on a node of a corresponding ferroelectric capacitor bit cell based on a capacitance ratio for the ferroelectric capacitors of the corresponding ferroelectric capacitor bit cell. If clamp switches are used to ground the nodes of the ferroelectric capacitors, a clear signal CLR is switched from high to low at the first time S1 to unclamp those aspects of the individual non-volatile logic element from electrical ground. At a second time S2, a sense amplifier enable signal SAEN is switched high to enable a sense amplifier to detect the voltage induced on the node and to provide an output signal corresponding to the data stored in the individual non-volatile logic element. At a third time S3, a pass line PASS is switched high to open the transfer gates to provide an output signal corresponding to the data stored in the individual non-volatile logic element.
[0072] An example execution of a round-trip data restoration is illustrated in FIG. 7, which illustrates a writeback operation on bitcell 400, where the bitcell is read and then written to the same value. As illustrated, reading of data from the individual non-volatile logic element is initiated at a first time S1 by switching a first plate line PL1 high to induce a voltage on a node of a corresponding ferroelectric capacitor bit cell based on a capacitance ratio of the ferroelectric capacitors of that bit cell. If clamp switches are used to ground the nodes of the ferroelectric capacitors, a clear signal CLR is switched from high to low at the first time S1 to unclamp those aspects of the individual non-volatile logic element from electrical ground. At a second time S2, a sense amplifier enable signal SAEN is switched high to enable a sense amplifier to detect the voltage induced on the node and to provide an output signal corresponding to the data stored in the individual non-volatile logic element. At a third time S3, a pass line PASS is switched high to open transfer gates to provide an output signal corresponding to the data stored in the individual non-volatile logic element. At a fourth time S4, a second plate line PL2 is switched high to induce a polarizing signal across the ferroelectric capacitors, writing the data stored in the individual non-volatile logic element back to the corresponding ferroelectric capacitor bit cell. To return the individual non-volatile logic element to a non-volatile storage state having the same data stored therein, at a fifth time S5 the first plate line PL1 and the second plate line PL2 are switched low, the pass line PASS is switched low at a sixth time S6, and the sense amplifier enable signal SAEN is switched low at a seventh time S7. If clamp switches are used to ground the nodes of the ferroelectric capacitors, at the seventh time the clear signal CLR is switched from low to high to clamp the aspects of the individual non-volatile logic element to electrical ground to help maintain data integrity as discussed herein. This process includes a lower total number of transitions than is needed for distinct and separate read and write operations (read, then write), which lowers the overall energy consumption.
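The round-trip restoration of [0072] is, in effect, a fixed schedule of signal edges. The following Python sketch replays that schedule for the FIG. 7 writeback; the phase grouping and the dictionary-based signal model are illustrative assumptions rather than a circuit description.

    # Software model of the FIG. 7 round-trip restore schedule (S1-S7).
    SEQUENCE = [
        ("S1", [("CLR", 0), ("PL1", 1)]),  # unclamp Q, induce the divider voltage
        ("S2", [("SAEN", 1)]),             # sense amp resolves the stored value
        ("S3", [("PASS", 1)]),             # transfer gates drive the output
        ("S4", [("PL2", 1)]),              # polarize FeCaps: automatic writeback
        ("S5", [("PL1", 0), ("PL2", 0)]),  # remove plate bias
        ("S6", [("PASS", 0)]),             # close transfer gates
        ("S7", [("SAEN", 0), ("CLR", 1)]), # disable sense amp, re-clamp to ground
    ]

    signals = {"CLR": 1, "PL1": 0, "PL2": 0, "SAEN": 0, "PASS": 0}
    for phase, edges in SEQUENCE:
        for name, level in edges:
            signals[name] = level
        print(phase, dict(signals))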
[0073] Bitcell 400 is designed to maximize the read differential across Q/QB in order to provide a highly reliable first generation of NVL products. Two FeCaps are used on each side, rather than one FeCap and a constant BL capacitance as a load, because this doubles the differential voltage that is available to the sense amp. A sense amp is placed inside the bitcell to prevent loss of differential due to charge sharing between node Q and the BL capacitance and to avoid voltage drop across the transfer gate. The sensed voltages are around VDD/2, and an HVT transfer gate takes a long time to pass them to the BL. Bitcell 400 helps achieve twice the signal margin of a regular FRAM bitcell known in the art, while not allowing any DC stress across the FeCaps. [0074] The timing of the signals shown in FIGS. 5 and 6 is for illustrative purposes. Various embodiments may use signal sequences that vary depending on the clock rate, process parameters, device sizes, etc. For example, in another embodiment, the timing of the control signals may operate as follows. During time period S1: PASS goes from 0 to 1 and PL1/PL2 go from 0 to 1. During time period S2: SAEN goes from 0 to 1, during which time the sense amp may perform level shifting as will be described later, or provides additional drive strength for a non-level shifted design. During time period S3: PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same, but are moved up one clock cycle. This sequence is one clock cycle shorter than that illustrated in FIG. 6. [0075] In another alternative, the timing of the control signals may operate as follows. During time period S1: PASS goes from 0 to 1 (BL/BLB, Q/QB are 0v and VDDL respectively). During time period S2: SAEN goes from 0 to 1 (BL/BLB, Q/QB are 0v and VDDN respectively). During time period S3: PL1/PL2 go from 0 to 1 (BL/Q is coupled above ground by PL1/PL2 and is driven back low by the SA and BL drivers). During time period S4: PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same. [0076] FIGS. 8-9 are a schematic and timing diagram illustrating another embodiment of a ferroelectric nonvolatile bit cell 800, a 2C-3T self-referencing based NVL bitcell. The previously described 4-FeCap based bitcell 400 uses two FeCaps on each side of a sense amp to get a differential read with double the margin compared to a standard 1C-1T FRAM bitcell. However, a 4-FeCap based bitcell has a larger area and may have higher variation because it uses more FeCaps. [0077] Bitcell 800 helps achieve a differential, 4-FeCap-like margin in lower area by using itself as a reference, referred to herein as self-referencing. By using fewer FeCaps, it also has lower variation than a 4-FeCap bitcell. Typically, a single sided cell needs to use a reference voltage that is in the middle of the operating range of the bitcell, which reduces the read margin by half as compared to a two sided cell. Moreover, as the circuit fabrication process drifts, the reference value may become skewed, further reducing the read margin. A self-reference scheme allows comparison of a single sided cell against itself, thereby providing a higher margin. Tests of the self-referencing cell described herein have provided at least double the margin over a fixed reference cell. [0078] Bitcell 800 has two FeCaps C1, C2 that are connected in series to form node Q 804. Plate line 1 (PL1) is coupled to FeCap C1 and plate line 2 (PL2) is coupled to FeCap C2. The plate lines are used to provide biasing to the FeCaps during reading and writing operations. Pass gate 802 is configured to couple node Q to bitline BL. Pass gate 802 is implemented using a PMOS device and an NMOS device connected in parallel. This arrangement reduces voltage drop across the pass gate during a write operation so that node Q is presented with a higher voltage during writes and thereby a higher polarization is imparted to the FeCaps. Alternatively, an NMOS pass gate may be used with a boosted word line voltage. In this case, the PASS signal would be boosted by one NFET Vt (threshold voltage). However, this may lead to reliability problems and excess power consumption. Using a CMOS pass gate adds additional area to the bit cell but improves speed and power consumption. Clamp transistor MC 806 is coupled to node Q. Clamp transistor 806 is configured to clamp the Q node, in response to clear signal CLR during non-access time periods s0, to a voltage that is approximately equal to the low logic voltage on the plate lines, which in this embodiment is 0 volts (ground). In this manner, during times when the bit cell is not being accessed for reading or writing, no voltage is applied across the FeCaps, and therefore TDDB and unintended partial depolarization are essentially eliminated.
[0079] The initial states of node Q and plate lines PL1 and PL2 are all 0, as shown in FIG. 9 at time period s0, so there is no DC bias across the FeCaps when the bitcell is not being accessed. To begin a read operation, PL1 is toggled high while PL2 is kept low, as shown during time period s1. A signal 902 develops on node Q from a capacitance ratio based on the retained polarization of the FeCaps from the last data value previously written into the cell, as described above with regard to equation (1). This voltage is stored on a read capacitor 820 external to the bitcell by passing the voltage through transfer gate 802 onto bit line BL and then through transfer gate 822 in response to a second enable signal EN1. Note that BL and the read capacitors are precharged to VDD/2 before the pass gates 802, 822, and 823 are enabled in order to minimize signal loss via charge sharing when the recovered signals on Q are transferred via BL to the read storage capacitors 820 and 821. Then, PL1 is toggled back low and node Q is discharged using clamp transistor 806 during time period s2. Next, PL2 is toggled high, keeping PL1 low, during time period s3. A new voltage 904 develops on node Q, but this time with the opposite capacitor ratio. This voltage is then stored on another external read capacitor 821 via transfer gate 823. Thus, the same two FeCaps are used to read a high as well as a low signal. Sense amplifier 810 can then determine the state of the bitcell by using the voltages stored on the external read capacitors 820, 821. [0080] Typically, there will be an array of bit cells 800. One column of bit cells 800-800n is illustrated in FIG. 8, coupled via bit line 801 to read transfer gates 822, 823. There may then be multiple columns of similar bitcells to form an n row by m column array. For example, in SoC 100, the NVL arrays are 8 x 32; however, as discussed earlier, different configurations may be implemented. The read capacitors and sense amps may be located in the periphery of the memory array, for example.
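The self-referencing read of [0079] can be summarized as two divider reads with the roles of the plate lines swapped, followed by a comparison of the two stored samples. Below is a minimal sketch, assuming that the data-dependent effective capacitances of C1 and C2 encode the stored bit; all numeric values are illustrative placeholders.

    def q_voltage(vdd, c_driven, c_grounded):
        # Node Q between the FeCap on the driven plate and the FeCap on the
        # grounded plate forms a capacitive divider.
        return vdd * c_driven / (c_driven + c_grounded)

    def self_referenced_read(c1, c2, vdd=1.5):
        v_first = q_voltage(vdd, c1, c2)   # PL1 high, PL2 low -> read cap 820
        v_second = q_voltage(vdd, c2, c1)  # PL2 high, PL1 low -> read cap 821
        return 1 if v_first > v_second else 0  # sense amp 810 compares samples

    # Hypothetical post-write effective capacitances for a stored 1 and 0.
    print(self_referenced_read(c1=2.0, c2=1.0))  # -> 1
    print(self_referenced_read(c1=1.0, c2=2.0))  # -> 0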
[0081] FIG. 10 is a block diagram illustrating NVL array 110 in more detail. Embedding non-volatile elements outside the controlled environment of a large array presents reliability and fabrication challenges. As discussed earlier with reference to FIG. 1, adding testability features to individual NVL FFs may be prohibitive in terms of area overhead. To amortize the test feature costs and improve manufacturability, SoC 100 is implemented using 256b mini-NVL arrays 110 of FeCap based bitcells dispersed throughout the logic cloud to save the state of the various flip flops 120 when power is removed. Each cloud 102-104 of FFs 120 includes an associated NVL array 110. A central NVL controller 106 controls all the arrays and their communication with FFs 120. [0082] While an NVL array may be implemented in any number of n row by m column configurations, in this example, NVL array 110 is implemented with an array 1040 of eight rows and thirty-two bit columns of bitcells. Each individual bit cell, such as bitcell 1041, is coupled to a set of control lines provided by row drivers 1042. The control signals described earlier, including plate lines (PL1, PL2), sense amp enable (SAEN), transfer gate enable (PASS), and clear (CLR), are all driven by the row drivers. There is a set of row drivers for each row of bitcells. [0083] Each individual bit cell, such as bitcell 1041, is also coupled via the bitlines to a set of input/output (IO) drivers 1044. In this implementation, there are thirty-two sets of IO drivers, such as IO driver set 1045. Each driver set produces an output signal 1047 that provides a data value when a row of bit lines is read. Each bitline runs the length of a column of bitcells and couples to an IO driver for that column. Each bitcell may be implemented as 2C-3T bitcell 800, for example. In this case, a single bitline will be used for each column, and the sense amps and read capacitors will be located in IO driver block 1044. In another implementation of NVL array 110, each bitcell may be implemented as 4C-12T bit cell 400. In this case, the bitlines will be a differential pair with two IO drivers for each column. A comparator receives the differential pair of bitlines and produces a final single data bit that is provided to the output latch. Other implementations of NVL array 110 may use other known or later developed bitcells in conjunction with the row drivers and IO drivers that will be described in more detail below. [0084] Timing logic 1046 generates timing signals that are used to control the row drivers to generate the sequence of control signals for each read and write operation. Timing logic 1046 may be implemented using either synchronous or asynchronous state machines, or another known or later developed logic technique. One potential alternative embodiment utilizes a delay chain with multiple outputs that "tap" the delay chain at desired intervals to generate control signals. Multiplexors can be used to provide multiple timing options for each control signal. Another potential embodiment uses a programmable delay generator that produces edges at the desired intervals using dedicated outputs that are connected to the appropriate control signals. [0085] FIG. 11 is a more detailed schematic of a set of input/output circuits 1150 used in the NVL array of FIG. 10. Referring back to FIG. 10, each IO set 1045 of the thirty-two drivers in IO block 1044 is similar to IO circuits 1150. I/O block 1044 provides several features to aid testability of NVL bits. [0086] Referring now to FIG. 11, a first latch (L1) 1151 serves as an output latch during a read and also combines with a second latch (L2) 1152 to form a scan flip flop. The scan output (SO) signal is routed to multiplexor 1153 in the write driver block 1158 to allow writing scanned data into the array during debug. Scan output (SO) is also coupled to the scan input (SI) of the next set of IO drivers to form a thirty-two bit scan chain that can be used to read or write a complete row of bits from NVL array 110. Within SoC 100, the scan latch of each NVL array is connected in a serial manner to form a scan chain to allow all of the NVL arrays to be accessed using the scan chain. Alternatively, the scan chain within each NVL array may be operated in a parallel fashion (N arrays will generate N chains) to reduce the number of internal scan flop bits on each chain in order to speed up scan testing. The number of chains and the number of NVL arrays per chain may be varied as needed. Typically, all of the storage latches and flipflops within SoC 100 include scan chains to allow complete testing of SoC 100. Scan testing is well known and does not need to be described in more detail herein. In this embodiment, the NVL chains are segregated from the logic chains on a chip so that the chains can be exercised independently and NVL arrays can be tested without any dependencies on logic chain organization, implementation, or control. The maximum total length of the NVL scan chains will always be less than the total length of the logic chains, since the NVL chain length is reduced by a divisor equal to the number of rows in the NVL arrays. In the current embodiment, there are 8 entries per NVL array, so the total length of the NVL scan chains is 1/8th the total length of the logic scan chains. This reduces the time required to access and test NVL arrays and thus reduces test cost. Also, it eliminates the need to determine the mapping between logic flops, their position on logic scan chains, and their corresponding NVL array bit locations (identifying the array, row, and column location), greatly simplifying NVL test, debug, and failure analysis.
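The scan-chain length reduction claimed above is simple arithmetic: because one NVL scan bit serves one array entry rather than one logic flop, the NVL chain is shorter than the logic chain by a factor equal to the number of rows per array. A quick check in Python, using the flip-flop count reported later for SoC 2000 as an example input:

    def nvl_chain_length(total_logic_flops, rows_per_array):
        # Each NVL scan bit covers a column entry that serves rows_per_array
        # logic flops, so the chain shrinks by a factor of rows_per_array.
        return -(-total_logic_flops // rows_per_array)  # ceiling division

    print(nvl_chain_length(2537, 8))  # ~318 NVL scan bits vs. 2537 logic bits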
[0087] While scan testing is useful, it does not provide a good mechanism for production testing of SoC 100, since it may take a significant amount of time to scan in hundreds or thousands of bits for testing the various NVL arrays within SoC 100. This is because there is no direct access to bits within the NVL array. Each NVL bitcell is coupled to an associated flip-flop and is only written to by saving the state of the flip-flop. Thus, in order to load a test pattern into an NVL array from the associated flipflops, the corresponding flipflops must be set up using a scan chain. Determining which bits on a scan chain have to be set or cleared in order to control the contents of a particular row in an NVL array is a complex task, as the connections are made based on the physical location of arbitrary groups of flops on a silicon die and not based on any regular algorithm. As such, the mapping of flops to NVL locations need not be controlled and is typically somewhat random. [0088] An improved testing technique is provided within IO drivers 1150. NVL controller 106, referring back to FIG. 1, has state machine(s) to perform fast pass/fail tests for all NVL arrays on the chip to screen out bad dies. In one such approach, at least one non-volatile logic controller is configured to control a built-in-self-test mode where all zeros or all ones are written to at least a portion of an NVL array of the plurality of NVL arrays and then it is determined whether the data read from the at least the portion of the NVL array is all ones or all zeros. This is done by first writing all 0's or 1's to a row using the all 0/1 write driver 1180, applying an offset disturb voltage (V_OFF), and then reading the same row using parallel read test logic 1170. Signal corr_1 from AND gate G1 goes high if the data output signal (OUT) from data latch 1151 is high and the corr_1 signal from the adjacent column's IO driver's parallel read test logic AND gate G1 is high. In this manner, the G1 AND gates of the thirty-two sets of I/O blocks 1150 in NVL array 110 implement a large 32-input AND gate that tells the NVL controller whether all outputs are high for the selected row of NVL array 110. OR gate G0 does the same for reading 0's. In this manner, the NVL controller may instruct all of the NVL arrays within SoC 100 to simultaneously perform an all-ones write to a selected row, and then instruct all of the NVL arrays to simultaneously read the selected row and provide a pass/fail indication using only a few control signals, without transferring any explicit test data from the NVL controller to the NVL arrays. In typical memory array BIST (Built In Self Test) implementations, the BIST controller must have access to all memory output values so that each output bit can be compared with the expected value. Given that there are many thousands of logic flops on typical silicon SoC chips, the total number of NVL array outputs can also measure in the thousands, and it would be impractical to test these arrays using normal BIST logic circuits due to the large number of data connections and data comparators required. The NVL test method can then be repeated eight times for NVL arrays having eight rows (the number of repetitions will vary according to the array organization; for example, a 10 entry NVL array implementation would repeat the test method 10 times), so that all of the NVL arrays in SoC 100 can be tested for correct all-ones operation in only eight write cycles and eight read cycles. Similarly, all of the NVL arrays in SoC 100 can be tested for correct all-zeros operation in only eight write cycles and eight read cycles. The results of all of the NVL arrays may be condensed into a single signal indicating pass or fail by an additional AND gate and OR gate that receive the corr_0 and corr_1 signals from each of the NVL arrays and produce a single corr_0 and corr_1 signal, or the NVL controller may look at each individual corr_0 and corr_1 signal.
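The chained G0/G1 gates reduce a 32-bit row to two pass/fail flags, so the controller never needs the raw data. A behavioral Python model of that reduction follows; the flag polarity shown (corr_1 true only for an all-ones row, the corr_0 wire staying low only for an all-zeros row) matches the AND/OR chain described above, though the exact signal polarities in silicon are an assumption here.

    def row_flags(outputs):
        # Chained G1 AND gates: corr_1 is 1 only if every column read 1.
        corr_1 = int(all(outputs))
        # Chained G0 OR gates: the wire stays 0 only if every column read 0.
        corr_0 = int(any(outputs))
        return corr_0, corr_1

    print(row_flags([1] * 32))        # (1, 1): all-ones row passes the ones test
    print(row_flags([0] * 32))        # (0, 0): all-zeros row passes the zeros test
    print(row_flags([1] * 31 + [0]))  # (1, 0): one flipped bit fails the ones test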
[0089] The all 0/1 write driver 1180 includes PMOS devices M1, M3 and NMOS devices M2, M4. Devices M1 and M2 are connected in series to form a node that is coupled to the bitline BL, while devices M3 and M4 are connected in series to form a node that is coupled to the inverse bitline BLB. Control signal "all_1_A" and its inverse "all_1_B" are generated by NVL controller 106. When asserted during a write cycle, they activate devices M1 and M4 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 1. Similarly, control signal "all_0_A" and its inverse "all_0_B" are generated by NVL controller 106. When asserted during a write cycle, they activate devices M2 and M3 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 0. In this manner, the thirty-two drivers are operable to write all ones into a row of bit cells in response to one control signal and to write all zeros into a row of bit cells in response to another control signal. One skilled in the art can easily design other circuit topologies to accomplish the same task; the current embodiment is preferred because it requires only 4 transistors to accomplish the required data writes. [0090] During a normal write operation, write driver block 1158 receives a data bit value to be stored on the data-in signal. Write drivers 1156, 1157 couple complementary data signals to bitlines BL, BLB and thereby to the selected bit cell. Write drivers 1156, 1157 are enabled by the write enable signal STORE. [0091] FIG. 12A is a timing diagram illustrating an offset voltage test during a read cycle. To apply a disturb voltage to a bitcell, state s1 is modified during a read. This figure illustrates a voltage disturb test for reading a data value of "0" (node Q); a voltage disturb test for a data value of "1" is similar, but injects the disturb voltage onto the opposite side of the sense amp (node QB). Thus, the disturb voltage in this embodiment is injected onto the low voltage side of the sense amp based on the logic value being read. Transfer gates 1154, 1155 are coupled to the bit lines BL, BLB. A digital to analog converter, not shown (which may be on-chip, or off-chip in an external tester, for example), is programmed by NVL controller 106, by an off-chip test controller, or via an external production tester to produce a desired amount of offset voltage V_OFF. NVL controller 106 may assert the Vcon control signal for the bitline side storing a "0" during the s1 time period to thereby enable Vcon transfer gate 1154 or 1155, discharge the other bitline using M2/M4 during s1, and assert control signal PASS during s1 to turn on transfer gates 402, 403. This initializes the voltage on node Q/QB of the "0" storing side to offset voltage V_OFF, as shown at 1202. This pre-charged voltage lowers the differential available to the SA during s3, as indicated at 1204, and thereby pushes the bitcell closer to failure. For fast production testing, V_OFF may be set to a required margin value, and the pass/fail test using gates G0 and G1 may then be used to screen out any failing die. [0092] FIG. 12B illustrates a histogram generated during a sweep of the offset voltage. Bit level failure margins can be studied by sweeping V_OFF and scanning out the read data bits using a sequence of read cycles, as described above. In this example, the worst case read margin is 550mv, the mean value is 597mv, and the standard deviation is 22mv. In this manner, the operating characteristics of all bit cells in each NVL array on an SoC may be easily determined.
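The V_OFF sweep of [0092] yields exactly the kind of per-bit margin population summarized in FIG. 12B. Below is a small sketch of that post-processing step, using made-up margin samples rather than measured silicon data:

    def margin_stats(margins_mv):
        # Reduce a per-bit read-margin sweep to worst case, mean, and sigma.
        n = len(margins_mv)
        mean = sum(margins_mv) / n
        sigma = (sum((m - mean) ** 2 for m in margins_mv) / n) ** 0.5
        return min(margins_mv), mean, sigma

    samples = [550, 575, 590, 597, 605, 615, 630]  # hypothetical margins in mV
    print(margin_stats(samples))  # (worst-case, mean, standard deviation)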
[0093] As discussed above, embedding non-volatile elements outside the controlled environment of a large array presents reliability and fabrication challenges. The NVL bitcell should be designed for maximum read signal margin and in-situ testability, as is needed for any NV-memory technology. However, an NVL implementation cannot rely on SRAM-like built in self test (BIST) because the NVL arrays are distributed inside the logic cloud. The NVL implementation described above includes NVL arrays controlled by a central NVL controller 106. While screening a die for satisfactory behavior, NVL controller 106 runs a sequence of steps that are performed on-chip without any external tester intervention. The tester only needs to issue a start signal and apply an analog voltage which corresponds to the desired signal margin. The controller first writes all 0s or 1s to all bits in the NVL array. It then starts reading the array one row at a time. The NVL array read operations do not necessarily immediately follow the NVL array write operations. Often, high temperature bake cycles are inserted between data write operations and data read operations in order to accelerate time and temperature dependent failure mechanisms, so that defects that would impact long term data retention can be screened out during manufacturing related testing. As described above in more detail, the array contains logic that ANDs and ORs all outputs of the array. These two signals are sent to the controller. Upon reading each row, the controller looks at the two signals from the array and, based on knowledge of what it previously wrote, decides whether the data read was correct in the presence of the disturb voltage. If the data is incorrect, it issues a fail signal to the tester, at which point the tester can eliminate the die. If the row passes, the controller moves on to the next row in the array. All arrays can be tested in parallel at the normal NVL clock frequency. This enables high speed on-chip testing of the NVL arrays, with the tester only issuing a start signal and providing the desired read signal margin voltage while the NVL controller reports pass at the end of the built-in testing procedure or generates a fail signal whenever the first failing row is detected. Fails are reported immediately so the tester can abort the test procedure at the point of first failure rather than waste additional test time on the remaining rows. This is important, as test time, and thus test cost, for all non-volatile memories (NVM) often dominates the overall test cost for an SoC with embedded NVM. If the NVL controller activates the "done" signal and the fail signal has not been activated at any time during the test procedure, the die undergoing testing has passed the required tests. [0094] For further failure analysis, the controller may also have a debug mode. In this mode, the tester can specify an array and row number, and the NVL controller can then read or write just that row. The read contents can be scanned out using the NVL scan chain. This method provides read or write access to any NVL bit on the die without CPU intervention and without requiring the use of long, complicated SoC scan chains in which the mapping of NVL array bits to individual flops is random. Further, this can be done in concert with applying an analog voltage for read signal margin determination, so exact margins for individual bits can be measured.
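The screening flow of [0093] amounts to: write a solid pattern everywhere, then read row by row under the disturb voltage and abort at the first failure. A Python sketch of that control flow follows; the FakeArray class and its method names are invented stand-ins for the on-chip array interface, and the ideal cells here never fail.

    class FakeArray:
        rows, cols = 8, 32
        def __init__(self):
            self.mem = [[0] * self.cols for _ in range(self.rows)]
        def write_all(self, value):
            self.mem = [[value] * self.cols for _ in range(self.rows)]
        def read_row(self, row, disturb_v):
            return list(self.mem[row])  # an ideal cell ignores the disturb voltage

    def screen_die(arrays, disturb_v, value):
        for a in arrays:
            a.write_all(value)  # optional bake cycles would come after this
        for a in arrays:        # on silicon, all arrays run this in parallel
            for row in range(a.rows):
                if a.read_row(row, disturb_v) != [value] * a.cols:
                    return "fail"  # reported immediately so the tester can abort
        return "pass"

    print(screen_die([FakeArray() for _ in range(10)], disturb_v=0.55, value=1))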
[0095] These capabilities help make NVL practical because, without testability features, it would be risky to use non-volatile logic elements in a product. Further, pass/fail testing on-die with minimal tester interaction reduces test time and thereby cost. [0096] NVL implementation using mini-arrays distributed in the logic cloud means that a sophisticated error detection method like ECC would require a significant number of additional memory columns and significant control logic on a per array basis, which could be prohibitive from an area standpoint. However, in order to provide an enhanced level of reliability, the NVL arrays of SoC 100 may include parity protection as a low cost error detection method, as will now be described in more detail. [0097] FIG. 13 is a schematic illustrating parity generation in NVL array 110 for an example NVL array having thirty-two columns of bits (0:31), in which the input data value DATA_IN is exclusive-ored with the output of a similar XOR gate of the previous column's IO driver. Each IO driver section of the NVL array, such as section 1350, may contain an XOR gate 1160, referring again to FIG. 11A. During a row write, the output of the XOR gate 1160 in column 30 is the overall parity value of the row of data being written into bit columns 0:30 and is used to write the parity value into the last column by feeding its output to the data input of column 31 of the NVL mini-array, shown as XOR_IN in FIG. 11B. [0098] In a similar manner, during a read, XOR gate 1160 exclusive-ors the data value DATA_OUT from read latch 1151, via mux 1161 (see FIG. 11), with the output of a similar XOR gate of the previous column's IO driver. The output of the XOR gate 1160 in bit column 30 is the overall parity value for the row of data read from bit columns 0:30 and is compared to the parity value read from bit column 31 in parity error detector 1370. If the overall parity value determined from the read data does not match the parity bit read from column 31, then a parity error is declared. [0099] When a parity error is detected, it indicates that the stored FF state values are not trustworthy. Since the NVL array is typically read when the SoC is restarting operation after being in a power-off state, detection of a parity error indicates that a full boot operation needs to be performed in order to regenerate the correct FF state values. [00100] However, if the FF state was not properly stored prior to turning off the power, or if this is a brand new device, for example, then an indeterminate condition may exist. For example, if the NVL array is empty, then typically all of the bits may have a value of zero, or they may all have a value of one. In the case of all zeros, the parity value generated for all zeros would be zero, which would match the parity bit value of zero. Therefore, the parity test would incorrectly indicate that the FF state was correct and that a boot operation is not required, when in fact it would be required. In order to prevent this occurrence, an inverted version of the parity bit may be written to column 31 by bit line driver 1365, for example. Referring again to FIG. 11A, note that while bit line driver 1156 for columns 0-30 also inverts the input data bits, mux 1153 inverts the data-in bits when they are received, so the result is that the data in columns 0-30 is stored un-inverted. In another embodiment, the data bits may be inverted and the parity bit not inverted, for example.
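The inverted-parity scheme is compact enough to capture in a few lines. The sketch below writes a 31-bit row plus an inverted parity bit in column 31 and checks it on read; note how an erased all-zeros row now correctly fails, which is the behavior [00100] is designed to guarantee. The function names are illustrative only.

    def store_row(data31):
        parity = 0
        for bit in data31:
            parity ^= bit
        return data31 + [parity ^ 1]  # column 31 holds the INVERTED parity

    def row_is_valid(row32):
        parity = 0
        for bit in row32[:31]:
            parity ^= bit
        return (row32[31] ^ 1) == parity

    print(row_is_valid(store_row([0] * 31)))  # True: genuinely stored state
    print(row_is_valid([0] * 32))             # False: empty row is rejected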
[00101] In the case of all ones, if there is an even number of columns, then the calculated parity would equal zero, and an inverted value of one would be stored in the parity column. Therefore, an NVL array with an even number of data columns holding all ones would not detect a parity error. In order to prevent this occurrence, NVL array 110 is constrained to have an odd number of data columns. For example, in this embodiment, there are thirty-one data columns and one parity column, for a total of thirty-two bitcell columns. [00102] In some embodiments, when an NVL read operation occurs, control logic for the NVL array causes the parity bit to be read, inverted, and written back. This allows the NVL array to detect when prior NVL array writes were incomplete or invalid/damaged. Remnant polarization is not completely wiped out by a single read cycle; typically, it takes 5-15 read cycles to fully depolarize the FeCaps or to corrupt the data enough to reliably trigger an NVL read parity error. For example, if only four out of eight NVL array rows were written during the last NVL store operation due to loss of power, this would most likely result in an incomplete capture of the prior machine state. However, because of remnant polarization, the four rows that were not written in the most recent state storage sequence will likely still contain stale data from further back in time, such as two NVL store events ago, rather than data from the most recent NVL data store event. The parity and stale data from the four rows will likely be read as valid data rather than invalid data. This is highly likely to cause the machine to lock up or crash when the machine state is restored from the NVL arrays during the next wakeup/power-up event. Therefore, by writing back the parity bit inverted after every entry is read, each row of stale data is essentially forcibly invalidated. [00103] Writing data back to NVL entries is power intensive, so it is preferable not to write data back to all bits, just the parity bit. The current embodiment of the array disables the PL1, PL2, and sense amp enable signals for all non-parity bits (i.e., data bits) to minimize the parasitic power consumption of this feature. [00104] In this manner, each time the SoC transitions from a no-power state to a power-on state, a valid determination can be made as to whether the data being read from the NVL arrays contains valid FF state information. If a parity error is detected, then a boot operation can be performed in place of restoring FF state from the NVL arrays.
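The stale-row defense of [00102]-[00103] can be modeled the same way: after a row is restored, only its parity column is rewritten, inverted, so any copy of the row that is not refreshed by a later store fails parity at the next wakeup. A self-contained sketch, with the row contents chosen arbitrarily:

    def parity(bits):
        p = 0
        for bit in bits:
            p ^= bit
        return p

    def restore_row(row32):
        # Read the data out, then write back ONLY the parity bit, inverted,
        # so this copy of the row is forcibly invalidated.
        return row32[:31], row32[:31] + [row32[31] ^ 1]

    row = [1, 0] * 15 + [1]
    row += [parity(row[:31]) ^ 1]                 # stored with inverted parity
    data, stale = restore_row(row)
    print((stale[31] ^ 1) == parity(stale[:31]))  # False: stale row now invalid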
[00105] Referring back to FIG. 1, low power SoC 100 has multiple voltage and power domains, such as VDDN_FV and VDDN_CV for the NVL arrays, VDDR for the sleep mode retention latches and well supplies, and VDDL for the bulk of the logic blocks that form the system microcontroller, various peripheral devices, SRAM, ROM, etc., as described earlier with regard to Table 1 and Table 2. FRAM has internal power switches and is connected to the always-on supply VDDZ. In addition, the VDDN_FV domain may be designed to operate at one voltage, such as the 1.5 volts needed by the FeCap bit cells, while the VDDL and VDDN_CV domains may be designed to operate at a lower voltage to conserve power, such as 0.9 - 1.5 volts, for example. Such an implementation requires using power switches 108, level conversion, and isolation in appropriate areas. Aspects of the isolation and level conversion needed with respect to NVL blocks 110 will now be described in more detail. The circuits are designed such that VDDL/VDDN_CV can be any valid voltage less than or equal to VDDN_FV and the circuit will function correctly. [00106] FIG. 14 is a block diagram illustrating the power domains within NVL array 110. Various blocks of logic and memory may be arranged as illustrated in Table 3 (example full chip power domains). [00107] Power domains VDDL, VDDN_CV, VDDN_FV, and VDDR described in Table 3 are each controlled using a separate set of power switches, such as switches 108 described earlier. However, isolation may be needed for some conditions. Data output buffers within IO buffer block 1044 are in the NVL logic power domain VDDN_CV and therefore may remain off while domain VDDR (or VDDL, depending on the specific implementation) is on during normal operation of the chip. ISO-low isolation is implemented to tie all such signals to ground during such a situation. While VDDN_CV is off, logic connected to data outputs in the VDDR (or VDDL) domain in the random logic area may generate short circuit current between power and ground in internal circuits if any signals from the VDDN_CV domain are floating (not driven when the VDDN_CV domain is powered down) and are not isolated. The same is applicable for the corr_0/1 outputs and the scan out output of the NVL arrays. The general idea is that any outputs of the NVL array will be isolated when the NVL array has no power given to it. In case there is always-on logic present in the chip, all signals going from VDDL or VDDN_CV to VDD must be isolated using input isolation at the VDD domain periphery. Additional built-in isolation exists in NVL flops at the ND input. Here, the input goes to a transmission gate whose control signal NU is driven by an always-on signal. When the input is expected to be indeterminate, NU is made low, thereby disabling the ND input port. Similar built-in isolation exists on the data inputs and scan-in of the NVL array. This isolation is needed during NVL restore when VDDL is off. Additionally, signal NU and the NVL data input multiplexor enable signals (mux_sel) must be buffered only in the VDDR domain. The same applies for the retention enable signal. [00108] To enable the various power saving modes of operation, the VDDL and VDDN* domains are shut off at various times, and isolation makes that possible without burning short circuit current. [00109] Level conversion from the lower voltage VDDL domain to the higher voltage VDDN domain is needed on the control inputs of the NVL arrays that go to the NVL bitcells, such as the row enables, PL1, PL2, restore, recall, and clear, for example. This enables a reduction in system power dissipation by allowing blocks of SoC logic and NVL logic gates that can operate at a lower voltage to do so. For each row of bitcells in bitcell array 1040, there is a set of word line drivers 1042 that drive the signals for that row of bitcells, including plate lines PL1, PL2, transfer gate enable PASS, sense amp enable SAEN, clear enable CLR, and voltage margin test enable VCON, for example. The bitcell array 1040 and the wordline circuit block 1042 are supplied by VDDN. Level shifting on input signals to 1042 is handled by dedicated level shifters (see FIG. 15), while level shifting on inputs to the bitcell array 1040 is handled by special sequencing of the circuits within the NVL bitcells, without adding any additional dedicated circuits to the array datapath or bitcells.
[00110] FIG. 15 is a schematic of a level converter 1500 for use in NVL array 110. FIG. 15 illustrates one wordline driver that may be part of the set of wordline drivers 1042. Level converter 1500 includes PMOS transistors P1, P2 and NMOS transistors N1, N2 that are formed in region 1502 in the 1.5 volt VDDN domain for wordline drivers 1042. However, the control logic in timing and control module 1046 is located in region 1503 in the 1.2v VDDL domain (1.2v is used to represent the variable VDDL core supply, which can range from 0.9v to 1.5v). The 1.2 volt signal 1506 is representative of any of the row control signals generated by control module 1046 for use in accessing NVL bitcell array 1040. Inverter 1510 forms a complementary pair of control signals 1511, 1512 in region 1503 that are then routed to transistors N1 and N2 in level converter 1500. In operation, when the 1.2 volt signal 1506 goes high, NMOS device N1 pulls the gate of PMOS device P2 low, which causes P2 to pull signal 1504 up to 1.5 volts. Similarly, when the 1.2 volt signal 1506 goes low, complementary signal 1512 causes NMOS device N2 to pull the gate of PMOS device P1 low, which pulls up the gate of PMOS device P2 and allows signal 1504 to go low, to approximately zero volts. The NMOS devices must be stronger than the PMOS devices so the converter doesn't get stuck. In this manner, level shifting may be done across the voltage domains, and power may be saved by placing the control logic, including inverter 1510, in the lower voltage domain 1503. For each signal, the controller is coupled to each level converter 1500 by two complementary control signals 1511, 1512. [00111] FIG. 16 is a timing diagram illustrating operation of level shifting using a sense amp within a ferroelectric bitcell. Input data provided to NVL array 110 from multiplexor 212 (referring again to FIG. 2) also needs to be level shifted from the 1.2v VDDL domain to the 1.5 volts needed for best operation of the FeCaps in the 1.5 volt VDDN domain during write operations. This may be done using the sense amp of bit cell 400, for example. Referring again to FIG. 4 and to FIG. 13, note that each bit line BL, such as BL 1352, which comes from the 1.2 volt VDDL domain, is coupled to transfer gate 402 or 403 within bitcell 400. Sense amp 410 operates in the 1.5v VDDN power domain. Referring now to FIG. 16, note that during time period s2, data is provided on the bit lines BL, BLB and the transfer gates 402, 403 are enabled by the pass signal PASS to transfer the data bit and its inverse value from the bit lines to differential nodes Q, QB. However, as shown at 1602, the transferred voltage level is limited to less than the 1.5 volt level because the bit line drivers are located in the 1.2v VDDL domain. [00112] Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time periods s3, s4 to provide additional drive, as illustrated at 1604, after the write data drivers, such as write drivers 1156, 1157, have forced adequate differential 1602 on Q/QB during time period s2. Since the sense amp is supplied by the higher voltage (VDDN), the sense amp responds to the differential established across it by the write data drivers and clamps the logic 0 side of the sense amp to VSS (Q or QB) while the other side, containing the logic 1, is pulled up to the VDDN voltage level. In this manner, the existing NVL array hardware is reused to provide a voltage level shifting function during NVL store operations.
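Behaviorally, the FIG. 15 converter maps a 1.2 V complementary control pair onto a full VDDN swing. The toy model below captures only that input/output behavior (including the illegal non-complementary drive, which a real cross-coupled pair would resolve by holding its previous state); it is an illustration, not a circuit simulation, and the function name is invented.

    def level_converter(ctrl, ctrl_b, vddn=1.5):
        # ctrl/ctrl_b are the complementary 1.2 V signals 1511/1512 produced
        # by inverter 1510; the cross-coupled PMOS pair restores a VDDN swing.
        if ctrl == ctrl_b:
            return None  # illegal drive; the real latch would hold its state
        return vddn if ctrl else 0.0

    for c in (0, 1):
        print(c, "->", level_converter(c, 1 - c))  # 0 -> 0.0, 1 -> 1.5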
[00113] However, to avoid a short from the sense amp to the 1.2v driver supply, the write data drivers are isolated from the sense amp at the end of time period s2 before the sense amp is turned on during time periods s3, s4. This may be done by turning off the bit line drivers by de-asserting the STORE signal after time period s2 and/or by disabling the transfer gates by de-asserting PASS after time period s2. [00114] Using the above described arrangements, various configurations are possible to maximize power savings or usability at various points in a processing or computing device's operation cycle. In one such approach, a computing device can be configured to operate continuously across a series of power interruptions without loss of data or reboot. With reference to the example illustrated in FIG. 17, a processing device 1700 as described above includes a plurality of non-volatile logic element arrays 1710, a plurality of volatile storage elements 1720, and at least one non-volatile logic controller 1730 configured to control the plurality of non-volatile logic element arrays 1710 to store a machine state represented by the plurality of volatile storage elements 1720 and to read out a stored machine state from the plurality of non-volatile logic element arrays 1710 to the plurality of volatile storage elements 1720. A voltage or current detector 1740 is configured to sense power quality from an input power supply 1750. [00115] A power management controller 1760 is in communication with the voltage or current detector 1740 to receive information regarding the power quality from the voltage or current detector 1740. The power management controller 1760 is also configured to be in communication with the at least one non-volatile logic controller 1730 to provide information effecting storing the machine state to, and restoration of the machine state from, the plurality of non-volatile logic element arrays 1710. [00116] A voltage regulator 1770 is connected to receive power from the input power supply 1750 and provide power to an output power supply rail 1755 configured to provide power to the processing device 1700. The voltage regulator 1770 is further configured to be in communication with the power management controller 1760 and to disconnect the output power supply rail 1755 from the input power supply 1750, such as through control of a switch 1780, in response to a determination that the power quality is below a threshold. [00117] The power management controller 1760 and the voltage or current detector 1740 work together with the at least one non-volatile logic controller 1730 and the voltage regulator 1770 to manage the data backup and restoration processes independently of the primary computing path. In one such example, the power management controller 1760 is configured to send a signal to effect stoppage of the clocks for the processing device 1700 in response to the determination that the power quality is below the threshold. The voltage regulator 1770 can then send a disconnect signal to the power management controller 1760 in response to disconnecting the output power supply rail 1755 from the input power supply 1750. The power management controller 1760 sends a backup signal to the at least one non-volatile logic controller 1730 in response to receiving the disconnect signal. Upon completion of the backup of the system state into the NVL arrays, the power can be removed from the SoC, or can continue to degrade without further concern for loss of machine state.
[00118] The individual elements that make the determination of power quality can vary in different approaches. For instance, the voltage regulator 1770 can be configured to detect the power quality rising above the threshold and, in response, send a good power signal to the power management controller 1760. In response, the power management controller 1760 is configured to send a signal to provide power to the plurality of non-volatile logic element arrays 1710 and the at least one non-volatile logic controller 1730 to facilitate restoration of the machine state. The power management controller 1760 is configured to determine that power up is complete and, in response, send a signal to effect release of the clocks for the processing device 1700, whereupon the processing device 1700 resumes operation from the machine state that existed prior to the determination that the power quality was below the threshold. [00119] To assure that the processing device 1700 has enough power to complete a backup process, a charge storage element 1790 is configured to provide temporary power to the processing device 1700 sufficient to power it long enough to store the machine state in the plurality of non-volatile logic element arrays 1710 after the output power supply rail 1755 is disconnected from the input power supply 1750. The charge storage element 1790 may be at least one dedicated on-die (or off-die) capacitor designed to store such emergency power. In another approach, the charge storage element 1790 may be circuitry in which naturally occurring parasitic charge builds up in the die, where the dissipation of this charge from the circuitry to ground provides sufficient power to complete a backup operation.
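Putting [00114]-[00119] together, the backup/restore handshake is a small event-driven sequence. The sketch below walks through both directions of that sequence; the step strings paraphrase the text, while the threshold value and the function signature are invented for illustration.

    def power_event(power_quality, threshold=1.0):
        steps = []
        if power_quality < threshold:            # detector 1740 flags bad power
            steps += [
                "PMC 1760: stop clocks (machine state frozen in FFs)",
                "regulator 1770: open switch 1780, send disconnect signal",
                "NVL controller 1730: back up FF state (runs on charge 1790)",
            ]
        else:                                    # regulator reports good power
            steps += [
                "PMC 1760: power NVL arrays and NVL controller",
                "NVL controller 1730: restore machine state into FFs",
                "PMC 1760: release clocks; execution resumes where it stopped",
            ]
        return steps

    for step in power_event(0.7) + power_event(1.2):
        print(step)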
[00120] FIG. 18 is a flow chart illustrating a method for customizing wake time and peak power cost during restoration of a computing device volatile storage system state from a non-volatile array backup as described above. This method is executed by a manufacturer of the processing devices described in this disclosure, or by the manufacturer in combination with a customer that will have specific application code to be executed by such processing devices. The manufacturer manufactures 1802 a processing device having a plurality of non-volatile logic element arrays configured to store a machine state represented by a plurality of volatile storage elements of the processing device, wherein the processing device is configured to enable reading out a stored machine state from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements. At step 1804, a number of rows and a number of bits per row in individual ones of the plurality of non-volatile logic element arrays are selected based on a target wake up time, which depends on the time used to read data one row at a time from one of the plurality of non-volatile logic element arrays, and a peak power cost, which depends on the peak power used to read a row of a given length of bits at the same time from the one of the plurality of non-volatile logic element arrays. [00121] There are a number of ways to determine a target design. By one approach, the method includes analyzing 1806 simulations of a design of the non-volatile logic element arrays to determine peak and average power per array. In another approach, the method includes analyzing 1808 simulations of the computing device volatile storage system running application code for peak and average power consumption. In still another approach, simulations of a design of the non-volatile logic element arrays are analyzed 1810 to determine first analysis results including peak and average power per array, and simulations of the computing device volatile storage system running application code are analyzed 1812 to determine second analysis results including peak and average power consumption for the computing device volatile storage system. At least aspects of the first analysis results and at least aspects of the second analysis results are compared 1814 with the target wake up time and the peak power cost to determine a target design for the plurality of non-volatile logic element arrays. [00122] In certain approaches, the NVL mini-arrays are arranged by "rows" where each row is read sequentially. Thus, the time used to sequentially access the array with the smallest number of entries among all of the mini-arrays on the die sets the minimum wakeup and sleep time cost, while the number of bits accessed with each array read determines the peak power cost. Accordingly, boundaries can be set on sleep/wake time versus peak power cost by providing a mini-array "compiler" that allows the system designer to choose the number of entries and the number of bits per entry. This flexibility is provided at design time and is fixed after the design is manufactured. [00123] Examples of such analysis include performing an analysis of a particular chip design (without NVL arrays) for peak and average power consumption using such tools as Redhawk, Cadence EPS, Powermill, and the like, while using switching activity derived from chip functional simulations running customer application code. The peak and average power of the design with and without NVL are compared to the product specifications/goals and to the size and capacity of the planned power distribution, which includes analysis of the number of package pins, the package parasitics (inductance, resistance, capacitance), the on-die voltage regulator capacity (amperage supply spec), and the on-die power distribution (current carrying capacity of wires, vias between metal layers, and the like, as compared to electromigration limits, voltage droop, local decoupling capacitor sizes, and the like). Generally, the designer will want to wake up the designed circuit as fast as possible from a low power state. Thus, the designer will build up a spreadsheet, calculate the options for wakeup time versus inrush current, and determine what combinations of NVL array access can be tolerated, as sketched below. In some cases, chip goals or specifications can be modified to make appropriate trade-offs, such as increasing the power distribution capacity and the chip peak power spec limit in order to wake up faster.
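The row-count/row-width trade of [00122]-[00123] reduces to two formulas: wake time scales with the number of rows read sequentially, while peak power scales with the row width read in parallel. A back-of-envelope helper follows; every numeric input is a placeholder rather than a characterized value, and the 8 ns row time simply assumes one row per 125 MHz NVL clock cycle.

    def nvl_tradeoff(total_bits, bits_per_row, t_row_ns, e_bit_pj):
        rows = -(-total_bits // bits_per_row)        # ceiling division
        wake_time_ns = rows * t_row_ns               # sequential row reads
        peak_energy_pj = bits_per_row * e_bit_pj     # bits read simultaneously
        return wake_time_ns, peak_energy_pj

    for width in (16, 32, 64):  # wider rows: faster wakeup, higher peak power
        print(width, nvl_tradeoff(2560, width, t_row_ns=8, e_bit_pj=0.66))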
[00124] FIG. 19 illustrates another method of control over the processing device to achieve control over performance characteristics. As described above, a plurality of non-volatile logic element arrays are controlled 1902 to store a machine state represented by a plurality of volatile storage elements and to read out a stored machine state from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements. An input signal is received 1904 regarding controlling storage into and/or read out of the non-volatile logic element arrays. At least one non-volatile logic controller variably controls 1906 at least one of storage of data to or reading of data from the plurality of non-volatile arrays in parallel, sequentially, or in any combination thereof, based on the input signals. The input signals can be received from a user interface and/or from a separate computing element. For instance, a separate computing element can execute an application that determines 1908 a storage and/or reading sequence for the plurality of non-volatile arrays based at least in part on a determination of power and computing resource requirements for the computing device apparatus. So configured, a user can modify the operating method of the processing device to tune the wake up/backup time and peak power usage to given circumstances. System Example [00125] FIG. 20 is a block diagram of another SoC 2000 that includes NVL arrays, as described above. SoC 2000 features a Cortex-M0 processor core 2002, universal asynchronous receiver/transmitter (UART) 2004 and SPI (serial peripheral interface) 2006 interfaces, and 10KB ROM 2010, 8KB SRAM 2012, and 64KB FRAM (ferroelectric RAM) 2014 memory blocks, characteristic of a commercial ultra low power (ULP) microcontroller. The 130nm FRAM process based SoC uses a single 1.5V supply, an 8MHz system clock, and a 125MHz clock for NVL operation. The SoC consumes 75uA/MHz and 170uA/MHz while running code from SRAM and FRAM, respectively. The energy and time cost of backing up and restoring the entire system state of 2537 FFs is only 4.72nJ and 320ns for backup and 1.34nJ and 384ns for restore, which sets the industry benchmark for this class of device. SoC 2000 provides test capability for each NVL bit, as described in more detail above, and an in-situ read signal margin of 550mV. [00126] SoC 2000 has 2537 FFs and latches served by 10 NVL arrays. A central NVL controller controls all the arrays and their communication with the FFs, as described in more detail above. The distributed NVL mini-array system architecture helps amortize test feature costs, achieving an SoC area overhead of only 3.6% with an exceptionally low system level sleep/wakeup energy cost of 2.2pJ/0.66pJ per bit. [00127] Although the invention finds particular application to microcontrollers (MCUs) implemented, for example, in a System on a Chip (SoC), it also finds application to other forms of processors. A SoC may contain one or more modules which each include custom designed functional circuits combined with pre-designed functional circuits provided by a design library. [00128] While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various other embodiments of the invention will be apparent to persons skilled in the art upon reference to this description. For example, other portable or mobile systems such as remote controls, access badges and fobs, smart credit/debit cards and emulators, smart phones, digital assistants, and any other now known or later developed portable or embedded system may embody NVL arrays as described herein to allow nearly immediate recovery to a full operating state from a completely powered down state.
[00129] While embodiments of retention latches coupled to a nonvolatile FeCap bitcell are described herein, in another embodiment a nonvolatile FeCap bitcell from an NVL array may be coupled to a flip-flop or latch that does not include a low power retention latch. In this case, the system would transition between a full power state (or an otherwise reduced power state based on reduced voltage or clock rate) and a totally off power state, for example. As described above, before turning off the power, the state of the flipflops and latches would be saved in the distributed NVL arrays. When power is restored, the flipflops would be initialized via an input provided by the associated NVL array bitcell. [00130] The techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP). The software that executes the techniques may be initially stored in a computer-readable medium such as a compact disc (CD), a diskette, a tape, a file, memory, or any other computer readable storage device, and then loaded and executed in the processor. In some cases, the software may also be sold in a computer program product, which includes the computer-readable medium and packaging materials for the computer-readable medium. In some cases, the software instructions may be distributed via removable computer readable media (e.g., floppy disk, optical disk, flash memory, USB key), via a transmission path from computer readable media on another digital system, etc. [00131] Although method steps may be presented and described herein in a sequential fashion, one or more of the steps shown and described may be omitted, repeated, performed concurrently, and/or performed in a different order than the order shown in the figures and/or described herein. Accordingly, embodiments of the invention should not be considered limited to the specific ordering of steps shown in the figures and/or described herein. [00132] It is therefore contemplated that the claims will cover any such modifications of the embodiments as fall within the true scope of the invention. |
A nanowire (130) transistor is provided that includes a well implant having a local isolation region (120) for insulating a replacement metal gate (405) from a parasitic channel, and oxidized caps (125) in the extension regions that inhibit parasitic gate-to-source and gate-to-drain capacitances. The method for making the device includes forming a fin of alternating selectively etchable layers (e.g., Si and SiGe); doping the extension (source/drain) regions of these layers with an etch stop dopant (carbon); performing a selective etch for one of the materials; and selectively oxidizing the extension region (215) corresponding to the sacrificial layer, thus forming an oxidized cap (125). |
1. A method of fabricating a nanowire transistor, comprising: oxidizing a well implant to form a local isolation region; forming a fin including alternating layers of a first semiconductor and a second semiconductor on the local isolation region, wherein an initial layer of the first semiconductor abuts the local isolation region, and wherein the fin extends from a first extension region to a second extension region; implanting an etch stop dopant in the first extension region and the second extension region; forming a dummy gate opening to expose a gate region of the fin; and selectively etching the layer of the first semiconductor in the gate region of the fin to form a nanowire from the layer of the second semiconductor in the gate region, wherein the implanted etch stop dopant inhibits the selective etching of the layer of the first semiconductor in the first extension region and the second extension region. 2. The method of claim 1, further comprising: selectively oxidizing the layer of the first semiconductor such that each layer of the first semiconductor includes an oxidized cap extending from the dummy gate opening into the first extension region and the second extension region; and forming a replacement metal gate around the nanowire. 3. The method of claim 1, wherein the first semiconductor is silicon and the second semiconductor is silicon germanium. 4. The method of claim 1, wherein the first semiconductor is silicon germanium and the second semiconductor is silicon. 5. The method of claim 4, wherein implanting the etch stop dopant comprises implanting carbon. 6. The method of claim 2, wherein forming the replacement metal gate comprises depositing an initial high-k dielectric layer. 7. The method of claim 6, wherein forming the replacement metal gate further comprises depositing a subsequent work function layer. 8. The method of claim 7, wherein forming the replacement metal gate further comprises depositing a metal gate fill. 9. The method of claim 1, wherein forming the fin comprises depositing the initial layer of the first semiconductor, a second layer of the second semiconductor, a third layer of the first semiconductor, and a fourth layer of the second semiconductor. 10. The method of claim 1, wherein forming the fin comprises a shallow trench isolation process. 11. A nanowire transistor, comprising: at least one nanowire extending from a first extension region to a second extension region; and a replacement metal gate surrounding the at least one nanowire, wherein the first extension region and the second extension region each comprise at least one semiconductor layer having an oxidized cap abutting the at least one nanowire. 12. The nanowire transistor of claim 11, further comprising: a substrate; and a well implant adjacent the replacement metal gate in the substrate, wherein the well implant includes an oxidized local isolation region between a remaining portion of the well implant and the replacement metal gate. 13. The nanowire transistor of claim 11, wherein the at least one semiconductor layer comprises a silicon germanium layer, the silicon germanium layer including an etch stop dopant. 14. The nanowire transistor of claim 13, wherein the etch stop dopant comprises carbon. 15. The nanowire transistor of claim 11, wherein the replacement metal gate comprises: an outer high-k layer adjacent to the at least one nanowire; a metal gate fill; and a work function layer between the outer high-k layer and the metal gate fill.
least one nanowire comprises a plurality of silicon nanowires, and wherein the at least one semiconductor layer comprises a plurality of silicon germanium layers.17.A nanowire transistor comprising:a substrate;a well implant;a plurality of nanowires;a plurality of selectively etched semiconductor layers interleaved with the plurality of nanowires; anda replacement metal gate surrounding the plurality of nanowires, wherein the well implant includes an oxidized local isolation region, the oxidized local isolation region being configured to insulate the replacement metal gate from a remaining portion of the well implant.18.The nanowire transistor of claim 17, wherein the plurality of nanowires comprise silicon, and wherein the selectively etched semiconductor layers comprise silicon germanium.19.The nanowire transistor of claim 17, wherein the plurality of nanowires comprise silicon germanium, and wherein the selectively etched semiconductor layers comprise silicon.20.The nanowire transistor of claim 17, wherein said replacement metal gate comprises:an outer high-k layer adjacent to the plurality of nanowires;a metal fill; anda work function layer between the outer high-k layer and the metal fill. |
Nanowire transistor with reduced parasitic effects and method for fabricating the sameCross-Reference to Related ApplicationsThis application claims the benefit of U.S. Patent Application Serial No. 14/980,850, filed on Dec. 28, 2015.Technical FieldThis application relates to transistor devices and, more particularly, to nanowire devices having reduced parasitic capacitance and channeling.BackgroundIn advanced process nodes, conventional planar transistor architectures suffer from many problems such as excessive leakage. As a result, a three-dimensional architecture such as a fin field effect transistor (finFET) process is typically used in these advanced nodes. A "fin" in a finFET device comprises a three-dimensional strip on a semiconductor substrate. The fin thus has a lower surface that abuts the surface of the substrate and three remaining surfaces that protrude above the surface of the substrate. A gate is then deposited over the fin such that the gate is directly adjacent to the three remaining surfaces of the fin. In contrast, in a conventional planar structure, the gate is directly adjacent to only one surface of the channel. The channel can therefore be switched off more effectively in the finFET device, thereby reducing leakage current and making advanced process nodes possible.Although finFETs are therefore advantageous, the gate cannot directly control the surface of the fin adjacent to the surface of the substrate. To provide better gate control, a surrounding-gate architecture has been developed in which fins are converted into one or more nanowires suspended above the surface of the substrate. A surrounding-gate device may thus be referred to as a nanowire device or transistor. To begin forming a nanowire transistor, a well implant is formed in a semiconductor substrate. The fab then alternately deposits Si and SiGe layers on the well implant. These alternating layers are then etched to form fins. The fab then deposits a shallow trench isolation oxide fill around the fins, followed by a dummy gate. After the dummy gate is formed, the fab performs extension implants, spacer deposition, source/drain epitaxial (epi) growth, junction implants, and interlayer dielectric (ILD0) fill, and then the dummy gate is removed. With the dummy gate removed, the nanowires can then be formed by selectively etching either the Si layers or the SiGe layers in the fin. If the SiGe layers are removed, the resulting nanowires are silicon. Conversely, if the silicon layers are selectively etched, the nanowires are SiGe. A gate structure can then be deposited around the nanowires.Although the resulting nanowire device has better gate control than comparable finFET devices, the selective etching of the silicon germanium (or silicon) layers through the window between the spacers prior to gate deposition produces an undercut under the spacers. Because of this undercut of the SiGe or Si layers, the resulting gate-to-source and gate-to-drain parasitic capacitances are relatively high. Furthermore, the bottom parasitic channel in the well implant below the gate cannot be well controlled because there is no surrounding gate contact for the bottom parasitic channel, which results in an undesired leakage current.
In addition, there is an undesirable parasitic capacitance between the gate and the bottom parasitic channel.Accordingly, there is a need in the art for an improved nanowire device architecture with reduced parasitic capacitance and reduced parasitic channeling.SummaryTo provide an improved reduction in parasitic capacitance in a nanowire device, a local isolation region is formed in the well implant that inhibits the formation of a parasitic channel when the replacement metal gate is charged. Additionally, the extension regions in the nanowire device are implanted with an etch stop dopant that inhibits selective etching of the first semiconductor layers in the extension regions. The etch stop dopant also renders the first semiconductor layers in the extension regions susceptible to a selective oxidation that forms oxidized caps to insulate the replacement metal gate from the drain and source regions.DrawingsFIG. 1A illustrates a longitudinal cross section of a nanowire transistor in accordance with an aspect of the present disclosure.FIG. 1B is a transverse cross section of the nanowire transistor of FIG. 1A taken along the dashed line A:A.FIG. 2 is a transverse cross section of a pair of fins prior to a selective etching process.FIG. 3 is a longitudinal cross section of one of the fins of FIG. 2 after forming the dummy gate, spacers, and extension regions.FIG. 4 is a longitudinal cross section of the fin of FIG. 3 after a selective etch process and after oxidized caps are formed in the extension regions.FIG. 5 is a transverse cross section of the fins of FIG. 2 after a selective etching process.FIG. 6 is a flow chart of a method of fabrication in accordance with an aspect of the disclosure.Embodiments of the present disclosure and its advantages are best understood by referring to the following detailed description. Like reference numerals are used to identify like elements in the one or more drawings.Detailed DescriptionTo avoid undercutting during the selective etching of the nanowires, an extension implant of an etch stop dopant is disclosed that renders the extension regions resistant to the selective etch. The same etch stop dopant makes the extension regions susceptible to a selective oxidation performed prior to forming a replacement metal gate. Thus, the resulting gate-to-source and gate-to-drain parasitic capacitances are relatively low due to the lower k value of the oxide layer and the reduced undercut separating the gate from the source/drain regions. Additionally, prior to depositing the epitaxial layers of the fin, a local isolation region may be formed in the well implant by oxygen implantation to provide a reduced parasitic capacitance between the replacement metal gate and any parasitic channel formed in the well implant.An example nanowire transistor 100 is shown in cross-sectional view along the longitudinal axis of fin 105 in FIG. 1A. As will be discussed further herein, the fins 105 include alternating first and second semiconductor layers. For example, the first semiconductor layers may comprise silicon (Si) layers, and the second semiconductor layers may comprise silicon germanium (SiGe) layers. The nanowires 130 may be formed from either the first semiconductor layers or the second semiconductor layers, depending on which is selectively etched during the fabrication of the nanowire transistor 100.
The selective etch process selectively etches only the first semiconductor layers or the second semiconductor layers to form the nanowires 130. As used herein, the semiconductor layer that is removed by the selective etch will be referred to as the selectively etched semiconductor layer, while the remaining semiconductor layers will be referred to as nanowire layers. For example, in a Si nanowire embodiment, the silicon layers will be the nanowire layers and the SiGe layers will be the selectively etched layers. Conversely, in a SiGe nanowire embodiment, the SiGe layers will be the nanowire layers and the Si layers will be the selectively etched layers.A replacement metal gate including metal gate fill 145 surrounds nanowire 130 and is separated from nanowire 130 by an internal work function layer 150 and an outer high-k dielectric layer 140. The high-k dielectric layer 140 thus contacts the nanowires 130, while the work function layer 150 separates the metal gate fill 145 from the high-k dielectric layer 140. The fins 105 extend longitudinally in the same direction as the nanowires 130. In contrast, the replacement metal gate (including the metal gate fill 145, the work function layer 150, and the high-k dielectric layer 140) extends laterally across the fins 105 at right angles to the longitudinal axis defined by the nanowires 130. With regard to this lateral extension across the fin 105, the replacement metal gate is located between a pair of spacer layers 115 deposited over the fins 105. The extension regions 110 lie directly below the spacer layers 115, at either end of the nanowire 130 and adjacent the corresponding drain/source regions 155. Each extension region 110 is thus located between the nanowire 130 and a drain/source region 155. As will be discussed further herein, an etch stop dopant is implanted into the extension regions 110 so that they resist the selective etch that forms the at least one nanowire 130. The selectively etched semiconductor layer in the extension regions thus resists the selective etch that forms the nanowires 130 in the channel portion of nanowire transistor 100. For example, in the Si nanowire embodiment, the SiGe layers (discussed further below) are the selectively etched semiconductor layers. The selective etch would also tend to etch the SiGe layers within the extension regions 110. However, the etch stop dopant implanted into the extension regions 110 inhibits the selective etching of the SiGe layers in the extension regions 110 in the silicon nanowire embodiment.The suppression of selective etching within the extension regions 110 means that the replacement metal gate does not extend into the extension regions 110 but is instead limited to the channel region between them. This is very advantageous for reducing unwanted gate-to-source and gate-to-drain parasitic capacitances in nanowire transistor 100. As will be further explained, the metal gate fill 145 and its corresponding inner layer 150 and outer layer 140 are deposited into the dummy gate opening defined by the spacers 115. To further reduce these parasitic capacitances, the extension regions 110 are oxidized through the dummy gate opening to form oxidized caps 125 in the selectively etched semiconductor layers prior to deposition of the replacement metal gate. The metal gate fill 145 and its inner layer 150 and outer layer 140 are then finally deposited through the dummy gate opening such that an oxidized cap 125 is located between each of the two longitudinal ends of nanowire 130 and the remainder of the corresponding extension region 110.
Thus, not only is the metal gate fill 145 prevented from extending into the extension regions 110, it is also insulated from the extension regions 110 by the oxidized caps 125, thereby further reducing any resulting gate-to-source and gate-to-drain parasitic capacitance.A cross-sectional view of nanowire transistor 100 taken along dashed line A:A is shown in FIG. 1B. In contrast to planar and finFET approaches, the nanowires 130 are completely surrounded by the metal gate fill 145, such that the channel formed in each nanowire 130 can be better controlled. In the nanowire transistor 100, there are two fins 105 such that there are actually four nanowires 130. It should be understood that fewer or greater numbers of nanowires 130 may be implemented in alternative embodiments. The fins 105 are formed on the substrate 160 by deposition of the first and second semiconductor layers on the well implant. Prior to this deposition, oxygen implantation in the well implant forms a continuous local isolation region. The deposited layers are then etched to form fins 105. This same etch forms the local isolation regions 120 from the previously continuous isolation region formed in the well implant. A shallow trench isolation (STI) oxide region 165 can isolate the fins 105. The local isolation regions 120 are highly advantageous because they insulate the replacement metal gate from the well implant to reduce any undesirable parasitic capacitance that would otherwise be formed between the replacement metal gate and the well implant. These advantageous features can be better understood with reference to the following exemplary fabrication method.Fabrication MethodTo begin fabrication, a suitable substrate, such as a silicon substrate or a silicon-on-insulator (SOI) substrate, receives the well implant 200 as shown in FIG. 2. Oxygen implantation (e.g., by a separation-by-implantation-of-oxygen (SIMOX) process) may then be performed to form a continuous local isolation region that will ultimately be patterned into the local isolation regions 120. Then, the selectively etched semiconductor layers 215 are deposited to alternate with the nanowire semiconductor layers 210. The following discussion will assume that the nanowire semiconductor layers 210 are Si layers and the selectively etched semiconductor layers 215 are SiGe layers. In one embodiment, layers 210 and 215 can be epitaxially deposited. Depending on whether the resulting nanowire transistor will be a p-channel metal-oxide-semiconductor (PMOS) device or an n-channel (NMOS) device, layers 210 and 215 may also be p-type or n-type doped. A shallow trench isolation (STI) process can then be performed on layers 210 and 215 to form fins 105 and STI regions 165. For example, fins 105 may be wet etched or dry etched from layers 210 and 215.As shown in FIG. 3, a dummy gate 330 of, for example, an oxide material may then be deposited laterally on each of the fins 105, followed by an angled extension implant into each side of the dummy gate 330 to form the extension regions 110. The extension implant includes an etch stop dopant for the selectively etched layers 215. For example, in embodiments in which the selectively etched layers 215 are SiGe layers, carbon can be used as the etch stop dopant. The extension implant also implants the etch stop dopant into the nanowire layers 210 in the extension regions 110, but these layers already resist the selective etch process that will ultimately be used to form the nanowires, making such doping harmless.
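(As an editorial aside, the parasitic-capacitance benefit of the oxidized caps 125 described above can be illustrated with a first-order parallel-plate estimate. This sketch is added for clarity and is not part of the original disclosure; the symbols are generic:

C \approx k \, \varepsilon_0 \, A / d

where k is the relative permittivity of the material separating the replacement metal gate from the source/drain regions, A is the facing area, and d is the separation. The oxidized caps both lower k on that path (roughly 3.9 for SiO2 versus roughly 11.7 for silicon) and increase the effective separation d, so the gate-to-source and gate-to-drain capacitances fall on both counts.)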
A spacer 115 can then be deposited on either side of the dummy gate 330. Spacer 115 may comprise a suitable material such as silicon nitride.Referring now to FIG. 4, the source/drain regions 155 can be epitaxially deposited on the extension regions 110, followed by junction implantation of the source/drain regions 155. An interlayer dielectric (ILD) filling step forms the ILD region 400. After the dummy gate is removed, the nanowires 130 may be formed by selective etching through the dummy gate opening or window 410 created between the spacers 115. The selectively etched semiconductor layers 215 are selectively etched through the dummy gate opening 410 to form the nanowires 130 from the nanowire layers 210 (FIG. 3). For example, if the selectively etched semiconductor layers 215 are SiGe layers, an acidic wet etch such as HCl, carboxylic acid, HF, or nitric acid can be used. Alternatively, if the selectively etched semiconductor layers 215 are Si layers, an alkaline wet etch such as an aqueous solution of ammonium hydroxide or potassium hydroxide may be used. However, the etch stop dopant implanted into the extension regions 110 prevents the selectively etched layers 215 in the extension regions 110 from being etched by the selective etch process used to form the nanowires 130. Therefore, the selectively etched layers 215 are removed only in the region 405 under the dummy gate. After the selective etch process, the oxidized caps 125 are formed by selective oxidation through the dummy gate opening 410. The etch stop dopant implanted into the extension regions 110 renders the selectively etched layers 215 in the extension regions 110 sensitive to this selective oxidation. In contrast, the nanowires 130 resist the selective oxidation that forms the oxidized caps 125. FIG. 5 is a transverse cross-sectional view across the two fins 105 showing the isolation of the nanowires 130 after the selective etch process.Referring again to FIGS. 1A and 1B, the replacement metal gate process can begin with a high-k dielectric layer 140 deposited through the dummy gate opening 410 of FIG. 4. For example, the high-k dielectric layer 140 may comprise a suitable material, such as hafnium dioxide, zirconium dioxide, hafnium silicate, or zirconium silicate deposited by an atomic layer deposition process. The work function layer 150 is then deposited. The work function layer 150 may comprise titanium nitride, tantalum nitride, titanium aluminum, or other suitable materials. Finally, a metal gate fill 145 is deposited. Metal gate fill 145 can comprise tungsten or aluminum. As with the high-k dielectric layer 140, the metal gate fill 145 and the work function layer 150 can be deposited using a suitable process such as atomic layer deposition or chemical vapor deposition.The fabrication method can be summarized with reference to the flowchart shown in FIG. 6. The method includes an act 600 of oxidizing a well implant to form a local isolation region. The formation of the local isolation regions 120 in FIGS. 1A and 1B is an example of act 600. Moreover, the method includes an act 605 of forming a fin on the local isolation region, the fin including alternating layers of a first semiconductor and a second semiconductor, wherein an initial layer of the first semiconductor abuts the local isolation region, and wherein the fin extends from a first extension region to a second extension region. The formation of fins 105 and their layers 210 and 215 in FIG. 3 is an example of act 605. In this regard, one of the extension regions 110 in FIG.
1A can be considered to be the first extension region, and the other extension region 110 can be considered to be the second extension region.Moreover, the method includes an act 610 of implanting an etch stop dopant in the first extension region and the second extension region. The implantation of the etch stop dopant in the first and second extension regions is discussed with reference to FIG. 3. Finally, the method includes an act 615 of forming a dummy gate opening to expose a gate region of the fin and an act 620 of selectively etching the layers of the first semiconductor in the gate region of the fin to form nanowires from the layers of the second semiconductor, wherein the implanted etch stop dopant inhibits the selective etching of the first semiconductor layers in the first extension region and the second extension region. The formation of the dummy gate opening 410 and the selective etching of the first semiconductor layers are discussed with reference to FIG. 4.Many modifications, substitutions, and variations can be made in the materials, apparatus, configurations, and methods of use of these devices without departing from the scope of the present disclosure, as will be appreciated by those skilled in the art. In light of the above, the scope of the present disclosure should not be limited to the particular embodiments illustrated and described herein, as they are merely some examples thereof, but rather should be fully commensurate with the appended claims and their functional equivalents. |
The present inventive principles provide a method and system for performing backside voltage contrast on an SOI device. The SOI semiconductor device includes a bulk silicon, a box insulator residing on the bulk silicon and a silicon region on the box insulator. The SOI semiconductor device further includes a plurality of structures in the silicon region, the plurality of structures includes a conductive structure. The method and system include mechanical dimpling and chemical etching of the substrate to expose the box insulator. Optionally, a second chemical etch to remove at least a portion of the box insulator may be performed. A charged particle beam, such as energetic electrons from an SEM, for example, may be directed at the backside of the device, and emitted secondary electrons observed. |
What is claimed is:1. A method for performing passive voltage contrast on a silicon on insulator (SOI) device comprising the steps of:grinding a first portion of a substrate of said SOI device with a dimpling tool;etching a second portion of said substrate of said SOI device with tetramethylammonium hydroxide (TMAH) following said grinding of said SOI device with said dimpling tool; anddirecting a beam of electrons at a backside surface of said SOI device.2. The method of claim 1, wherein said beam of electrons is operable for generating a secondary emission of electrons from one or more active regions in said SOI device.3. The method of claim 2, wherein a presence of said secondary electron emission signals determines a p-type active region, and an absence of secondary emission determines an n-type active region.4. The method of claim 2, wherein said secondary emission is operable for inspecting a boundary between a first active region and a second active region of said one or more active regions in said SOI device.5. The method of claim 1 further comprising connecting one or more pins of a pin-grid-array package containing said SOI device to a ground reference.6. The method of claim 1 further comprising:applying a conductive coating to a topside surface of said SOI device; andconnecting said conductive coating to a ground reference.7. The method as recited in claim 6, wherein said conductive coating comprises a carbon ink coating.8. The method of claim 1 further comprising etching a third portion of said substrate and a portion of a box insulator of said SOI device with hydrofluoric acid (HF) following said etching of said SOI device with said TMAH. |
TECHNICAL FIELDThe present invention relates, in general, to semiconductor devices, and more particularly to a method and system for analyzing SOI semiconductor devices using backside voltage contrast.BACKGROUND INFORMATIONSilicon on insulator (SOI) semiconductor devices are increasingly utilized. A SOI semiconductor device includes a semiconductor substrate, or bulk silicon. On the semiconductor substrate is an insulating layer, typically silicon dioxide. The insulating layer is known as the box layer. On the box layer is a silicon region, termed the body, that is typically p-doped. The source and/or drain junctions, shallow trench isolation (STI) regions, gate stacks, spacers, and other structures are formed on the silicon. Conductive structures, such as interconnects and contacts, electrically connect devices within the SOI semiconductor device. Typically, the contacts are formed of tungsten, while the interconnects are composed of copper.SOI semiconductor devices may have failures, such as shorts or open circuits, that arise when the semiconductor device is fabricated. Similarly, components of the semiconductor devices may fail during testing and/or operation. As a result, it is desirable to perform failure analysis to determine the type of failure that has occurred, the components affected and the location of the failure. Additionally, analysis of the structural features of the device may provide information on fabrication process parameters and control.One method of analyzing semiconductor devices is passive voltage contrast. In the passive voltage contrast technique, a scanning electron microscope (SEM) may direct an energetic beam of electrons at an integrated circuit or wafer placed on a stage in a vacuum chamber. Upon directing electrons onto the test circuit or wafer, secondary electrons may be produced. This technique has typically been used for detecting defects, such as gate oxide breakdown, from the front side of the device. The secondary electrons may be emitted when there is a conductive path for electrons to flow. Consequently, the image of areas where there is a conductive path may be brighter than the areas in which there is no conductive path. By determining whether the area around the gate oxide region is dark or bright, breakdown in the gate oxide region may be detected. If the gate oxide has broken down, the area will appear bright since a conductive path has been formed from the gate to the channel. Conversely, a sound gate oxide region will appear dark.Structures within the body of the device may similarly exhibit such variation in the emission of secondary electrons and the resulting image contrast when illuminated by an energetic charged particle (electron or ion) beam. For example, the secondary emission from p-type regions and n-type regions typically differs. Thus, voltage contrast techniques may be advantageously applied to inspect structures within the semiconductor body. However, an energetic particle beam, such as an SEM beam directed to the topside of the chip, cannot penetrate to these structures.Accordingly, there is a need in the art for techniques for backside voltage contrast inspection of semiconductor devices.BRIEF DESCRIPTION OF THE DRAWINGSFor a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:FIG. 1 is a cross-sectional view of a silicon on insulator semiconductor device;FIG.
2 illustrates a cross-sectional view of a device mounted in a pin-grid array (PGA) package which may be used in conjunction with backside voltage contrast in accordance with the present invention;FIG. 3 is a flowchart of a method for performing backside passive voltage contrast on the SOI device in accordance with an embodiment of the present invention;FIG. 4A illustrates an embodiment of the present invention of the SOI device after the step of grinding the substrate using a dimpling tool;FIG. 4B illustrates an embodiment of the present invention of the SOI device during the TMAH etch;FIG. 4C illustrates an embodiment of the present invention of the SOI device during an optional HF acid etch;FIG. 5 is a cross-sectional view of a silicon on insulator semiconductor device illustrating device preparation for backside voltage contrast inspection in accordance with the present inventive principles; andFIG. 6 illustrates an embodiment of the present invention of a passive voltage contrast chamber used for backside voltage contrast on an SOI device.DETAILED DESCRIPTIONThe present inventive principles provide a method and system for performing backside voltage contrast on an SOI device. The SOI semiconductor device includes a bulk silicon, a box insulator residing on the bulk silicon and a silicon region on the box insulator. The SOI semiconductor device further includes a plurality of structures in the silicon region, the plurality of structures including a conductive structure. The method and system include mechanical dimpling and chemical etching of the substrate to expose the box insulator. Optionally, a second chemical etch to remove at least a portion of the box insulator may be performed. A charged particle beam, such as energetic electrons from an SEM, for example, may be directed at the backside of the device, and emitted secondary electrons observed.In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention may be practiced without such specific details. In other instances, well-known circuits have been shown in block diagram form in order not to obscure the present invention in unnecessary detail. For the most part, details concerning timing considerations and the like have been omitted inasmuch as such details are not necessary to obtain a complete understanding of the present invention and are within the skills of persons of ordinary skill in the relevant art.FIG. 1 illustrates an embodiment of a semiconductor on insulator (SOI) semiconductor device 100 which may be used in conjunction with the present invention. SOI device 100 may be formed on bulk silicon substrate 101. SOI device 100 may further include a layer of oxide 102, referred to as the "box insulator," disposed on the bulk silicon 101. In one embodiment, oxide 102 may be composed of SiO2. On top of box insulator 102 may reside a silicon region, referred to as the "body" 103. Body 103 may include active regions 104A-B, e.g., source/drain junctions, and shallow trench isolation (STI) regions 105A-B. Active regions 104A-B may collectively or individually be referred to as active regions 104 or active region 104, respectively. One of ordinary skill in the art would appreciate that active regions 104 may be either p-type regions or n-type regions depending on the type of the device, a PFET or an NFET, respectively. For the purposes herein, FIG.
1 may be used to represent either type of device in a particular embodiment. STI regions 105A-B may collectively or individually be referred to as STI regions 105 or STI region 105, respectively. Active regions 104A-B may be interconnected to metal-1 layer 112 via contacts 106A-B, respectively. Contacts to external circuitry may be made through metal-2 layer 120. Vias 114 through interlayer dielectric 110 connect metal-1 layer 112 and metal-2 layer 120. Further, the top of SOI device 100 may be interconnected with a polysilicon gate 107 via contact 106C. Contacts 106A-C may collectively or individually be referred to as contacts 106 or contact 106, respectively. In one embodiment, contacts 106 may be comprised of tungsten. Polysilicon gate 107 may be separated from body 103 by a gate oxide 108. Contacts 106, polysilicon gate 107 and gate oxide 108 may be interposed by an interlayer dielectric 109. It is noted that SOI device 100 may comprise any number of contacts 106, active regions 104, STI regions 105, gates 107 and gate oxide regions 108 and that FIG. 1 is illustrative. For example, an alternative embodiment may omit STI regions 105.The present invention will be described in terms of a particular method having certain steps and particular tools, such as a scanning electron microscope (SEM). However, one of ordinary skill in the art will readily recognize that the present invention will operate effectively for tools having other and/or additional components. In addition, one of ordinary skill in the art will also readily recognize that the methods of the present invention may include other and/or additional steps that, for clarity, are not depicted. The present invention will be described in terms of certain semiconductor devices and certain structures within the semiconductor devices. However, the present invention is consistent with the testing of other semiconductor devices and/or additional or different structures. One of ordinary skill in the art will also readily recognize that for clarity, only certain portions of the semiconductor devices are depicted.Refer now to FIG. 2 illustrating, in cross-sectional view, a "pin-grid" array (PGA) packaged semiconductor device 200 which may be used in conjunction with the methodology for performing backside voltage contrast in accordance with the present inventive principles, as discussed in conjunction with FIG. 3 below.Integrated circuit (IC) 202 may be a SOI device, such as SOI device 100, FIG. 1. IC 202 is electrically connected to PGA 204 by solder balls 206. The electrical contact may be made to the metal-2 layer (for example, metal-2 layer 120, FIG. 1) at the topside surface 208 of IC 202. Underfill 210, typically an insulating material such as epoxy glue, provides a mechanical bond between IC 202 and PGA 204. Connections to external circuitry (not shown) are provided by an array of pins 212 connected to corresponding ones of solder balls 206. Note that a package cap that would be used in devices packaged for use in applications has been removed, thereby exposing a backside surface 213 of IC 202.Refer now to FIG. 3 illustrating, in flowchart form, a method 300 for performing backside passive voltage contrast on an SOI device. In particular, method 300 may be used in conjunction with a PGA-packaged SOI device such as device 202 shown in FIG. 2. Alternatively, method 300 may be used with an unpackaged device as described further hereinbelow.In step 302, a portion of the substrate (e.g., substrate 101, FIG.
1) is ground to form a "dimple" in the substrate. A dimpling tool may be used to perform step 302. FIG. 4A illustrates a PGA-packaged device 200 including an SOI IC device 100 after grinding. IC 100 includes a dimpled substrate 101 including dimpled surface 402 and box insulator 102. (It would be understood by those of ordinary skill in the art that circuit 403 includes body 103 and the structures disposed therein and the metal interconnects as shown in FIG. 1, which are not shown in FIGS. 4A and 4B.)Referring again to FIG. 3, in step 304, the dimpled substrate is etched with tetramethylammonium hydroxide (TMAH). The etch step may preferentially etch the dimpled surface portion 402, FIG. 4A. FIG. 4B illustrates SOI device 100 during the etching step, with dimpled surface 402, etched surface 404 at an intermediate stage of the etching step (404a) and etched surface 404 at an end of the etching step (404b). The etching by TMAH may stop at box insulator 102, as TMAH does not etch oxide material, thereby exposing a backside surface 406 of the box insulator. In other words, the exposed backside surface is defined by the dimpling and etching steps.Box oxide may optionally be removed. If, in step 306, box oxide is to be removed, in step 308 a portion of the substrate 101 and a portion of box insulator 102 are etched using hydrofluoric (HF) acid. In one embodiment, the HF acid may etch box insulator 102 up to the border with body 103 but not including body 103, as illustrated in FIG. 4C. FIG. 4C illustrates an embodiment of the present invention of SOI device 100 in which a portion of substrate 101 and a portion of box insulator 102 have been etched by HF acid at the end of the etching step (404c). Etching using HF acid may require careful attention, as etching with HF acid for too long a time may cause an interaction with body 103. If box oxide material is not to be removed, the HF etch step is eliminated.As noted above, the present inventive principles may be used in conjunction with unpackaged devices. If, in step 312, the device is not packaged, in step 314, a conductive coating, such as carbon ink, is applied to the topside surface, for example surface 208, FIG. 2. This may be further understood by referring to FIG. 5, illustrating an SOI device 100 after the TMAH etch step discussed above. Carbon ink layer 502 is disposed on topside insulator 504 and metal-2 layer 120. This provides a conducting path between metal-2 layer 120 and the body 103 (step 316). (Reference numerals not explicitly referred to correspond to structures described in conjunction with FIG. 1.)If, alternatively, in step 312, the device is packaged, the pins of the PGA are grounded (step 318).In step 320, a charged particle beam is directed onto the backside surface of the device, as illustrated in FIG. 6.FIG. 6 depicts an embodiment of the present invention of a passive voltage contrast chamber 600. It would be appreciated by those of ordinary skill in the art that the interior of chamber 600 is under vacuum. The passive voltage contrast technique may involve attaching PGA-packaged SOI device 202 to a rotating stage 602 in chamber 600. Pins 212 of the PGA package may be electrically connected to ground (not shown in FIG. 6). Rotating stage 602 may include a support member 604 and a pivoting mechanism 606. Device 202 is then rotated into the appropriate location to expose backside surface 607 to a charged particle beam, such as electron beam 608 from SEM 610.
Note that, in an embodiment in which the HF etch step (optional step 308, FIG. 3) is not used, the energy of the electron beam may be selected to penetrate the box oxide and expose the structures within body 103. For example, energies in the range of 5 keV to 20 keV may be used. It would be appreciated by those of ordinary skill in the art that the beam energies depend on the parameters of the box layer, such as thickness and composition, that these energies are exemplary, and that other beam energies may be used in conjunction with alternative embodiments of the present invention, and such alternative embodiments would be within the spirit and scope of the present invention.A detector 612 may be configured to detect any secondary electrons 614 that may be emitted. For example, p-type active regions absorb more electrons (emit more secondary electrons), and thus appear "bright," while n-type active regions absorb fewer electrons (emit fewer secondary electrons), and appear "dark." Consequently, by observing the regions of secondary emission (step 322, FIG. 3) with detector 612, the boundaries of the active regions within body 103 (FIG. 1) may be inspected, for example. |
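As a rough sanity check on the 5 keV to 20 keV beam-energy window discussed above, the Kanaya-Okayama range formula can be used to estimate electron penetration depth in the box oxide. The sketch below is an editorial illustration, not part of the original disclosure: the effective SiO2 parameters and the assumed box thickness of roughly 0.1 to 0.2 um are assumptions, and the formula itself is only an approximation.

def kanaya_okayama_range_um(energy_kev, atomic_weight, atomic_number, density_g_cm3):
    # Kanaya-Okayama (1972) estimate of electron range in a solid, in micrometers.
    return (0.0276 * atomic_weight * energy_kev ** 1.67) / (atomic_number ** 0.89 * density_g_cm3)

# Assumed effective parameters for SiO2 (weighted averages).
A_EFF, Z_EFF, RHO = 20.0, 10.0, 2.2

for e_kev in (5, 10, 20):
    r = kanaya_okayama_range_um(e_kev, A_EFF, Z_EFF, RHO)
    print(f"{e_kev:>2} keV -> ~{r:.1f} um estimated range in SiO2")

Under these assumptions, even a 5 keV beam has an estimated range of roughly 0.5 um, comfortably beyond the assumed box-oxide thickness, which is consistent with the energy window described above.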
The invention relates to a supercapacitor and an integrated assembly including the same. Some embodiments include an integrated assembly having a supercapacitor supported by a semiconductor substrate. The supercapacitor comprises a first electrode base and a second electrode base. The first electrode base includes a first laterally protruding region, and the second electrode base includes a second laterally protruding region intersecting the first laterally protruding region. A distance between the first laterally protruding region and the second laterally protruding region is less than or equal to about 500 nm. Carbon nanotubes extend upward from the first and second electrode bases. The carbon nanotubes are configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base. A pseudocapacitive material is dispersed throughout the first membrane structure and the second membrane structure. An electrolyte material is within and between the first and second membrane structures. Some embodiments include methods of forming an integrated assembly. |
1.An integrated assembly comprising an ultracapacitor supported by a semiconductor substrate; the ultracapacitor comprising:first and second electrode bases; the first electrode base including a first laterally protruding region, and the second electrode base including a second laterally protruding region intersecting the first laterally protruding region; a distance between the first laterally protruding region and the second laterally protruding region being less than or equal to about 500 nm;carbon nanotubes extending upward from the first electrode base and the second electrode base; the carbon nanotubes being configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base;a pseudocapacitive material dispersed throughout the first membrane structure and the second membrane structure; andan electrolyte material within and between the first membrane structure and the second membrane structure.2.The integrated assembly of claim 1, wherein the integrated assembly includes a memory array directly below the ultracapacitor.3.The integrated assembly of claim 2, wherein the memory array is directly below only a portion of the ultracapacitor.4.The integrated assembly of claim 1, wherein the pseudocapacitive material comprises a conductive polymer, a transition metal oxide, and/or a transition metal sulfide.5.The integrated assembly of claim 1, wherein the pseudocapacitive material comprises one or more of MnO, RuO, and FeO, where the chemical formulas indicate major components rather than specific stoichiometries.6.The integrated assembly of claim 1, wherein the electrolyte material comprises sulfuric acid and polyvinyl alcohol.7.The integrated assembly of claim 1, wherein the carbon nanotubes have a vertical height of at least about 50 μm.8.The integrated assembly of claim 1, wherein the carbon nanotubes have a vertical height of at least about 60 μm.9.The integrated assembly of claim 1, wherein the carbon nanotubes have an average tube diameter of less than or equal to about 7 nm.10.The integrated assembly of claim 1, wherein the carbon nanotubes have an average maximum distance across the individual nanotubes of less than or equal to about 7 nm.11.The integrated assembly of claim 1, wherein the carbon nanotubes are at a density of at least about 5×10^11/cm^2 within the first membrane structure and the second membrane structure.12.The integrated assembly of claim 1, wherein the carbon nanotubes have an average tube diameter in a range from about 3 nm to about 7 nm.13.The integrated assembly of claim 1, wherein the distance between the first laterally protruding region and the second laterally protruding region is in a range from about 5 nm to about 500 nm.14.The integrated assembly of claim 1, wherein the distance between the first laterally protruding region and the second laterally protruding region is in a range from about 50 nm to about 500 nm.15.The integrated assembly of claim 1, wherein the distance between the first laterally protruding region and the second laterally protruding region is less than or equal to about 250 nm.16.The integrated assembly of claim 1, wherein the distance between the first laterally protruding region and the second laterally protruding region is less than or equal to about 150 nm.17.The integrated assembly of claim 1, wherein:the first laterally protruding region protrudes in a first
direction and has a first width along a second direction orthogonal to the first direction;the second laterally protruding region protrudes in a third direction opposite to the first direction, wherein the second direction is orthogonal to the third direction;the second laterally protruding region has a second width along the second direction; andthe first and second widths are less than or equal to about 100 μm.18.The integrated assembly of claim 17, wherein the first and second widths are in a range from about 3 μm to about 100 μm.19.The integrated assembly of claim 17, wherein:the first laterally protruding region has a first length along the first direction;the second laterally protruding region has a second length along the third direction; andthe first and second lengths are at least about 1000 μm.20.The integrated assembly of claim 19, wherein the first and second lengths are in a range from about 1000 μm to about 10,000 μm.21.The integrated assembly of claim 17, wherein:the first laterally protruding region protrudes outwardly from a first trunk region, wherein the first trunk region has a first trunk width along the first direction;the second laterally protruding region protrudes outwardly from a second trunk region, wherein the second trunk region has a second trunk width along the third direction; andthe first and second trunk widths are less than or equal to about 100 μm.22.The integrated assembly of claim 21, wherein the first and second trunk widths are in a range from about 3 μm to about 100 μm.23.The integrated assembly of claim 1, wherein the first electrode base includes at least 200 of the first laterally protruding regions, and wherein the second electrode base includes at least 200 of the second laterally protruding regions.24.An integrated assembly comprising an ultracapacitor associated with a non-volatile RAM (NVRAM) controller; the ultracapacitor comprising:first and second electrode bases;carbon nanotubes extending upward from the first electrode base and the second electrode base; the carbon nanotubes being configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base;a pseudocapacitive material dispersed throughout the first membrane structure and the second membrane structure; andan electrolyte material within and between the first membrane structure and the second membrane structure.25.The integrated assembly of claim 24, wherein the electrode bases comprise:a metal-containing support material; andan additional metal-containing material above the metal-containing support material.26.The integrated assembly of claim 25, wherein the metal-containing support material comprises a metal nitride, and wherein the additional metal-containing material comprises an iron (Fe) layer over an aluminum (Al) layer.27.The integrated assembly of claim 24, wherein the first electrode base includes a first finger region, and wherein the second electrode base includes a second finger region that intersects the first finger region.28.The integrated assembly of claim 24, wherein the carbon nanotubes are at a density of at least about 5×10^11/cm^2 within the first membrane structure and the second membrane structure, and wherein the carbon nanotubes have an average vertical height of at least about 60 μm.29.
The integrated assembly of claim 24, wherein the pseudocapacitive material comprises one or more of MnO, RuO, and FeO, where the chemical formulas indicate major components rather than specific stoichiometries.30.The integrated assembly of claim 24, wherein the integrated assembly includes a memory array in data communication with the NVRAM controller, and wherein the ultracapacitor is on the same chip as the memory array and directly above the memory array.31.The integrated assembly of claim 30, wherein the memory array is a dynamic random access memory (DRAM) array.32.The integrated assembly of claim 24, wherein the integrated assembly includes a memory array in data communication with the NVRAM controller, and wherein the ultracapacitor is off-chip relative to the memory array.33.A method of forming a supercapacitor, comprising:forming first and second electrode bases over an insulating layer; the first electrode base including a first laterally protruding region, and the second electrode base including a second laterally protruding region intersecting the first laterally protruding region; a distance between the first laterally protruding region and the second laterally protruding region being less than or equal to about 500 nm;forming carbon nanotubes extending upward from the first electrode base and the second electrode base; the carbon nanotubes being configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base;dispersing a pseudocapacitive material throughout the first membrane structure and the second membrane structure; andforming an electrolyte material within and between the first membrane structure and the second membrane structure.34.The method of claim 33, wherein the insulating layer comprises silicon dioxide and is directly against a support base comprising silicon.35.The method of claim 33, wherein the insulating layer is over a support base comprising memory cells.36.The method of claim 33, wherein said forming said first and second electrode bases comprises:depositing a metal-containing support material onto a semiconductor substrate;patterning the metal-containing support material into a preliminary configuration of the first and second electrode bases using one or both of electron beam lithography and inductively coupled plasma etching; andforming an additional metal-containing material over the preliminary configuration of the first and second electrode bases; the additional metal-containing material and the metal-containing support material together forming the first and second electrode bases; the additional metal-containing material including a catalyst for promoting growth of the carbon nanotubes.37.The method of claim 36, wherein the catalyst comprises iron (Fe), and further comprising growing the carbon nanotubes over the catalyst utilizing a precursor comprising a mixture of C2H2 and H2 in a helium carrier gas, a temperature of at least about 590°C, and a pressure of no greater than about 0.4 mbar.38.The method of claim 37, wherein the growing is performed for a duration of at least about 30 minutes.39.The method of claim 37, wherein the growing is performed for a duration of at least about 60 minutes.40.
The method of claim 33, wherein the pseudocapacitive material comprises a conductive polymer, a transition metal oxide, and/or a transition metal sulfide, and wherein the pseudocapacitive material is dispersed throughout the first membrane structure and the second membrane structure utilizing a deposition process comprising one or both of ALD and CVD.41.The method of claim 33, wherein the electrolyte material comprises sulfuric acid and polyvinyl alcohol. |
Supercapacitors and Integrated Assemblies Containing SupercapacitorsTechnical FieldSupercapacitors. Integrated assemblies. Nonvolatile dual in-line memory modules (NVDIMMs). Solid state drives (SSDs). Methods of forming supercapacitors, integrated assemblies, NVDIMMs, and SSDs.BackgroundSupercapacitors (also known as ultracapacitors) are high-capacity capacitors that can utilize electrostatic double-layer capacitance and/or electrochemical pseudocapacitance rather than the solid dielectric of ordinary capacitors.Supercapacitors can be used in integrated assemblies. For example, ultracapacitors can be used to provide backup power in connection with modules having volatile memory, such as dynamic random access memory (DRAM), so that the contents of such volatile memory can be backed up to less volatile memory (or non-volatile memory).It is desirable to develop improved supercapacitors, and to develop improved methods of forming supercapacitors. It would also be desirable to have improved supercapacitors suitable for use in integrated assemblies.SUMMARY OF THE INVENTIONAccording to one aspect of the present application, an integrated assembly is provided that includes a supercapacitor supported by a semiconductor substrate. The supercapacitor includes: first and second electrode bases, the first electrode base including a first laterally protruding region, and the second electrode base including a second laterally protruding region intersecting the first laterally protruding region, with a distance between the first laterally protruding region and the second laterally protruding region of less than or equal to about 500 nm; carbon nanotubes extending upward from the first electrode base and the second electrode base, the carbon nanotubes being configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base; a pseudocapacitive material dispersed throughout the first membrane structure and the second membrane structure; and an electrolyte material within and between the first membrane structure and the second membrane structure.According to another aspect of the present application, an integrated assembly is provided that includes an ultracapacitor associated with a non-volatile RAM (NVRAM) controller. The supercapacitor includes: first and second electrode bases; carbon nanotubes extending upward from the first electrode base and the second electrode base, the carbon nanotubes being configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base; a pseudocapacitive material dispersed throughout the first membrane structure and the second membrane structure; and an electrolyte material within and between the first membrane structure and the second membrane structure.According to yet another aspect of the present application, a method of forming an ultracapacitor is provided.
The method includes: forming first and second electrode bases over an insulating layer, the first electrode base including a first laterally protruding region, and the second electrode base including a second laterally protruding region intersecting the first laterally protruding region, with a distance between the first laterally protruding region and the second laterally protruding region of less than or equal to about 500 nm; forming carbon nanotubes extending upward from the first electrode base and the second electrode base, the carbon nanotubes being configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base; dispersing a pseudocapacitive material throughout the first membrane structure and the second membrane structure; and forming an electrolyte material within and between the first membrane structure and the second membrane structure.Brief Description of the DrawingsFIG. 1 is a schematic diagram of an example arrangement of example components of an example integrated assembly.FIG. 2 is a schematic three-dimensional view of an example arrangement of example components of an example integrated assembly.FIG. 2A is a schematic three-dimensional view of an example arrangement of example components of an example integrated assembly.FIG. 2B is a schematic diagram of an example arrangement of example components of an example integrated assembly.FIG. 3 is a schematic cross-sectional side view of an example region of an example assembly at an example processing stage of an example method.FIG. 4 is a schematic cross-sectional side view of an example region of the example assembly of FIG. 3 at an example process stage subsequent to that of FIG. 3.FIG. 5 is a schematic top view of the example assembly of FIG. 4, wherein the view of FIG. 4 is along line A-A of FIG. 5.FIGS. 6 and 6A are schematic cross-sectional side views of example regions of the example assembly of FIG. 3 at an example process stage subsequent to that of FIG. 4.FIG. 7 is a schematic cross-sectional side view of an example region of the example assembly of FIG. 3 at an example process stage subsequent to that of FIG. 6.FIG. 7A is a schematic three-dimensional view of a region of the example assembly of FIG. 7.FIG. 8 is a schematic cross-sectional side view of an example region of the example assembly of FIG. 3 at an example process stage subsequent to that of FIG. 7.FIG. 9 is a schematic three-dimensional view of a region of the example assembly of FIG. 8.FIG. 10 is a schematic three-dimensional view of an example region of the example assembly of FIG. 9 at an example process stage subsequent to that of FIG. 9.Detailed DescriptionSome embodiments include ultracapacitor configurations suitable for incorporation into integrated assemblies, such as integrated assemblies including non-volatile memory (e.g., non-volatile random access memory (NVRAM)). Some embodiments include methods of forming ultracapacitor configurations. Example embodiments are described with reference to FIGS. 1-10.The ultracapacitors described herein may be suitable as backup power sources in integrated assemblies. For example, FIG.
1 schematically depicts an example integrated assembly 100 including an ultracapacitor as a backup power source.Assembly 100 may be considered an example of an assembly that includes both volatile and nonvolatile memory, with a particular example being an assembly configured to include NVDIMMs (non-volatile dual in-line memory modules). The embodiments described herein may be used with assemblies other than those of FIG. 1, and such other assemblies may or may not include both volatile and nonvolatile memory.The assembly 100 includes an ultracapacitor 120 of the type that can be formed in accordance with the following embodiments. Ultracapacitor 120 is coupled with power regulator 130. Ultracapacitor 120 is shown having positive terminal 122 and negative terminal 124, where such terminals are electrically coupled with power regulator 130. Positive terminal 122 and negative terminal 124 may be coupled to electrodes associated with the ultracapacitor, as shown in FIGS. 9 and 10 (discussed below).The power regulator 130 of FIG. 1 is coupled with the NVRAM controller 140. Controller 140 is coupled to volatile dynamic random access memory (DRAM) 150 and to non-volatile flash memory (NAND flash memory) 160 through NAND flash controller 170. In the illustrated embodiment, the NVRAM controller is also coupled with a PCI Express interface 180 (where PCI means Peripheral Component Interconnect).Ultracapacitor 120 may be configured to provide backup power in the event that assembly 100 loses external power. Backup power may enable data stored on the volatile DRAM to be moved to the non-volatile flash memory so that such data is not irretrievably lost due to loss of external power to assembly 100.FIG. 2 schematically illustrates an example three-dimensional arrangement of the various components of assembly 100. All of the components shown in FIG. 1 may be located within the same integrated assembly package as each other. In some embodiments, the assembly 100 of FIG. 2 may be considered to illustrate an arrangement in which all of the components of FIG. 1 are part of the same integrated chip package as each other (i.e., within a monolithic assembly). As will be understood by those of ordinary skill in the art, the BEOL of FIG. 2 is back-end-of-line circuitry.In the arrangement illustrated in FIG. 2, the supercapacitor 120 is disposed directly above the DRAM array 150. Ultracapacitor 120 is shown having a larger footprint than array 150, and thus array 150 is only under a portion of the ultracapacitor.The schematic diagram of FIG. 2 shows one of the memory cells of DRAM array 150, and specifically shows that this memory cell includes a capacitor in combination with a transistor (access device). This is an example memory cell of an example DRAM array. It should be understood that in other embodiments, the memory cells may have other configurations (e.g., capacitors within the memory cells may be replaced with other storage elements, such as resistive memory devices, phase change memory devices, etc.).In some embodiments, the ultracapacitor 120 may be formed off-chip relative to the other components shown in FIG. 1. For example, FIGS. 2A and 2B schematically illustrate an example configuration of assembly 100a in which supercapacitor 120 is formed off-chip relative to other components (e.g., memory components). In other embodiments, power regulator 130 (and possibly controller 140, etc.)
may be formed with ultracapacitor 120 to be off-chip relative to one or more other components (e.g., memory components).Example methods for forming example ultracapacitors are described with reference to FIGS. 3-10.Referring to FIG. 3, the integrated assembly 10 includes a base 12 and a layer 14 over the base 12.Base 12 may comprise any suitable material. In some applications, the base 12 may include a semiconductor material (e.g., may include, consist essentially of, or consist of single crystal silicon). If the base 12 includes a semiconductor material, the base may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductor material, including but not limited to bulk semiconductor material, such as semiconductor wafers (alone or in assemblies including other materials) and layers of semiconductor material (alone or in assemblies including other materials). The term "substrate" refers to any support structure, including but not limited to the semiconductor substrates described above. In some applications, base 12 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metallic materials, barrier materials, diffusion materials, insulator materials, and the like. In some embodiments, the base 12 may be part of a semiconductor wafer.Layer 14 includes insulating material 16. Insulating material 16 may include, consist essentially of, or consist of any suitable composition, and in some embodiments may include, consist essentially of, or consist of one or more of silicon dioxide, aluminum oxide, silicon nitride, and the like.In some embodiments, the base 12 may be omitted.In some embodiments, layer 14 may be over an assembly that includes memory cells (e.g., of the type shown in FIG. 2 as under ultracapacitor 120). In such embodiments, base 12 may correspond to an assembly including memory cells, and layer 14 may be considered to be monolithically formed over the assembly including memory cells (i.e., on a chip with memory cells). In other embodiments, layer 14 may be formed off-chip relative to an assembly of memory cells (e.g., in an assembly of the type shown in FIG. 2A), and base 12 may correspond to a semiconductor chip (die) supporting layer 14.Referring to FIG. 4, a conductive support material (e.g., a metal-containing support material) 18 is deposited onto insulating material 16 and patterned into first components 20 and second components 22. In some embodiments, components 20 and 22 may be referred to as base components (or as preliminary base configurations). Additional processing (described below with reference to FIGS. 6-10) may be used to form the final configuration of the bases.Support material 18 may include, for example, one or more metal nitrides. For example, in some embodiments, support material 18 may include, consist essentially of, or consist of titanium nitride.Support material 18 may be deposited using any suitable method, including, for example, one or more of atomic layer deposition, chemical vapor deposition, physical vapor deposition, and the like; and in some embodiments, is deposited using a sputter deposition process.FIG. 5 shows a top view of the configuration of FIG. 4 (the view of FIG. 4 is along line A-A of FIG. 5), and shows base assemblies (or preliminary base configurations) 20 and 22 intersecting with respect to each other.
The material 18 may be formed into the desired configuration using any suitable process, and in some embodiments may be patterned using one or both of electron beam lithography and inductively coupled plasma etching.FIG. 5 shows that the first electrode base assembly (or first preliminary electrode configuration) 20 includes a first laterally projecting region (tab, finger or finger region) 24 projecting outwardly from the trunk (stem, main) region 26. The first laterally projecting region projects along the first direction R from the trunk region 26. The second electrode base assembly (or second preliminary electrode configuration) 22 includes a second laterally projecting region (tab, finger or finger region) 28 projecting outwardly from the trunk (stem, main) region 30. The second laterally projecting region projects from the trunk region 30 along a second direction Q, wherein the second direction Q is opposite to the first direction R, as shown in FIG. 5.The first laterally projecting region 24 and the second laterally projecting region 28 intersect each other and are spaced from each other by an intermediate region 32. Intermediate region 32 includes an intermediate distance D between electrode bases 20 and 22. Such intermediate distances can be less than or equal to about 500 nanometers (nm), less than or equal to about 250 nm, less than or equal to about 150 nm, etc.; and in some embodiments can range from about 5 nm to about 500 nm, from about 50 nm to about 500 nm, etc. The small spacing between interdigitated regions 24 and 28 may minimize the path for ion diffusion from the electrolyte to the electrode material in example embodiment supercapacitor configurations formed herein, which may enable extraction of high energy and high power density from such supercapacitors. Additionally, in the example supercapacitor embodiments described herein, the small spacing between the interdigitated regions can maximize (or at least substantially maximize) the available electrochemical surface area, which can enable high capacitance and fast charge/discharge rates.The first finger region 24 has a first width W1 and the second finger region 28 has a second width W2. The first width W1 and the second width W2 may be substantially the same as each other, and may be less than or equal to about 100 micrometers (μm) in some embodiments. For example, the first width W1 and the second width W2 may range from about 3 μm to about 100 μm.In some embodiments, width W1 and width W2 may be considered to extend along the x-axis direction relative to the view of FIG. 5. Such x-axis directions may be orthogonal (or at least substantially orthogonal) to the first direction R and the second direction Q of the protrusions 24 and 28. In some embodiments, one of the R and Q directions may be referred to as the first direction, the x-axis direction may be referred to as the second direction, and the other of the R and Q directions may be referred to as the third direction. The term "substantially orthogonal" means orthogonal to within reasonable tolerances of fabrication and measurement.In some embodiments, the R and Q directions may be considered to extend along the y-axis direction relative to the view of FIG. 5. The trunk regions 26 and 30 have a third width W3 and a fourth width W4 along such y-axis directions. In some embodiments, widths W3 and W4 may be referred to as trunk widths.
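As a back-of-the-envelope check of the interdigitated geometry, the following sketch combines the finger widths and gap given above with the finger lengths and counts given below. The specific values (3 μm fingers, a 100 nm gap, 1000 μm finger lengths, 100 finger pairs) are illustrative assumptions chosen from within the stated ranges, not dimensions of any particular embodiment:

```python
# Back-of-the-envelope geometry for the interdigitated electrode bases.
# All values are assumed examples from within the stated ranges:
#   finger widths W1 = W2 = 3 um   (stated: ~3 um to ~100 um)
#   intermediate gap D = 0.1 um    (stated: <= ~500 nm)
#   finger length L = 1000 um      (stated: ~1000 um to ~10,000 um)
#   n_pairs = 100 finger pairs     (stated: at least ~50 to ~200 fingers)
W1 = W2 = 3.0          # um
D = 0.1                # um (100 nm)
L = 1000.0             # um
n_pairs = 100

# One repeating unit of the comb is one finger from each electrode plus
# the two gaps separating them from their neighbors.
pitch = W1 + W2 + 2 * D                    # 6.2 um
pairs_per_mm = 1000.0 / pitch              # ~161 finger pairs per mm

# Each interior finger faces opposing fingers across the gap on both of
# its long edges, so the total facing edge length grows as ~2 * n * L.
facing_edge_mm = 2 * n_pairs * L / 1000.0  # ~200 mm of sub-500-nm gap

print(f"pitch: {pitch:.1f} um")
print(f"finger pairs per mm: {pairs_per_mm:.0f}")
print(f"total facing edge: {facing_edge_mm:.0f} mm")
```

Even at these modest values, roughly 160 finger pairs fit per millimeter of trunk, which illustrates why the sub-micron gap D dominates the available electrochemical interface.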
The third width W3 and the fourth width W4 may be substantially the same as each other, and may be substantially the same as the first width W1 and the second width W2. For example, in some embodiments, the third width W3 and the fourth width W4 may be less than or equal to about 100 micrometers (μm); and may range from about 3 μm to about 100 μm, for example.The protrusions 24 and 28 have lengths L1 and L2 along the y-axis direction. In some embodiments, the lengths L1 and L2 may be substantially the same as each other and may be at least about 1000 μm. For example, in some embodiments, lengths L1 and L2 may range from about 1000 μm to about 10,000 μm.Components 20 and 22 may each have any suitable number of fingers, and in some embodiments may have at least about 50 fingers, at least about 100 fingers, at least about 200 fingers, and the like. Components 20 and 22 may have the same number of fingers as each other.Referring to FIG. 6, a thin metal layer 34 is formed over support material 18 and patterned into the configuration of electrode bases 20 and 22. Metal layer 34 may be patterned using any suitable method, including, for example, one or both of e-beam lithography and lift-off techniques.In some embodiments, material 18 may be referred to as a first metal-containing material, and metal layer 34 may be considered to include additional metal-containing materials. In the illustrated embodiment, metal layer 34 includes two materials 36 and 38.The first material 36 may be a buffer layer disposed between the second material 38 and the support material 18, and in some embodiments may include, consist essentially of, or consist of aluminum (Al). The first material 36 may be formed to any suitable thickness, and in some embodiments is formed to a thickness of less than or equal to about 10 nm, less than or equal to about 7 nm, and the like.The second material 38 includes a catalyst for promoting the growth of carbon nanotubes. In some embodiments, the second material 38 may include, consist essentially of, or consist of iron (Fe). The second material may be formed to any suitable thickness, and in some embodiments may be formed to a thickness of less than or equal to about 5 nm, less than or equal to about 3 nm, less than or equal to about 1 nm, and the like.The embodiment of FIG. 6 shows layer 34 extending across the entire upper surface of support material 18. In other embodiments, layer 34 may cover only a portion of support material 18. Examples of such other embodiments are shown in FIG. 6A.FIG. 7 shows the assembly 10 at a process stage subsequent to the process stage of FIG. 6. The carbon nanotubes are formed to extend upward from the first electrode base 20 and the second electrode base 22. The carbon nanotubes and electrode bases together form capacitor electrodes 40 and 42.The carbon nanotubes may be considered to be configured as a first membrane structure (or membrane) 44 and a second membrane structure (or membrane) 46, wherein the first membrane structure 44 is associated with the first electrode base 20, and wherein the second membrane structure 46 is associated with the second electrode base 22. Stippling is used to assist the reader in identifying membrane structures 44 and 46. The stacked arrangement of carbon nanotubes is conventionally referred to as a membrane structure.
Alternatively, the stacked arrangement of carbon nanotubes may be referred to herein as a matrix, mesh, or the like.It may be desirable to form carbon nanotubes within membranes 44 and 46 as vertically aligned carbon nanotubes (VA-CNTs). This may enable high surface area from carbon nanotube surfaces, which may enable the formation of supercapacitors capable of high energy densities. One of ordinary skill in the art will understand the term "vertically aligned carbon nanotubes". As used herein, such terms mean that the majority of the carbon nanotubes extend perpendicularly relative to the upper surface of the underlying base structure. For example, most of the carbon nanotubes may extend perpendicularly relative to the upper surface of catalyst material 38. Vertically extending carbon nanotubes may or may not extend orthogonally relative to the upper surface of the catalyst material, so long as they extend generally vertically relative to such upper surface (wherein the term "substantially vertically" means upward rather than laterally). In some embodiments, the nanotubes can be curved, wavy, etc., rather than straight, and can still be considered to extend vertically, so long as the general orientation of the majority of the nanotubes is vertical.FIG. 7A schematically illustrates an enlarged region of a portion of the membrane structure 44 of FIG. 7, and shows a plurality of vertically extending (upwardly extending) carbon nanotubes 48 within such region. Nanotubes 48 are shown tightly packed together.Carbon nanotubes can be vertically aligned and can be formed by any suitable process. In some embodiments, the nanotubes may be formed using a vertical hot-wall Plassys reactor (reaction chamber) with a dedicated chemical vapor deposition (CVD) process. Carbon nanotubes can be grown using C2H2 and H2 in a helium carrier gas, wherein the temperature in the reaction chamber is at least about 590°C, and the pressure in the reaction chamber is less than or equal to about 0.4 millibar (mbar). Growth can be performed for a duration of at least about 30 minutes, at least about 60 minutes, and the like. Membranes 44 and 46 may be formed to a vertical height H (shown in FIG. 7) of at least about 50 μm, at least about 60 μm, etc. The carbon nanotubes 48 may have an average tube diameter D (shown in FIG. 7A) in the range of about 3 nm to about 7 nm. Although the tube 48 is shown as being cylindrical, it should be understood that the tube may have any suitable closed shape when viewed from above, including, for example, a circular shape, an oval shape, a polygonal shape, and the like. If the tube has a closed shape other than circular in top view, the distance D may correspond to the maximum distance across the interior of the tube instead of the diameter. In some embodiments, the tube may be polygonal in plan view and may have three or four side walls.Carbon nanotubes 48 may be formed to any suitable density within membrane structures 44 and 46, and in some embodiments may be formed to a density of at least about 5×10^11/cm^2. Considering a mass density of about 0.3 g/cm^3, such a density can provide a surface area of at least about 2600 m^2/g. Such surface areas can be much higher than those achieved with conventional activated carbon electrodes.The tips of the carbon nanotubes 48 may or may not be pinched off. In some embodiments, it may be advantageous if most of the nanotubes 48 have tips that are not pinched off, to enable material (e.g., pseudocapacitive materials, electrolyte materials, etc.)
to penetrate into the nanotubes.Referring to FIG. 8, pseudocapacitive materials 50 (only some of which are labeled) are dispersed throughout first membrane structure 44 and second membrane structure 46. Pseudocapacitive materials can include any suitable composition; and in some embodiments can include conductive polymers, transition metal oxides, and/or transition metal sulfides. For example, in some embodiments, the pseudocapacitive material may include a combination of one or more of manganese (Mn), ruthenium (Ru), and iron (Fe) with one or both of oxygen (O) and sulfur (S). For example, the pseudocapacitive material may include, consist essentially of, or consist of one or more of MnO, RuO, and FeO, where the chemical formula indicates the major components rather than the specific stoichiometry. In some example embodiments, MnO may be MnO2, RuO may be RuO2, and FeO may be Fe3O4.In some embodiments, the pseudocapacitive material may be dispersed throughout the first membrane structure 44 and the second membrane structure 46 using a deposition process including one or more of atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), and the like.Pseudocapacitive material 50 can increase the capacitance within the supercapacitor via redox reactions, and thus can increase the overall energy density of the supercapacitor.FIG. 9 schematically illustrates a three-dimensional view of interdigitated electrodes 40 and 42, wherein such electrodes include membrane structures 44 and 46, respectively. In the illustrated embodiment, terminal 122 is coupled with electrode 40 and terminal 124 is coupled with electrode 42. Terminals 122 and 124 may correspond to the structures described above with reference to FIG. 1.FIG. 10 shows the assembly 10 after forming an electrolyte material (electrolyte) 60 within and between the first membrane structure 44 and the second membrane structure 46 of electrodes 40 and 42. Electrolyte material 60 may include any suitable composition, and in some embodiments may include a combination of sulfuric acid and polyvinyl alcohol (i.e., may include H2SO4-PVA).The configuration of FIG. 10 includes an ultracapacitor 120. A protective cover (not shown) may be provided over such ultracapacitors, and the ultracapacitors may be used in integrated assemblies, including, for example, those described above with reference to FIGS. 1, 2, and 2A. The lid may contain a low-k dielectric, polymer, or the like.The ultracapacitor 120 may be used in monolithic (on-chip) applications (e.g., assemblies of the type shown in FIG. 2), or off-chip applications (e.g., assemblies of the type shown in FIG. 2A).The ultracapacitor configurations described herein may provide many advantages as compared to conventional designs. For example, they can achieve high capacitance in a relatively small footprint, which can make them suitable for incorporation into highly integrated assemblies. Furthermore, due to the solid state nature of the carbon nanotube electrodes, they can be monolithically integrated into existing integrated assemblies. In addition, the supercapacitors described herein can achieve high energy densities, due at least in part to the high surface area associated with VA-CNTs.
Furthermore, cost efficiencies can be achieved due at least in part to the monolithic integration of the ultracapacitors described herein.The assemblies and structures discussed above may be used within integrated circuits (where the term "integrated circuit" means an electronic circuit supported by a semiconductor substrate); and possibly incorporated into an electronic system. Such electronic systems can be used, for example, in memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and can include multiple layers of multi-chip modules. Electronic systems can be any of a wide range of systems, such as cameras, wireless devices, displays, chipsets, set-top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.Unless otherwise specified, the various materials, substances, compositions, etc. described herein can be obtained by any suitable method now known or yet to be developed (including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.).The terms "dielectric" and "insulating" may be used to describe materials that have insulating electrical properties. The terms are considered synonyms throughout this disclosure. Utilization of the term "dielectric" in some instances and the term "insulating" (or "electrically insulating") in other instances may provide language variation within this disclosure to simplify antecedent basis in the appended claims, and is not intended to indicate any significant chemical or electrical differences.The terms "electrically connected" and "electrically coupled" may both be used in this disclosure. Said terms are considered synonyms. The use of one term in some instances and another term in other instances may provide language variation in this disclosure to simplify antecedent basis in the appended claims.The particular orientations of the various embodiments in the figures are for illustration purposes only, and in some applications, the embodiments may be rotated relative to the orientations shown. The description provided herein and the claims that follow relate to any structure having the described relationships between the various features, whether in the particular orientation of the drawings or rotated relative to that orientation.To simplify the drawings, unless otherwise indicated, the cross-sectional views of the accompanying description show only features within the plane of the cross-section and do not show material behind the plane of the cross-section.When a structure is referred to above as being "on", "adjacent" or "abutting" another structure, it can be directly on the other structure, or intervening structures may also be present. In contrast, when a structure is referred to as being "directly on", "directly adjacent" or "directly abutting" another structure, there are no intervening structures present. The terms "directly below", "directly above", etc. do not denote direct physical contact (unless expressly stated otherwise), but rather upright alignment.A structure (e.g., layer, material, etc.) may be referred to as "extending vertically" to indicate that the structure generally extends upward from an underlying base (e.g., a substrate).
The vertically extending structures may extend substantially orthogonally with respect to the upper surface of the base, or may not extend orthogonally.Some embodiments include an integrated assembly having a supercapacitor supported by a semiconductor substrate. The supercapacitor includes first and second electrode bases. The first electrode base includes a first laterally protruding region, and the second electrode base includes a second laterally protruding region intersecting the first laterally protruding region. The distance between the first laterally protruding region and the second laterally protruding region is less than or equal to about 500 nm. Carbon nanotubes extend upward from the first and second electrode bases. The carbon nanotubes are configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base. Pseudocapacitive material is dispersed throughout the first membrane structure and the second membrane structure. Electrolyte material is within and between the first membrane structure and the second membrane structure.Some embodiments include an integrated assembly that includes an ultracapacitor associated with an NVRAM controller. The supercapacitor includes first and second electrode bases and carbon nanotubes extending upward from the first and second electrode bases. The carbon nanotubes are configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base. Pseudocapacitive material is dispersed throughout the first membrane structure and the second membrane structure. Electrolyte material is within and between the first membrane structure and the second membrane structure.Some embodiments include methods of forming ultracapacitors. The first electrode base and the second electrode base are formed over the semiconductor substrate. The first electrode base includes a first laterally protruding region, and the second electrode base includes a second laterally protruding region intersecting the first laterally protruding region. The distance between the first laterally protruding region and the second laterally protruding region is less than or equal to about 500 nm. Carbon nanotubes are formed to extend upwardly from the first and second electrode bases. The carbon nanotubes are configured as a first membrane structure associated with the first electrode base and a second membrane structure associated with the second electrode base. Pseudocapacitive material is dispersed throughout the first membrane structure and the second membrane structure. An electrolyte material is formed within and between the first membrane structure and the second membrane structure.In compliance with the statute, the subject matter disclosed herein has been described with more or less specific language regarding structural and methodological features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means disclosed herein comprise example embodiments. Therefore, the claims are to be afforded their full scope as literally worded, and are to be appropriately interpreted in accordance with the doctrine of equivalents. |
A sequential access memory structure includes an output bus and a plurality of sequential access memories, each of which is connected to the output bus. Each memory includes a memory array having a plurality of sequentially readable memory elements, a carry output for producing a carry signal when reading of the array has been substantially completed, and a carry input for causing reading of the array in response to a carry signal. The carry output of each memory is connected to a carry input of one other downstream memory respectively in a chain arrangement, and the carry signals cause the arrays to be read sequentially onto the output bus. Each memory further comprises a read-write storage connected between the array and the output bus, the storage including a plurality of sections. Data from the array is loaded into one section of the storage while data is being read from another section of the storage onto the output bus. The sections of memory elements in the array comprise half-pages. The storage comprises two sections, each of which has a half-page of memory elements, and the carry output produces the carry signal prior to reading data from a last half-page of the array out of the storage onto the output bus. Data from the last half-page is read onto the output bus while data from a first half-page of an array of a next downstream memory is being loaded into its storage. |
What is claimed is: 1. A sequential access memory structure, comprising:an output bus; and a plurality of separate sequential access memories coupled together in a chain arrangement, each of which is individually coupled to the output bus and includes: a memory array having a plurality of sequentially readable memory elements; a carry output for producing a carry signal when reading of the array has been substantially completed; and a carry input for causing reading of the array in response to a carry signal, in which: the carry output of each memory is connected to a carry input of one other memory respectively in a chain arrangement; and the carry signals cause the memory elements of the arrays to be read sequentially onto the output bus. 2. A memory structure as in claim 1, further comprising a command bus, in which:each memory is connected to the command bus; and a read command applied to the command bus enables the arrays to be read sequentially in response to the carry signals respectively. 3. A memory structure as in claim 2, in which the read command is a gapless read command.4. A memory structure as in claim 2, in which the carry input of a most upstream memory in said chain arrangement is connected such that the array of the upstream memory will be read onto the output bus in response to the read command without receiving a carry signal.5. A memory structure as in claim 1, in which each memory further comprises a read-write storage connected between the array and the output bus, the storage including a plurality of sections; anddata from the array is loaded into one section of the storage while data is being read from another section of the storage onto the output bus. 6. A memory structure as in claim 5, in which:each array comprises a plurality of sections of memory elements; each section of each storage comprises a plurality of memory elements; and a number of memory elements in each section of each array is the same as a number of memory elements in each section of each storage. 7. A memory structure as in claim 6, in which the carry output produces the carry signal prior to reading data from a last section of an array out of a storage onto the output bus.8. A memory structure as in claim 7, in which:each memory includes an address counter for sequentially addressing the memory elements; and the carry output is controlled by the address counter. 9. A memory structure as in claim 6, in which:the sections of memory elements in each array comprise half-pages; each storage comprises two sections, each of which has a half-page of memory elements; and the carry output produces the carry signal prior to reading data from a last half-page of an array out of a storage onto the output bus, such that the data from the last half-page is read onto the output bus while data from a first half-page of an array of a next downstream memory is being loaded into its storage. 10. A memory structure as in claim 6, in which:the sections of memory elements in each array comprise pages; each storage comprises two sections, each of which has a page of memory elements; and the carry output produces the carry signal prior to reading data from a last page of an array out of a storage onto the output bus, such that the data from the last page is read onto the output bus while data from a first page of an array of a next downstream memory is being loaded into its storage. 11. 
A memory structure as in claim 1, in which each memory is configured such that it will be connected to the output bus only while its array is being read onto the output bus.12. A sequential access memory, comprising:a data output; a plurality of separate sequential access memories coupled together in a chain arrangement, each of which is individually coupled to the data output and comprises: a memory array having a plurality of sequentially readable memory elements; a carry output for producing a carry signal when reading of the array has been substantially completed; and a carry input for causing reading of the array to the data output in response to a carry signal, wherein the carry signals cause the memory elements of the arrays to be read sequentially onto the data output. 13. A memory as in claim 12, further comprising a command input, in which a read command applied to the command input enables the array to be read in response to a carry signal.14. A memory as in claim 13, in which the read command is a gapless read command.15. A memory as in claim 13, in which the carry input is configurable such that the array can be read to the data output in response to the read command without receiving a carry signal.16. A memory as in claim 12, further comprising a read-write storage connected between the array and the data output, the storage including a plurality of sections, in which data from the array is loaded into one section of the storage while data is being read from another section of the storage to the data output.17. A memory as in claim 16, in which:the array comprises a plurality of sections of memory elements; each section of the storage comprises a plurality of memory elements; and a number of memory elements in each section of the array is the same as a number of memory elements in each section of the storage. 18. A memory as in claim 17, in which the carry output produces the carry signal prior to reading data from a last section of the array out of the storage to the data output.19. A memory as in claim 18, further comprising an address counter for sequentially addressing the memory elements, in which the carry output is controlled by the address counter.20. A memory as in claim 17, in which:each section of memory elements in the array comprises a half-page; the storage comprises two sections, each of which has a half-page of memory elements; and the carry output produces the carry signal prior to reading data from a last half-page of the array out of the storage to the data output. 21. A memory as in claim 17, in which:each section of memory elements in the array comprises a page; the storage comprises two sections, each of which has a page of memory elements; and the carry output produces the carry signal prior to reading data from a last page of the array out of the storage to the data output. 22. A memory as in claim 12, in which the memory is configured such that its data output will be active only while the array is being read. |
CROSS REFERENCE TO RELATED APPLICATIONThis application claims the benefit of provisional U.S. patent application Ser. No. 60/178,766, filed Jan. 28, 2000.TECHNICAL FIELDThe present invention generally relates to the art of microelectronic integrated circuits, and more specifically to a chained array of sequential access memories which enables continuous read.BACKGROUND ARTSequential access memories have been developed which have advantages and disadvantages relative to conventional random access memories. In a sequential access memory, individual addresses are not accessible directly. The memory is organized in pages of, for example, 512 bytes each, and it is necessary to read out an entire page or half page in order to obtain the data stored at any particular address on the page. A preferred example of a sequential access memory is the Am30LV0064D UltraNAND(TM), which is commercially available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, Calif. This memory is a Flash ROM device based on NAND architecture.Compared to sequential access non-volatile memories, random access ROMs require more physical pins and connections, mainly for address lines, cost significantly more for the same bit density, and are not available in as high a density as sequential access memories.Sequential access memories, on the other hand, generally require a command sequence to be written to the device in order to select and make a set of information readable. They can only read information from sequential locations in the memory until a new command sequence is written, and thus only support straight-line program execution.The AMD UltraNAND memory has a conventional read command which causes a single 512 byte page of data to be loaded into an input/output storage register in parallel, and output in series.There is a 7 microsecond latency period at the beginning of the read operation during which the data is loaded into the register, and then data can be serially output at approximately 50 nanoseconds/byte.The UltraNAND memory also has a "gapless" read command which enables all of the memory pages to be read out with only a single 7 microsecond latency period at the beginning of the operation. This is accomplished by loading one half-page of data into one section of the register while outputting a previously loaded half-page from another section of the register in a "ping-pong" manner until the entire memory has been read.Although the gapless read command enables a single UltraNAND memory to be read out with only 7 microsecond latency, it does not enable a plurality of UltraNAND memories to read out continuously. For each memory that is to be read, it is necessary to individually input the read command sequence, which itself requires a substantial amount of time, and then read out the data with at least one 7 microsecond latency period for loading the first half-page of data into the input/output register.SUMMARY OF THE INVENTIONThe present invention overcomes the limitations of the prior art by providing a sequential access memory which can be chained together with similar memories to form a memory array that can be read out continuously.
In accordance with the present invention, the read command sequence is loaded into all of the memories simultaneously, and the memories are then read out sequentially with only a single 7 microsecond latency at the beginning of the read operation.More specifically, a sequential access memory structure according to the present invention includes an output bus and a plurality of sequential access memories, each of which is connected to the output bus. Each memory includes a memory array having a plurality of sequentially readable memory elements, a carry output for producing a carry signal when reading of the array has been substantially completed, and a carry input for causing reading of the array in response to a carry signal. The carry output of each memory is connected to a carry input of one other downstream memory respectively in a chain arrangement, and the carry signals cause the arrays to be read sequentially onto the output bus.Each memory further comprises a read-write storage connected between the array and the output bus, the storage including a plurality of sections. Data from the array is loaded into one section of the storage while data is being read from another section of the storage onto the output bus.The sections of memory elements in the array comprise half-pages. The storage comprises two sections, each of which has a half-page of memory elements, and the carry output produces the carry signal prior to reading data from a last half-page of the array out of the storage onto the output bus. Data from the last half-page is read onto the output bus while data from a first half-page of an array of a next downstream memory is being loaded into its storage.These and other features and advantages of the present invention will be apparent to those skilled in the art from the following detailed description, taken together with the accompanying drawings, in which like reference numerals refer to like parts.BRIEF DESCRIPTION OF DRAWINGSFIG. 1 is a block diagram illustrating a sequential access memory according to the present invention;FIG. 2 is a diagram illustrating the arrangement of sections of memory elements in the memory of FIG. 1;FIG. 3 is a diagram illustrating how half-page sections of memory elements are loaded and read out concurrently;FIG. 4 is a diagram illustrating how a plurality of memories as shown in FIG. 1 are connected in a chain arrangement for continuous read to provide a memory array according to the present invention; andFIG. 5 is a timing diagram for the array of FIG. 4.DETAILED DESCRIPTION OF THE INVENTIONFIG. 1 is a simplified block diagram of the AMD Am30LV0064D UltraNAND(TM) memory which has been modified to embody the features of the present invention. It will be understood that this particular memory is being used to describe the principles of the invention, and that the invention is not limited to this example. Conversely, the features of the invention can be applied to any memory having a suitable configuration.A memory 10 embodying the present invention comprises an array 12 of memory elements or cells which are capable of storing data. The elements of the UltraNAND memory are based on a non-volatile "flash" configuration, although the scope of the invention is not so limited.
The detailed configuration and operation of the UltraNAND memory per se are not the particular subject matter of the present invention, and are described in data books available from AMD.The memory 10 further includes input/output storage elements 14 and 16 in the form of data registers. Data is written into or read out of the individual memory elements via the registers 14 and 16 under control of an address register 18 which specifies a write or read address in the array 12. The address is decoded by an X decoder 20 and Y decoders 22 and 24 for memory access. An address bus 26 interconnects the address register 18 and the decoders 20, 22 and 24. An 8-bit I/O register and buffer 28 enables input and output of commands and data via an 8-bit I/O bus 29.Data is written into the array 12 by serially inputting the data one byte at a time into the buffer 28, and therefrom into one of the registers 14 and 16. Data is transferred in parallel one page or one half-page at a time from the register 14 or 16 into the array 12. This operation is reversed for reading data out of the array. The details of the read operation will be described below.Further illustrated are high voltage pumps 30 which boost the nominal 3.0V applied operating voltage of the memory 10 to higher and/or negative voltages as required for the flash memory reading, writing and erasing operations. A state machine 32 sequences the operations of the memory 10 in response to input pulses. A command register 34 is provided to store a command which is to be executed by the memory 10. A status register 36 stores information regarding the status of the memory 10.The UltraNAND memory 10 utilizes a multiplexed address/data bus arrangement. All command, address, and data information is passed to and from the device through I/O[0 . . . 7] (the bus 29). Control signals are applied to the memory 10 for CE# (Chip Enable), CLE (Command Latch Enable), ALE (Address Latch Enable), RE# (Read Enable), WE# (Write Enable), SE# (Spare Area Enable), and WP# (Write Protect). There is also an open drain RY/BY# (Ready/Busy) output pin used to indicate when the memory 10 is busy with an internal operation.Whereas the bus 29 functions as an output bus for read operations, an input or command bus is considered for the purposes of the present invention to include the signal inputs CE#, RE#, WE#, SE#, ALE, CLE and WP#.In accordance with the present invention, the UltraNAND memory 10 (or other suitable memory) is modified to include a carry-out pin 40 and a carry-in pin 42. An input CARRY signal is applied to the pin 42 which is connected to the input of an AND gate 44 or other suitable logic element. The read enable signal RE# is applied to another input of the AND gate 44, the output of which is applied to inputs of the state machine 32 and command register 34.A decoder 46 is also provided in accordance with the present invention to produce an output CARRY signal on the pin 40 when the address in the address register has incremented to the last address of the second to last page or half-page in the array 12, as will be described in detail below.FIG. 2 illustrates the arrangement of memory elements in the array 12. The memory elements are arranged in 16,384 pages of 528 bytes each (each byte consists of 8 one-bit memory elements). Each page 12a consists of two half-pages 12b and 12c and a spare area 12d. The first half-page 12b stores bytes 0-255, the second half-page 12c stores bytes 256-511, and the spare area 12d stores 16 bytes of auxiliary data.
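A quick tally shows how this organization adds up (a simple arithmetic sketch; reading the "0064" in the Am30LV0064D part number as a 64-megabit designation is an inference, not stated in the text):

```python
# Tallying the stated organization of array 12:
# 16,384 pages x (two 256-byte half-pages + 16 spare bytes).
pages      = 16_384
data_bytes = 512                     # half-pages 12b and 12c
spare      = 16                      # spare area 12d

data_total  = pages * data_bytes     # 8,388,608 bytes = 8 MiB
spare_total = pages * spare          # 262,144 bytes of auxiliary data

print(f"data  : {data_total:,} bytes = {data_total * 8 // 2**20} Mbit")
print(f"spare : {spare_total:,} bytes")
# -> data  : 8,388,608 bytes = 64 Mbit
# -> spare : 262,144 bytes
```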
The spare area 12d is not the particular subject matter of the present invention and will be specifically excluded from further description.Excluding the spare area 12d, each page or half-page in the array and storage can be considered to constitute a section of memory elements. In the arrangement illustrated in FIG. 2, each section comprises one page of data. Each data register 14 and 16 can store 512 bytes (one page) of data, in which case one section consists of 512 bytes. For reading, one page of data is loaded into one of the data registers 14 and 16 from the array while one page of data is being output from the other data register 14 or 16 via the buffer 28 and output bus 29 in a "ping-pong" manner.In another embodiment of the invention as illustrated in FIG. 3, each data register 14' and 16' can store 256 bytes (one-half page), in which case one section consists of 256 bytes. For reading out the data in the array 12, one-half page of data is loaded into one of the data registers 14' and 16' from the array while one-half page of data is being output from the other data register 14' or 16' via the buffer 28 and output bus 29.In both cases, the data registers 14 and 16 can be considered as read-write storage sections. Although not explicitly illustrated, it is further within the scope of the invention to provide a single read-write storage including 256 or 512 byte sections corresponding to the registers 14, 16 or 14', 16' respectively.FIG. 4 illustrates a sequential access memory array 50 according to the present invention, including a plurality of memories 10a, 10b, 10c and 10d connected together in a chain arrangement. The signals CE#, RE#, WE#, SE#, ALE, CLE and WP# are applied to all of the memories in parallel via a command bus 52 as described above with reference to FIG. 1. The buffers 28 of the memories 10a to 10d are all connected to the output bus 29.The carry-in pin 42 of the most upstream memory 10a is connected to a supply voltage Vcc such that the memory 10a can read out data without receiving a CARRY signal. The carry-out pin 40 of the memory 10a is connected to the carry-in pin 42 of the next downstream memory 10b. This arrangement continues down the chain of memories with the exception that the carry-out pin 40 of the most downstream memory 10d is not connected to a carry-in input and the output CARRY signal on the carry-out pin 40 of the memory 10d can be used to signal the end of a continuous read operation.The operation of the memory array 50 is illustrated in FIG. 5, assuming the embodiment in which each register 14' and 16' constitutes a 256 byte (half-page) section of memory elements.A gapless read command and the address of the first byte in the array 12 are entered into all of the memories 10a to 10d in parallel via the command bus 52 and the bus 29. Read enable pulses RE# are subsequently applied to all of the memories in parallel. However, the carry-out pins 40 of all of the memories 10a to 10d are all logically low, the carry-in pins 42 of the memories 10b to 10d are logically low, and the AND gates 44 of the memories 10b, 10c and 10d are inhibited. This prevents the read enable pulses RE# from being applied to the state machines 32 of the memories 10b, 10c and 10d, and these memories are initially prevented from reading out data.The carry-in pin 42 of the most upstream memory 10a is connected to Vcc and is logically high. Thus, the AND gate 44 of the memory 10a is enabled, and the read enable pulses RE# pass therethrough to the state machine 32.
The memory 10a executes a gapless read operation in which half-pages of data from the array 12 are alternatingly loaded into one of the data registers 14' and 16' and concurrently read out from the other of the data registers 14' and 16' onto the output bus 29 as described above.As each sequential memory element or byte is loaded or read, the address register 18 is incremented to the next address by a counter in the state machine 32. The decoder 46 is configured to generate a CARRY signal on its carry-out pin 40 when the address in the address register 18 has been incremented to the last address 255 (256th byte) in the first half of the last page. As illustrated in FIG. 5, the CARRY signal is generated when the address is incremented to byte 255 of the 16,384th page.The CARRY signal from the pin 40 of the memory 10a is applied to the carry-in pin 42 of the chip 10b, and the AND gate 44 of the memory 10b is enabled to pass the read enable pulses RE# to the state machine 32 and thereby enable data to be read out of the array 12 of the memory 10b. In response to read pulse RE# 256 as illustrated in FIG. 5, the first byte 256 of the last half-page in the memory 10a is output to the bus 29 from the register 14' or 16', and concurrently the same read pulse causes the first byte 0 of the first half-page in the memory 10b to be loaded into its data register 14' or 16'.This operation continues until the entire last half-page of the memory 10a has been read out onto the bus 29 and the entire first half-page of the memory 10b has been loaded into its register 14' or 16'.Next, the memory 10a goes into an inactive state because all of its data has been read out. The read operation of the memory 10b continues until all of its data has been read out. When the data has been read out of the first half of the last page of the memory 10b, it generates a CARRY signal which is applied to the next downstream memory 10c in the chain in the manner described above.This continues until all of the memories 10a to 10d have been read out sequentially. The only latency period for the entire continuous read operation of all of the memories occurs as the first half-page of data is loaded into the register 14' or 16' of the first memory 10a. In the above example, it was assumed that the memory arrays 12 were read out one-half page at a time. It is further within the scope of the invention to read out data one page at a time as described above. In this case, the decoder 46 would generate the CARRY signal after the address is incremented to the last byte 511 of the next-to-last page. The scope of the invention yet further includes reading out data in sections of less than one-half page or more than one page.In order for the memories 10a to 10d to be read out seamlessly, it is necessary to prevent contention for the bus 29. This is accomplished by assuring that only one memory is operatively connected to the bus 29 at any time.This occurs automatically in the UltraNAND memory as illustrated in FIG. 5. Each memory goes into a tri-state condition (high impedance or floating state) after the last byte has been read out, and is thereby effectively disconnected from the bus 29.Each read enable pulse RE# has a duration of 35 nanoseconds during which it causes the buffer 28 to output data onto the bus 29. There is a latency of 15 nanoseconds between the 35 nanosecond RE# pulses during which the buffer 28 is tri-stated.
As such, there is a 15 nanosecond tri-state condition after the last byte is read out of one memory and before the first byte is read out from the next memory in the chain. This assures that the previous memory chip is disconnected from the bus 29 before the next memory chip is connected thereto.Although bus contention is automatically obviated in the UltraNAND memory, it may be necessary to provide a separate mechanism to accomplish this in other configurations. Such is also contemplated within the scope of the present invention.In summary, the present invention overcomes the limitations of the prior art by providing a sequential access memory which can be chained together with similar memories to form a memory array that can be read out continuously with only a single latency period at the beginning of the operation.Various modifications will become possible for those skilled in the art after receiving the teachings of the present disclosure without departing from the scope thereof.INDUSTRIAL APPLICABILITYThe invention is applicable to the fabrication of microelectronic integrated circuit memories, and to the assembly of a plurality of sequential access memories into a structure such that the memories can be read continuously. |
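The carry-chained gapless read described above lends itself to a compact behavioral sketch. The model below is an illustration only, not the device's internal logic: the sizes are scaled down, the names (SeqMemory, the single-driver rule) are invented for the model, and the carry timing is approximate to within a pulse. It shows the essential mechanism, namely that each chip asserts CARRY one half-page before its array is drained, so the next chip loads its first half-page while the current chip drains its last:

```python
# Behavioral sketch of the carry-chained gapless read (an illustration only,
# not the UltraNAND's internal logic; names and the one-driver rule are
# invented for the model, and carry timing is approximate to within a pulse).
# Sizes are scaled down so the trace stays short.
HALF = 4                        # bytes per half-page (real device: 256)
BYTES = 4 * HALF                # bytes per array (real: 16,384 pages x 512)

class SeqMemory:
    def __init__(self, name):
        self.data = [f"{name}{i}" for i in range(BYTES)]
        self.carry_in = False   # AND gate 44: RE# reaches the chip only if high
        self.loaded = 0         # bytes moved from the array into the registers
        self.out = 0            # bytes driven from the registers onto the bus

chain = [SeqMemory(n) for n in "ABC"]
chain[0].carry_in = True        # most-upstream carry-in tied to Vcc
chain[0].loaded = HALF          # first half-page loaded during the 7 us latency

bus = []
while len(bus) < sum(len(m.data) for m in chain):   # one RE# pulse per pass
    drove = False
    for i, m in enumerate(chain):
        if not m.carry_in or m.out == len(m.data):
            continue                        # chip disabled, or done (tri-stated)
        m.loaded = min(m.loaded + 1, len(m.data))   # ping-pong load advances
        ready = (m.loaded // HALF) * HALF   # bytes in fully loaded half-pages
        if not drove and m.out < ready:     # only one chip drives the bus
            bus.append(m.data[m.out])
            m.out += 1
            drove = True
            # decoder 46: CARRY fires at the last byte of the first half of
            # the last page, one half-page before this chip is drained
            if m.out == len(m.data) - HALF and i + 1 < len(chain):
                chain[i + 1].carry_in = True

print(" ".join(bus))            # A0..A15 B0..B15 C0..C15, with no gaps
```

At the stated timing of 35 ns of RE# drive plus 15 ns of tri-state per byte (50 ns/byte), such a chain sustains roughly 20 Mbytes per second regardless of its length, with the single 7 microsecond latency incurred only once at the start.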
Disclosed is apparatus, and a method, for providing increased cooling air flow for a computer, particularly a thin server mounted in an enclosure (e.g., a cabinet). The thin server has holes in one side thereof, in addition to holes in the front and back, for increased vent openings for air flow, especially for exit of warmed air. The apparatus includes a spacer rail along the one side of the thin server having the holes, the spacer rail exposing the holes. In mounting the thin server in a cabinet, the spacer rail, and the side of the thin server opposite the side having the holes, are fastened to (supported by) the sides of the cabinet, to mount the thin server in the cabinet. |
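For a rough sense of scale of the vent-area relationships quantified in the claims below (a sketch: the 1U height of 1.75 inches and the 5% and rear-25% figures come from the claims, while the 24-inch chassis depth is an assumed example value, not from the text):

```python
# Rough scale for a 1U side panel.  The 1U height and the 5% / rear-25%
# figures come from the claims; the 24-inch depth is an assumed example.
U = 1.75                       # inches per rack unit
height = 1 * U                 # 1U thin server
depth = 24.0                   # ASSUMED chassis depth, inches

side_area = height * depth                 # 42.0 in^2
min_hole_area = 0.05 * side_area           # holes >= 5% of the side area
rear_zone = 0.25 * depth                   # rearmost 25% of the side

print(f"side panel area   : {side_area:.1f} in^2")
print(f"minimum vent area : {min_hole_area:.2f} in^2")
print(f"rear hole zone    : rearmost {rear_zone:.1f} in of the side")
```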
What is claimed is: 1. Chassis structure of a computer, comprising:a computer chassis, having a front and back and opposed sides, at least the front, back and one side, of the opposed sides, of the computer chassis having holes therethrough for circulation of air through the computer chassis; at least one air moving device in the computer chassis, provided to force flow of air through the computer chassis, and adapted to draw air into the computer chassis through the holes in the front of the computer chassis and cause air to flow out of the computer chassis through the holes in the one side and the back of the computer chassis; and a spacer rail extending along the one side of the computer chassis having the holes, and fastened to the one side, said spacer rail when fastened to the one side leaving the holes in the one side exposed, the spacer rail providing a space for air flow along the one side of the computer chassis having the holes therethrough. 2. Chassis structure according to claim 1, wherein the spacer rail is U-shaped in cross-section.3. Chassis structure according to claim 1, wherein said one side of the computer chassis has a front portion closest to the computer chassis front and a rear portion closest to the computer chassis back, the front and rear portions meeting at a mid-point of the one side of the computer chassis, and wherein more than one-half of the area of the holes, for circulation of air, through the one side of the computer chassis, is in said rear portion.4. Chassis structure according to claim 1, wherein all of the holes, for circulation of air through the computer chassis, in the one side of the computer chassis, are in the 25% of the one side closest to the back of the computer chassis.5. Chassis structure according to claim 4, wherein the sum of the area of the holes in the back and the one side of the computer chassis exceeds the area of the holes in the front of the computer chassis.6. Chassis structure according to claim 4, wherein the holes in the one side of the computer chassis constitute at least 5% by area of the entire area of the one side of the computer chassis.7. Chassis structure according to claim 1, wherein said computer chassis is a computer chassis of a thin server, having a height of at most 2U, U being equal to 1.75 inches.8. Chassis structure according to claim 7, wherein the thin server has a height of 1U.9. A computer, comprising:a computer chassis, having opposed front and back, and opposed sides, at least the front, back and one side, of the opposed sides, of the computer chassis having holes therethrough, the holes in each of the front, back and one side being adapted for flow of air through the holes, for circulation of air through the computer chassis; and at least one air moving device in the computer chassis, provided to force flow of air through the computer chassis, and adapted to draw air into the computer chassis through the holes in the front of the computer chassis and cause air to flow out of the computer chassis through the holes in the one side and the back of the computer chassis, wherein more than one-half of the area of the holes in the one side of the computer chassis, for circulation of air through the computer chassis, is in one-half the area of the one side of the computer chassis which is closest to the back of the computer chassis. 10. 
The computer according to claim 9, wherein the computer further comprises internal components which produce heat, said internal components being located within the computer chassis; and wherein said internal components, said holes in the one side of the computer chassis, and said at least one air moving device, are provided to draw air into the computer through the holes in the front of the computer chassis and pass the air by said internal components which produce heat, prior to the air flowing out of the computer chassis through the holes in the one side and the back of the computer chassis.11. The computer according to claim 9, wherein the computer includes baseboard core components, and the one side of the computer chassis having the holes therethrough is the side of the computer chassis closest to said baseboard core components.12. The computer according to claim 11, wherein said baseboard core components include processors, memory, voltage regulation chips and chipsets of said computer.13. The computer according to claim 9, further comprising a spacer rail fastened to the one side of the computer chassis, wherein the spacer rail has first and second portions spaced from each other in a direction perpendicular to the one side of the computer chassis, the first portion being adjacent the one side of the computer chassis, and wherein the first portion exposes the holes in the one side of the computer chassis.14. The computer according to claim 13, wherein the spacer rail is removably fastened to the computer chassis.15. The computer according to claim 9, wherein said computer is a thin server, having a height of at most 2U, U being equal to 1.75 inches.16. The computer according to claim 15, wherein the thin server has a height of 1U.17. At least one computer mounted on a support, comprising:a support, capable of mounting a plurality of computers with each computer positioned above an adjacent computer, the support having fasteners for holding each computer across a width of the support; at least one computer mounted on the support, wherein the at least one computer has a computer chassis having a front exposed at a front of the support, a back opposite the front, and two opposed sides extending between the front and back, at least the front, back and one side, of the two opposed sides, having holes extending therethrough, the holes being adapted for flow of air through the holes, for circulation of air through the computer chassis; at least one air moving device in the computer chassis, provided to force flow of air through the computer chassis, and adapted to draw air into the computer chassis through the holes in the front of the computer chassis and cause air to flow out of the computer chassis through the holes in the one side and the back of the computer chassis; and a spacer rail extending along said one side of the computer chassis, fastened to the one side of the computer chassis, the spacer rail providing a space for air flow along the one side of the computer chassis having the holes therethrough, wherein the spacer rail and the front of the computer chassis in total extend substantially across the width of the support, and wherein the opposed side of the computer chassis to the one side, and the spacer rail, are fastened to the support to mount the computer on the support. 18. At least one computer mounted on the support according to claim 17, wherein the spacer rail is removably fastened to at least one of said one side of the computer chassis and to said support.19.
19. At least one computer mounted on the support according to claim 17, wherein said spacer rail includes first and second leg portions spaced from each other in a direction perpendicular to the one side of the computer chassis, and a third portion extending between the first and second leg portions, and wherein said first leg portion is fastened to said one side of the computer chassis and the second leg portion is fastened to the support.

20. At least one computer mounted on the support according to claim 19, wherein the first leg portion has openings which expose said holes in the one side of the computer chassis.

21. At least one computer mounted on the support according to claim 17, wherein said support is a cabinet, wherein the cabinet has a security door which is adapted to lock the at least one computer in the cabinet, and wherein the security door has an opening exposing the front of the computer chassis of the at least one computer.

22. At least one computer mounted on the support according to claim 21, wherein said cabinet has an air moving device on the top thereof.

23. At least one computer mounted on the support according to claim 17, wherein a plurality of the at least one computer are mounted on the support, with each computer above an adjacent lower computer.

24. At least one computer mounted on a support according to claim 17, wherein each of the at least one computer is a thin server, and has a height of at most 2U, U being 1.75 inches.

25. At least one computer mounted on a support according to claim 17, wherein the support includes support members and an extension slide assembly which can be extended out from the support members, and wherein the opposed side of the computer chassis and the spacer rail are fastened to the extension slide assembly.

26. A method for mounting at least one computer on a support, comprising:
providing at least one computer according to claim 13;
fastening the spacer rail to the support at one portion of the support; and
fastening the opposed side of the computer chassis, which is the side other than said one side, to another portion of the support.

27. A method for mounting at least one computer on a support, comprising:
providing at least one computer according to claim 8;
fastening a spacer rail to said one side of the computer chassis, such that the holes in said one side of the computer chassis are exposed;
fastening the spacer rail to the support at one portion of the support; and
fastening the opposed side of the computer chassis, which is the side other than said one side, to another portion of the support.

28. Chassis structure of a computer, comprising:
a computer chassis, having a front and back and opposed sides, at least the front and one side, of the opposed sides, of the computer chassis having holes therethrough for circulation of air through the computer chassis; and
a spacer rail extending along the one side of the computer chassis having the holes, and fastened to the one side, said spacer rail when fastened to the one side leaving the holes in the one side exposed, the spacer rail providing a space for air flow along the one side of the computer chassis having the holes therethrough, the spacer rail including a channel providing said space for air flow.
29. A computer, comprising:
a computer chassis, having opposed front and back, and opposed sides, at least both the front and one side, of the opposed sides, of the computer chassis having holes therethrough, the holes being adapted for flow of air through the holes, for circulation of air through the computer chassis;
a spacer rail fastened to the one side of the computer chassis, wherein the spacer rail has first and second portions spaced from each other in a direction perpendicular to the one side of the computer chassis, the first portion being adjacent the one side of the computer chassis, and wherein the first portion exposes the holes in the one side of the computer chassis; and
at least one air moving device in the computer chassis, provided to force flow of air through the computer chassis, and adapted to draw air into the computer chassis through the holes in the front of the computer chassis and cause air to flow out of the computer through the holes in the one side of the computer chassis,
wherein more than one-half of the area of the holes in the one side of the computer chassis, for circulation of air through the computer chassis, is in one-half the area of the one side of the computer chassis which is closest to the back of the computer chassis, and wherein the first and second portions of the spacer rail form a space therebetween that is a channel for flow of air along the one side of the computer chassis having the holes therethrough.

30. At least one computer mounted on a support, comprising:
a support, capable of mounting a plurality of computers with each computer positioned above an adjacent computer, the support having fasteners for holding each computer across a width of the support;
at least one computer mounted on the support, wherein the at least one computer has a computer chassis having a front exposed at a front of the support, a back opposite the front, and two opposed sides extending between the front and back, at least both the front and one side, of the two opposed sides, having holes extending therethrough, the holes being adapted for flow of air through the holes, for circulation of air through the computer chassis; and
a spacer rail extending along said one side of the computer chassis, fastened to the one side of the computer chassis, the spacer rail providing a space for air flow along the one side of the computer chassis having the holes therethrough, wherein the spacer rail includes a channel providing said space for air flow,
wherein the spacer rail and the front of the computer chassis in total extend substantially across the width of the support, and wherein the opposed side of the computer chassis to the one side, and the spacer rail, are fastened to the support to mount the computer on the support. |
BACKGROUND

The present invention is directed to apparatus, and a method, for increased cooling of computers (e.g., thin servers), particularly for increased cooling of computers mounted on a support such as on a rack or in a cabinet. The present invention is especially directed to apparatus, and a method, for increased cooling of thin servers, mounted on the support (illustratively, on a rack or in a cabinet), especially where a plurality of thin servers are positioned one above another in a secure enclosure (such as a cabinet having a locked security door).

The present trend of providing more processing power, contained in steadily smaller server designs, is creating large hurdles for thermal cooling solutions. The problem of providing sufficient cooling is exacerbated when a plurality of the thin servers are provided in a cabinet and are locked therein through use of, e.g., a security door which may be locked shut to prevent removal of the servers but which may be opened to permit removal of the servers. Illustratively, the cabinet can include, for example, up to 42 thin servers positioned vertically one above the other in the cabinet. However, use of such a plurality of thin servers (for example, each having a height of 2U or 1U, where U=1.75 inches) increases the amount of cooling necessary; yet, with the thin servers within an enclosure such as a cabinet, and with outlets for warm air from the rear of the thin server being limited due to design configuration limiting the available area for air vents (openings) in the rear of each thin server, there is a restriction on the ability to provide cooling by air flow.

Illustratively, the front of the 1U server should support at least 15% hole area, for ambient inlet air, whereas the rear of the server should equal or exceed this quantity for warmed outlet air. Typically, the rear holes cannot equal or approximate the percentage of front area holes, due to system constraints. This causes an imbalance of ambient inlet air to warmed outlet air, and this imbalance causes excessive back pressure to build up within the system, thus preventing the ability to achieve proper internal component cooling.

Various types of cabinets, in which, for example, thin servers can be mounted, have standard dimensions, including internal width dimensions, as set forth in EIA Standard EIA-310-D, September 1992, of the Electronic Industries Association (American National Standard ANSI/EIA-310-D-1992, approved Aug. 24, 1992). This EIA Standard also defines a cabinet as a free standing and self-supporting enclosure for housing electrical and/or electronic equipment, usually fitted with doors and/or side panels, which may or may not be removable; and defines a rack as an open structure for mounting electrical or electronic equipment. This standard defines panels as flat, rectangular structural members used for the external surface of equipment, describing that panels are designed to be mounted on the mounting flanges of cabinets or racks, and that they are usually used for mounting controls, data presentation, apparatus or equipment. These definitions will be used throughout the present application.

While there are standard dimensions defined for widths of, e.g., cabinets, there are no standard dimensions set for widths of the computers (for example, thin servers) provided in such cabinets.

Conventionally, thin servers are mounted within a cabinet, or on a rack, using extension slides.
The extension slides are mounted to the sides of the chassis of the computer, disallowing any opportunity for cooling through the sides of the chassis.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing and a better understanding of the present invention will become apparent from the following detailed description of exemplary embodiments and claims when read in connection with the accompanying drawings, all forming a part of the disclosure of this invention. While the foregoing and following written and illustrated disclosure focuses on disclosing example embodiments of the invention, it should be clearly understood that the same is by way of illustration and example only and is not limited thereto. The spirit and scope of the present invention are limited only by the terms of the appended claims.

The following represents brief descriptions of the drawings, wherein:

FIG. 1 is a perspective view showing a thin server, including the chassis of the thin server, with holes in one side of the chassis for circulation of air through the chassis, and a spacer rail, according to a first example embodiment.

FIG. 2 is a perspective view of the thin server of this first example embodiment, having the spacer rail fastened to the chassis thereof.

FIG. 3 is a perspective view showing a thin server having a spacer rail according to an example embodiment of the present invention, partially inserted in a cabinet.

FIG. 4 is a schematic perspective view showing another example embodiment of a thin server.

FIG. 5 shows air flow within a thin server according to an example embodiment.

FIG. 6 is a cross-sectional view illustrating positioning of a thin server having a spacer rail attached thereto, mounted on a support, according to an example embodiment.

FIG. 7 shows an extension slide assembly for mounting the spacer rail on a support for the thin server, according to an example embodiment.

DETAILED DESCRIPTION

In the following descriptions in connection with the drawing figures, the present invention will be described in connection with its application to a thin server, for example, a server having a height of 1U (1.75 inches) or 2U (3.50 inches). However, the present invention is not limited to use with such thin servers, and can be applied in connection with any computer, particularly a computer provided in a cabinet, to provide increased cooling. Moreover, the present invention is primarily described in connection with mounting the thin server in a cabinet, by using extension slides mounted to the sides of the cabinet. However, the present invention is not limited thereto, and can be applied to any computer mounted on a support (e.g., a rack) using any technique for fastening the computer to the support (e.g., via the spacer rail).

The present invention provides improved cooling particularly when mounting a plurality of thin servers on a support (for example, in a cabinet as defined previously). Applicant has found that by providing additional cooling vents (holes) through one side of the chassis, particularly the side closest to the baseboard core components of the server (for example, processor(s), memory, voltage regulation chips and other chips), and by attaching a spacer rail to this one side of the chassis having the additional cooling holes, the thin server can be mounted to, e.g., the sides of a support (for example, sides of an enclosure, such as a cabinet) while still providing adequate air flow through the computer and sufficient cooling.
By use of the spacer rail, and with additional holes for air flow provided in the side of the server chassis, these additional holes can be spaced from the side (e.g., side panel) of the enclosure, thereby providing a space for air outflow through the side of the chassis, in addition to air outflow from the rear of the server, to thereby avoid an imbalance of ambient inlet air to removal of warmed outlet air. Moreover, the spacer rail can provide an air flow channel along the side of the thin server, to move air efficiently and effectively along the side of the thin server. In addition, by placement of the holes in the side of the thin server, holes in the side can also be used for inflow of ambient (cooler) air into the thin server.

The spacer rail includes apertures or cut-out portions to expose the additional cooling (air flow) holes through the one side of the chassis, thereby facilitating continual air flow from the chassis side (effectively controlling back pressure). In addition, through use of the spacer rail which, for example, extends a length of the side of the server, the spacer rail helps maintain structural integrity and robustness of the thin server, e.g., when mounted in a security enclosure such as a cabinet.

Through use of the holes in the side of the thin server chassis, the system designer is given more options to solve heating problems in the thin server. Additionally, through the addition of the spacer rail along the side of the server, in combination with the additional holes in the side of the server, the holes are easily exposed to achieve efficient chassis side venting, while providing reinforcement to the server-to-support mounting scheme.

Thus, through use of the spacer rail, which allows for apertures in the side of the thin server (for example, 1U server), the total area of outlet aperture for warmed air can equal or exceed the total area of inlet aperture for ambient air, so that back pressure within the server can be minimized, and proper internal cooling can be more readily addressed.

In addition, by providing the side apertures toward the rear of the server, rather than toward the front where ambient air enters the server, more effective use of the side holes for removing warmed outlet air from the server, rather than acting as an additional inlet for drawing ambient air into the server, can be achieved.

Referring to the figures in which like numerals indicate like elements, FIG. 1 is a perspective view of thin server 1 and spacer rail 3 according to an example embodiment. Shown in one side 5 of thin server 1 are holes 7 for circulating air through thin server 1 (e.g., through the chassis of thin server 1). Depending on placement of holes 7 and of an air moving device (e.g., a fan) in thin server 1, holes 7 can act as inlets for ambient air or as outlets for air warmed within the thin server (which, correspondingly, cools components in the thin server). Thin server 1 has front 2 and rear 4; as can be appreciated, ambient air will enter through front 2 of thin server 1 and will exit through rear 4 of thin server 1, in addition to air flow through holes 7 in one side 5 of thin server 1. Front 2 and rear 4, together with sides 5 and 37 (see FIG. 6), are elements of a chassis of thin server 1. Also shown in FIG. 1 is spacer rail 3; in practice, spacer rail 3 is fastened to side 5 of thin server 1 according to an example embodiment. For example, using fastening holes 17, spacer rail 3 can be fastened to one side 5 of thin server 1.
While, e.g., holes 17 are shown in FIG. 1, which can be screw holes, the fastening need not be accomplished through use of screws; for example, spacer rail 3 can be mounted in slots in one side 5 of thin server 1. Spacer rail 3 can be removably fastened to side 5 of thin server 1, and, as discussed further infra, can be removably fastened to the side of a support.

Spacer rail 3 as shown in FIG. 1 has first and second leg portions, respectively to be attached to side 5 of thin server 1 and to the support (e.g., side of a cabinet) for thin server 1. Connecting first and second leg portions 9, 11, respectively, is connection member 13, which provides air channel 14 for air flow access to holes 7.

First leg portion 9 of spacer rail 3 has cut-out portions 15, 15, for exposing holes 7. While cut-out portions are shown, as can be appreciated first leg portion 9 can have apertures corresponding to holes 7, for exposing holes 7 to permit air circulation.

Also shown in FIG. 1 are holes 19 in second leg portion 11, to facilitate fastening second leg portion 11 to a support (discussed further infra).

The chassis of thin server 1 can be made of materials, and by procedures, used to presently manufacture chassis of thin servers, with the additional procedure of forming holes in side 5 of the chassis of thin server 1 (e.g., by a same technique, such as stamping, used to form holes in a rear panel of the chassis of thin server 1).

Preferably, spacer rail 3 is formed of a sheet metal, and can be formed by appropriate bending and cutting (or, e.g., stamping). Spacer rail 3 can also be made of a plastic; and, e.g., spacer rail 3 can be made by a molding process.

FIG. 2 shows the example embodiment of FIG. 1, with spacer rail 3 fastened to side 5 of thin server 1. Also shown in FIG. 2 are air flow patterns in connection with the structure of this example embodiment. That is, as represented by arrows 21, ambient (cool) air enters front 2 of thin server 1 and also enters holes 7 in side 5 of thin server 1, closest to front 2. Warmed air, represented by arrows 23, exits from rear 4 of thin server 1, and from holes 7 in side 5 of thin server 1, which are closest to rear 4 of thin server 1.

As can be appreciated, by appropriate positioning of holes 7 in side 5 of thin server 1, an appropriate ratio of openings for cool air to enter thin server 1 for cooling, and warmed air to exit thin server 1, can be achieved. In this regard, for example, the area of openings 7 toward rear 4 of thin server 1 can be increased, while the area of openings 7 in side 5 closer to front 2 of thin server 1 can be decreased, so as to increase the area of warmed air outlet and thereby keep excessive back pressure from building up within thin server 1, thus avoiding inadequate internal component cooling. That is, through appropriate forming of holes, and positions thereof, in side 5 of thin server 1, especially together with positioning of a fan within thin server 1 to force air flow through thin server 1, the total outlet opening area for outlet warmed air can equal or exceed total inlet area for ambient air, to thereby minimize back pressure within the thin server and provide better designed internal cooling.
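The inlet/outlet balance described above can be checked with simple arithmetic. The sketch below is illustrative only: the chassis face dimensions and the rear-hole percentage are assumptions, and only the "at least 15% open area in the front" and "at least 5% open area in the side" figures come from this description.

    # Illustrative back-pressure check for a 1U chassis (dimensions assumed).
    # A 1U face is 1.75 in tall; the width and depth are placeholder values.
    FRONT_W, SIDE_D, HEIGHT = 17.0, 24.0, 1.75  # inches (assumed)

    front_area = FRONT_W * HEIGHT    # total front face area
    side_area = SIDE_D * HEIGHT      # total side face area
    rear_area = FRONT_W * HEIGHT     # rear face taken to match the front

    inlet = 0.15 * front_area        # >= 15% open area in the front (inlet)
    rear_outlet = 0.10 * rear_area   # rear open area, below 15% (assumed)
    side_outlet = 0.05 * side_area   # >= 5% open area in the vented side

    print(f"inlet area:  {inlet:.2f} sq in")
    print(f"outlet area: {rear_outlet + side_outlet:.2f} sq in")
    # With the side vents included, total outlet area can equal or exceed
    # the inlet area, which is the condition given in this description for
    # avoiding excessive back pressure inside the thin server.
    print("outlet >= inlet:", rear_outlet + side_outlet >= inlet)

With these assumed numbers, the rear alone (about 3.0 sq in of opening) cannot match the inlet (about 4.5 sq in), but adding the side vents brings the outlet total to roughly 5.1 sq in, satisfying the balance condition.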
FIG. 4 shows schematically, in perspective view, an example embodiment where all holes in side 5 are provided near rear 4 of thin server 1. That is, shown in FIG. 4 is side area 41 for outlet of warmed air. Cool, ambient air 21 enters only through front 2 of thin server 1. Warmed air having passed through thin server 1 (that is, passing internally through the chassis of thin server 1) exits as warmed air 23 through apertures in rear 4 of thin server 1 and via the holes in side area 41 of side 5 of thin server 1. Thus, outlet openings for warmed air can be increased, avoiding excessive back pressure as discussed previously.

Generally, front 2 of a thin server has at least 15% open area as inlet for ambient air for cooling; due to design considerations, the area of holes in front 2 is always greater than the area of holes in rear 4. Through utilizing increased outlet area in side 5, and, in particular, with at least 5% open area in side 5, the total cooling area in side 5 and rear 4 can be made greater than the area of the holes for inlet ambient air in front 2, providing effective cooling. With holes in side cooling area 41 as shown in FIG. 4, e.g., with the holes in side 5 only in the back 25% of side 5, improved outlet area for warmed air is achieved.

FIG. 3 is a perspective view showing cabinet 25 having a plurality of servers, each above an adjacent server, according to an example embodiment. Shown in FIG. 3 is cabinet 25, having front security door 27. Front security door 27 can, for example, be shut and locked, so that thin servers 1 cannot be removed from the cabinet. Shown in FIG. 3 are a plurality 33 of thin servers 1, each vertically above an adjacent server. Each server can, for example, be 1U (that is, 1.75 inches) in height. Of course, the present invention is not limited to being applied to servers having a height of 1U, but can be applied to other computers, including other servers, such as servers having a height of 2U (2U being 3.50 inches in height).

FIG. 3 shows thin server 1 mounted on side 29 and its opposed side (not shown in the perspective view in FIG. 3). That is, cabinet 25 includes supporting structure 31 on opposed sides of cabinet 25, for supporting computer 1.

Shown in FIG. 3 is one of the plurality 33 of thin servers, partially mounted into cabinet 25. As seen in FIG. 3, attachment 35 to spacer rail 3 is mounted on supporting structure 31, which is on the inside of the panel forming side 29; and a side of thin server 1 opposite side 5 thereof is mounted on supporting structure 31 on the side of cabinet 25 opposite to side 29.

Also shown in FIG. 3 is fan 37 on top 39 of cabinet 25. Fan 37 is used in a preferred example embodiment, in order to draw warmed air out of the interior of cabinet 25, facilitating cooling of the plurality 33 of thin servers 1.

FIG. 5 shows an example embodiment of air flow within thin server 1. Arrow 43 shows air flow into thin server 1; flow is then through, inter alia, internal component area 45 of thin server 1, which has internal components which are to be cooled. Component area 45 includes heat generating components to be cooled, which can include (but are not limited to) baseboard core components including processors, memories, voltage regulation chips and other chips (such as chipsets).

Air is moved through thin server 1 by fans 46, 46' and 46''. There are four (4) fans 46, for cooling the core of server 1; two (2) fans 46', for cooling the power supply; and two (2) fans 46'', for cooling expansion cards, shown in FIG. 5. The present invention is not limited to the number and types of fans (in general, air moving devices) shown in FIG. 5.
As can be seen in FIG. 5, air passes through internal component area 45, and as outlet air flow 43A passes through holes 7, and can then pass, for example, into air channel 14 of spacer rail 3 (shown in FIG. 2), to more efficiently cool internal components 45.

Also shown in FIG. 5 is outlet air flow 43A' directed to rear 4 of thin server 1, and holes 44 in rear 4 for air circulation. As is clear in FIG. 5, the area of rear 4 of thin server 1 available for holes for air circulation is relatively small; however, any problems of back pressure can be avoided according to the present invention, utilizing holes 7 in side 5 of thin server 1, as outlets for warmed air.

Fans 46, 46, 46, 46 draw ambient air from the front of thin server 1, shown by arrow 43, and force it to flow over core components. Outlet air is forced out of thin server 1 through vents 7 and 44.

The location of fans 46, 46, 46, 46 is dependent on the location of the core components, which may vary from baseboard to baseboard. The position of fans 46, 46', 46'', etc., preferably is such that they draw cool ambient air from the front of the thin server and force the air over the heat generating components. The number of fans 46, 46', 46'', etc., is variable between different thin servers. Fewer fans are desired for acoustical concerns; however, more are desired for cooling concerns. Acoustical and cooling requirements differ between different thin server (e.g., 1U server) designs.

FIG. 6 shows, in cross-section, thin server 1 mounted on sides of a support. Shown in FIG. 6 is supporting structure 31, 31 at both sides of thin server 1, for example, attached to side 29 of cabinet 25 (not shown) and the side of cabinet 25 opposite side 29. Attached to supporting structure 31 are supporting members 32, which hold attachments 35, 35a respectively attached to spacer rail 3 and the side of thin server 1 opposite side 5 having spacer rail 3 fastened thereto (that is, side 37). Attachments 35, 35a cooperate with supporting members 32, 32 as typical extension slides operate. That is, attachments 35, 35a slide along surfaces of supporting member 32 so as to be inserted, for example, within the support (for example, a cabinet) and supported therein.

As indicated previously, the aforementioned EIA Standard EIA-310-D provides a set size for internal width of, e.g., various types of cabinet for supporting electrical components (including computers, such as thin servers). According to the present invention, the width of thin server 1 is decreased and the decrease in width is made up for by insertion of the spacer rail, providing air channel 14.

For example, and not to be limiting, the internal dimension of spacer rail 3 (distance a in FIG. 6) can be 0.610 inches and the outside dimension b of spacer rail 3 can be 0.730 inches. The width c of thin server 1 can be narrowed to 16.02 inches to provide space for spacer rail 3, while maintaining the standard width between sides of the cabinet holding the thin server.
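The example dimensions above can be totaled directly; the short sketch below does only that. The single assumption made here is that the envelope preserved by the design is the narrowed chassis width plus the outside dimension of the spacer rail.

    # Dimensions from the example above (FIG. 6), in inches.
    a = 0.610   # internal dimension of spacer rail 3 (air channel width)
    b = 0.730   # outside dimension of spacer rail 3
    c = 16.02   # narrowed width of thin server 1

    envelope = c + b  # overall width of server plus spacer rail
    print(f"server + spacer rail: {envelope:.2f} in")  # 16.75 in
    # The chassis is narrowed by roughly the rail's outside dimension, so
    # the combined width still fits the standard mounting width between
    # the cabinet sides, while the 0.610 in channel (a) is gained for air
    # flow along side 5.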
FIG. 7 shows an exploded perspective view of the extension slide assembly, and thin server and spacer rail structure, according to an example embodiment. Shown in FIG. 7 is thin server 1, having internal component area 45. Also shown in FIG. 7 is a bank 51 of fans, to create air flow through thin server 1. Thin server 1 has front 2 and rear 4, and sides 5 and 37. On side 5 is spacer rail 3, having cut-out portions 15 exposing holes 7 in side 5 of thin server 1. Also shown in FIG. 7 is attachment 35 to be attached to spacer rail 3. Attachment 35 cooperates with supporting member 32 so that thin server 1 can be slid in and out of the support (e.g., a cabinet, not shown in FIG. 7) for supporting thin server 1. Supporting member 32 and attachment 35 cooperate to form extension slide assembly 39. As is clear from FIG. 6, similar structure to attachment 35 (that is, attachment 35a) is provided on side 37 of thin server 1, and a supporting member 32 cooperates therewith in supporting thin server 1. Supporting members 32, 32 can be fastened to the support.

The thin server can be provided with the spacer rail fastened thereto by any appropriate technique. For example, with side 5 of thin server 1 and leg portion 9 of spacer rail 3 having corresponding screw holes, screws can be used to fasten spacer rail 3 to side 5 of thin server 1. Alternatively, side 5 of thin server 1 can have slots, with leg portion 9 of spacer rail 3 having complementary structure to fasten spacer rail 3 on side 5. As can be appreciated, spacer rail 3 can be removably fastened on side 5; however, it is also within the contemplation of the present invention that spacer rail 3 is permanently fastened on side 5, including being integrally formed with side 5.

Thin server 1, with spacer rail 3 fastened thereto, can be mounted on a support (for example, mounted in a cabinet) by any appropriate technique. For example, side 37 and leg portion 11 of spacer rail 3 can be provided with attachments 35a, 35, respectively, and slid into supporting members 32, 32, this technique operating as a known extension slide assembly similar to the conventional technique used with slides for, e.g., kitchen drawers. Alternatively, leg portion 11 of spacer rail 3 and side 37 of thin server 1, on the one hand, and side panels of cabinet 25, on the other, can be provided with corresponding screw holes, with screws then being used to fasten thin server 1 to cabinet 25 through these screw holes. Thin server 1 is removably fastened to its support (e.g., to cabinet 25).

Through application of the present invention, having holes in a side of the thin server and having the spacer rail, e.g., providing an air flow channel, chassis side venting can be achieved; moreover, the spacer rail also provides reinforcement to the computer mounting scheme. The present invention is particularly applicable to thin servers which are only 1.75 inches tall, the spacer rail leaving exposed vent holes in the side of this thin server, and providing an air channel for efficient removal of warmed air from the thin server.

Many different embodiments of the present invention may be constructed without departing from the spirit and scope of the invention. For example, it is desirable to make a common spacer rail which can be used for multiple designs of computer. Common mounting holes/techniques can be used on sides of different computers (e.g., different servers), so that a same spacer rail design (e.g., same spacer rail) can be used on the different computers. Moreover, it is desirable to limit the area of the spacer rail leg portion adjacent the computer side having the vent holes, so that a common spacer rail can be used with various designs for the computers (having different numbers of side vent holes and/or different configurations for the side vent holes and hole array), while maintaining enough area of this spacer rail leg portion adjacent the computer side having the vent holes, to provide reinforcement and structural stability.
It should be understood that the present invention is not limited to the specific embodiments described in this specification. To the contrary, the present invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the claims. |
A semiconductor package may include a package substrate, a base die disposed on and electrically coupled to the package substrate, and at least one power plane module disposed on the package substrate at a periphery of the base die. The power plane module may include a top surface and a bottom surface, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface. The semiconductor package may also include a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, where the second section of the semiconductor device may be electrically coupled to the at least one vertically interleaved metal layer at a top surface of the power plane module. |
1. A semiconductor package comprising:
a package substrate;
a base die on and electrically coupled to the package substrate;
at least one power plane module located on the package substrate at the periphery of the base die, the power plane module comprising:
top and bottom surfaces; and
at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and
a semiconductor device comprising a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module.

2. The semiconductor package of claim 1, wherein the at least one vertically interleaved metal layer further comprises a plurality of interleaved metal layers, each of the plurality of interleaved metal layers further comprising:
a top portion coupled to the semiconductor device; and
a bottom portion coupled to the package substrate, wherein the bottom portion has a width greater than a width of the top portion.

3. The semiconductor package of claim 2, wherein the width of the bottom portion is at least 1.5 times the width of the top portion.

4. The semiconductor package of any one of claims 2 or 3, wherein the plurality of interleaved metal layers further comprises at least one ground reference voltage plane and at least one power supply reference voltage plane.

5. The semiconductor package of any of claims 2 or 3, wherein the power plane module further comprises at least one passive component.

6. The semiconductor package of claim 5, wherein the passive component is electrically coupled to at least one metal layer of the plurality of interleaved metal layers.

7. The semiconductor package of claim 5, wherein the passive components comprise multilayer ceramic capacitors and/or silicon capacitors.

8. The semiconductor package of any one of claims 2 or 3, wherein the power plane module further comprises a plurality of trenches on one or more of the plurality of interleaved metal layers.

9. The semiconductor package of claim 8, wherein the plurality of trenches are separated by a dielectric layer.

10. The semiconductor package of claim 8, wherein the plurality of trenches are arranged in an interdigitated arrangement.

11. The semiconductor package of any one of claims 1 to 3, wherein the plurality of interleaved metal layers are separated by a dielectric layer.

12. The semiconductor package of claim 4, wherein the ground reference voltage plane and the power supply reference voltage plane are parallel to each other.
13. The semiconductor package of any one of claims 1 to 3, wherein the power plane module further comprises a first section at the periphery of the base die having a first bottom portion disposed on the package substrate, and a second section at the periphery of the package substrate having a second bottom portion disposed on the motherboard.

14. A computing device comprising:
a circuit board; and
a semiconductor package coupled to the circuit board, wherein the semiconductor package includes:
a package substrate;
a base die on and electrically coupled to the package substrate;
at least one power plane module located on the package substrate at the periphery of the base die, the power plane module comprising:
top and bottom surfaces; and
at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and
a semiconductor device comprising a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module.

15. The computing device of claim 14, wherein the at least one vertically interleaved metal layer further includes a plurality of interleaved metal layers, each of the plurality of interleaved metal layers further including:
a top portion coupled to the semiconductor device; and
a bottom portion coupled to the package substrate, wherein the bottom portion has a width greater than a width of the top portion.

16. A method comprising:
forming a package substrate;
forming a base die on the package substrate;
forming a power plane module at the periphery of the base die, the power plane module comprising:
top and bottom surfaces; and
at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and
forming a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module.

17. The method of claim 16, wherein the at least one vertically interleaved metal layer further includes a plurality of interleaved metal layers, each of the plurality of interleaved metal layers further including:
a top portion coupled to the semiconductor device; and
a bottom portion coupled to the package substrate, wherein the bottom portion has a width greater than a width of the top portion.

18. The method of claim 17, further comprising coupling at least one passive component to the plurality of interleaved metal layers.

19. The method of any of claims 17 or 18, further comprising coupling a plurality of trenches to one or more metal layers of the plurality of interleaved metal layers.

20. The method of claim 19, further comprising arranging the trenches in an interdigitated arrangement. |
Vertical Power Plane Modules for Semiconductor Packaging

BACKGROUND

2.5D packaging techniques integrate two or more homogeneous and/or heterogeneous silicon chiplet components, of the same or different silicon process node(s), on a silicon interposer (i.e., base die) for improved signal bandwidth density and system miniaturization.

However, current 2.5D packaging technology faces challenges. One of the challenges includes power integrity (PI) performance (e.g., Fmax and Vmin) limitations of stacked chiplets or devices due to (a) additional IR drop losses on the redistribution layer (RDL) routing and through-silicon via (TSV) interconnects of the silicon interposer, (b) larger alternating current (AC) noise due to increased distance between stacked chiplets (i.e., circuit blocks) and package/board power delivery decoupling capacitors, and (c) the Imax constraint due to the reduced current-carrying capacity of the TSV.

Current 2.5D packaging technologies also face constraints on scaling of chiplet integration density (i.e., the number of chiplets stacked per interposer) due to the miniaturized interposer and package substrate footprint.

Existing solutions to the above challenges include (a) increasing the platform voltage source (e.g., from 0.9V to 1.1V) to ensure performance, (b) reducing the silicon ICCMax threshold to avoid reliability risks, (c) introducing metal-insulator-metal (MIM) capacitance into the chiplet and/or silicon interposer to suppress the peak impedance of the power delivery network (ZPDN), and (d) expanding the silicon interposer and/or package substrate footprint, to achieve increased chiplet device integration density.

However, the disadvantages of the above solutions include (a) increased device power consumption, (b) degradation of electrical performance, e.g., reduction in maximum frequency (Fmax) threshold, and (c) increased device form factor.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, the same reference numbers generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the disclosure. The dimensions of various features or elements may be arbitrarily expanded or reduced for clarity. In the following description, various aspects of the present disclosure are described with reference to the following drawings, in which:

FIG. 1A shows a cross-sectional view of a semiconductor package with a peripheral vertical power plane module in accordance with aspects of the present disclosure;

FIG. 1B shows a top view layout of a semiconductor package according to the aspect shown in FIG. 1A;

FIG. 2A shows a cross-sectional view of a semiconductor package with a peripheral vertical power plane module in accordance with another aspect of the present disclosure;

FIG. 2B shows a top view layout of a semiconductor package according to the aspect shown in FIG. 2A;

FIG. 3 shows a cross-sectional view of a semiconductor package with a peripheral vertical power plane module in accordance with yet another aspect of the present disclosure;

FIGS. 4A-4P illustrate cross-sectional and top views of an exemplary simplified process flow related to a method for forming a semiconductor package having a peripheral vertical power plane module, according to an aspect of the present disclosure generally similar to that shown in FIG. 1A;
FIG. 5 shows an illustration of a computing device including a semiconductor package in accordance with yet another aspect of the present disclosure; and

FIG. 6 shows a flowchart illustrating a method for forming a semiconductor package in accordance with aspects of the present disclosure.

DETAILED DESCRIPTION

The following detailed description refers to the accompanying drawings, which show by way of illustration specific details and aspects in which the present disclosure may be practiced. These aspects are described in sufficient detail to enable those skilled in the art to practice the disclosure. Various aspects are provided for devices and various aspects are provided for methods. It should be understood that fundamental properties of the device also apply to the method and vice versa. Other aspects may be utilized and structural and logical changes may be made without departing from the scope of the present disclosure. The various aspects are not necessarily mutually exclusive, as some aspects may be combined with one or more other aspects to form new aspects.

Advantages of the present disclosure may include mitigation of direct current (DC) and alternating current (AC) losses, e.g., reduced Vmin and LL3 impedances, which may result in computational core and/or graphics Fmax performance gains.

Another advantage of the present disclosure may include improved power integrity through parasitic power delivery network impedance (ZPDN) reduction, allowing for lower supply voltage thresholds, thus minimizing device power consumption.

Yet another advantage of the present disclosure may include improved Imax capacity (device reliability) through peripheral vertical power plane modules. Compared to discrete cylindrical interconnects with constrained geometries, such as through-mold vias (TMVs) or through-silicon vias (TSVs) through the base die or silicon interposer, this can be achieved by increasing the interconnect volume (i.e., a vertical planar interconnect configuration between the chiplet and the package substrate) to achieve reduced interconnect resistance.

Yet another advantage may include a reduction in base die or silicon interposer footprint and improved package warpage.

The present disclosure generally relates to a device, e.g., a semiconductor package, that may include a package substrate, a base die on and electrically coupled to the package substrate, and at least one power plane module on the package substrate at a periphery of the base die. The power plane module may include top and bottom surfaces, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface. The semiconductor package may also include a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device may be electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module. As used herein, the term "vertically interleaved metal layers" may refer to metal layers that are parallel to the side surfaces of the base die.
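As a rough, non-authoritative illustration of the interconnect-volume argument made in the advantages above, the following sketch compares R = rho * L / A for an assumed array of discrete TSVs against an assumed vertical plane section. The TSV diameter, count and height, and the plane thickness, are invented for illustration; only the plane length and height ranges (5-20 mm and 200-800 μm) appear later in this disclosure.

    import math

    # Rough DC resistance comparison: an array of discrete TSVs vs. a
    # vertical power plane, using R = rho * L / A. Geometry is illustrative.
    RHO_CU = 1.68e-8  # copper resistivity, ohm*m

    def resistance(length_m, area_m2):
        """DC resistance of a uniform conductor: R = rho * L / A."""
        return RHO_CU * length_m / area_m2

    # Assumed TSV array: 100 vias, 10 um diameter, 100 um tall.
    tsv_area = math.pi * (5e-6) ** 2              # cross-section of one via
    r_tsvs = resistance(100e-6, tsv_area) / 100   # 100 vias in parallel

    # Vertical plane: 50 um thick (assumed), 10 mm long, conducting over a
    # 400 um height (within this disclosure's 5-20 mm and 200-800 um ranges).
    r_plane = resistance(400e-6, 50e-6 * 10e-3)

    print(f"TSV array     : {r_tsvs * 1e3:.3f} mOhm")   # ~0.214 mOhm
    print(f"vertical plane: {r_plane * 1e3:.4f} mOhm")  # ~0.0134 mOhm

Even though the assumed plane is four times taller than the assumed TSVs, its far larger conduction cross-section yields roughly an order of magnitude lower resistance, which is the sense in which increased interconnect volume improves Imax capacity.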
In various aspects of the present disclosure, the at least one vertically interleaved metal layer may also include a plurality of interleaved metal layers. Each of the plurality of interleaved metal layers may also include a top portion coupled to the semiconductor device and a bottom portion coupled to the package substrate, wherein the bottom portion has a width greater than that of the top portion.

In various aspects of the present disclosure, a semiconductor device may include passive components coupled to a plurality of interleaved metal layers.

In various aspects of the present disclosure, a semiconductor device may include a plurality of trenches coupled to a plurality of interleaved metal layers. As used herein, "trenches" may refer to raised sections.

In various aspects of the present disclosure, a semiconductor device may include trenches arranged in an interdigitated arrangement.

The present disclosure also generally relates to a computing device. The computing device may include a circuit board and a semiconductor package coupled to the circuit board, wherein the semiconductor package may include: a package substrate; a base die on and electrically coupled to the package substrate; at least one power plane module located on the package substrate at the periphery of the base die, the power plane module including a top surface and a bottom surface, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module.

The present disclosure also generally relates to a method. The method may include: forming a package substrate; forming a base die on the package substrate; forming a power plane module at a periphery of the base die, the power plane module including a top surface and a bottom surface, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and forming a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module.

In order to more readily understand and put the present disclosure into practice, certain aspects will now be described by way of example and not limitation and with reference to the accompanying drawings. Repeated descriptions of features and properties may be omitted for brevity.

FIG. 1A shows a cross-sectional view of a semiconductor package 100 in accordance with aspects of the present disclosure. The cross-sectional view is taken along line A-A' of FIG. 1B.

In the aspect shown in FIG. 1A, the semiconductor package 100 may include a package substrate 102. The package substrate 102 may include contact pads, electrical interconnects, wiring, and other features, which may or may not be shown in any of the figures, and which are conventional features known to those skilled in the art. The various couplings of the components may use conventional methods, including solder bonding, thermocompression bonding, or other metal diffusion methods.
The package substrate 102 may have one or more rigid core layers for improved structural stability or a coreless substrate package for reduced form factor. In other aspects, the package substrate 102 may be part of a larger substrate that supports additional semiconductor packages and/or components.

In one aspect, the semiconductor package 100 may include a plurality of solder balls 104. The package substrate 102 may be connected to a motherboard (not shown) through the plurality of solder balls 104. The plurality of solder balls 104 may also provide electrical connections between the package substrate 102 and the motherboard. In one aspect, the stacked semiconductor package 100 may include a plurality of package bumps 106 disposed on the package substrate 102. The plurality of package bumps 106 may be controlled collapse chip attach (C4) bumps.

In aspects of the present disclosure, the semiconductor package 100 may include a base die 108. The base die 108 may be an active interposer or a passive interposer. In one aspect, the base die 108 may be disposed on the package substrate 102. In one aspect, the base die 108 may be connected to the package substrate 102 through the plurality of package bumps 106. The plurality of package bumps 106 may also provide electrical connections between the base die 108 and the package substrate 102.

In aspects of the present disclosure, the base die 108 may include at least one through-silicon via (TSV) 118. The plurality of package bumps 106 may provide electrical connections between the at least one TSV 118 and the package substrate 102.

In aspects of the present disclosure, the semiconductor package 100 may include a first power plane module 110a. In one aspect, the first power plane module 110a may be disposed on the package substrate 102. In one aspect, the first power plane module 110a may be connected to the package substrate 102 through the plurality of package bumps 106a. In an aspect, the first power plane module 110a may be disposed at a first periphery of the base die 108.

In aspects of the present disclosure, the first power plane module 110a may include a plurality of vertically interleaved metal layers (112a, 112b) electrically coupled to the package substrate 102 at the bottom surface of the first power plane module 110a. As used herein, the term "vertically interleaved metal layers" may refer to metal layers that are parallel to the side surfaces of the base die 108. In the aspect shown in FIG. 1A, the first power plane module 110a may include a first metal layer 112a and a second metal layer 112b interleaved with dielectric layers. In one aspect, the first power plane module 110a may include a first mold portion 113a. The first mold portion 113a may include a molding material, such as an epoxy polymer, a silicone polymer, or a polyimide material. The first mold portion 113a may have a first mold surface coupled to the package substrate 102. The first mold portion 113a may have a second mold surface coupled to the semiconductor device 122. In one aspect, the first and second metal layers (112a, 112b) may be embedded in the molding material of the first mold portion 113a.
The first and second metal layers (112a, 112b) may extend through the first mold surface and the second mold surface of the first mold portion 113a.

In aspects of the present disclosure, the plurality of package bumps 106a may provide electrical connections between the plurality of interleaved metal layers (112a, 112b) of the first power plane module 110a and the package substrate 102.

In aspects of the present disclosure, the semiconductor package 100 may include a second power plane module 110b. In one aspect, the second power plane module 110b may be disposed on the package substrate 102. In one aspect, the second power plane module 110b may be connected to the package substrate 102 through the plurality of package bumps 106b. In an aspect, the second power plane module 110b may be disposed at a second periphery of the base die 108.

In aspects of the present disclosure, the second power plane module 110b may include a plurality of vertically interleaved metal layers (112c, 112d, 112e) electrically coupled to the package substrate 102 at the bottom surface of the second power plane module 110b. In the aspect shown in FIG. 1A, the second power plane module 110b may include a third metal layer 112c, a fourth metal layer 112d, and a fifth metal layer 112e interleaved with dielectric layers. In one aspect, the second power plane module 110b can include a second mold portion 113b. The second mold portion 113b may include a molding material, such as an epoxy polymer, a silicone polymer, or a polyimide material. The second mold portion 113b may have a first mold surface coupled to the package substrate 102. The second mold portion 113b may have a second mold surface coupled to the semiconductor device 122. In one aspect, the third metal layer 112c, the fourth metal layer 112d, and the fifth metal layer 112e may be embedded in the molding material of the second mold portion 113b. The third metal layer 112c, the fourth metal layer 112d, and the fifth metal layer 112e may extend through the first mold surface and the second mold surface of the second mold portion 113b.

In aspects of the present disclosure, the plurality of package bumps 106b may provide electrical connections between the plurality of interleaved metal layers (112c, 112d, 112e) of the second power plane module 110b and the package substrate 102.

In aspects of the present disclosure, the semiconductor package 100 may include a semiconductor device 122. In one aspect, the semiconductor device 122 may be fabricated from any suitable semiconductor (e.g., silicon or gallium arsenide). The semiconductor device 122 may be a semiconductor die, chip, or chiplet, e.g., a system-on-chip (SOC), central processing unit (CPU), platform controller hub (PCH)/chiplet, memory device, field programmable gate array (FPGA) device, or graphics processing unit (GPU). In the aspect shown in FIG. 1A, the semiconductor device 122 may be a set of three chiplets (124a, 124b, 124c). In one aspect, the first chiplet 124a may include a CPU, the second chiplet 124b may include a PCH, and the third chiplet 124c may include a GPU.

In aspects of the present disclosure, the semiconductor device 122 may be disposed at least partially on the base die 108. The semiconductor device 122 may also be disposed at least partially on the first power plane module 110a. The semiconductor device 122 may also be disposed at least partially on the second power plane module 110b. In one aspect, the semiconductor device 122 may have a first section disposed on the base die 108.
The semiconductor device 122 may have a second section disposed on the first power plane module 110a. The semiconductor device 122 may also have a third section disposed on the second power plane module 110b. In the aspect shown in FIG. 1A, the first chiplet 124a of the semiconductor device 122 may be disposed on the base die 108. The second chiplet 124b of the semiconductor device 122 may be disposed in part on the base die 108 and may be disposed in part on the first power plane module 110a. The third chiplet 124c of the semiconductor device 122 may be disposed in part on the base die 108 and may be disposed in part on the second power plane module 110b.

In aspects of the present disclosure, at least a portion of the semiconductor device 122 may be electrically coupled to the package substrate 102 through the at least one TSV 118.

In aspects of the present disclosure, at least a portion of the semiconductor device 122 may be electrically coupled to the package substrate 102 through the first and second metal layers (112a, 112b) at the top surface of the first power plane module 110a. In one aspect, each of the first and second metal layers (112a, 112b) may be configurable. Each of the first and second metal layers (112a, 112b) may be configured based on the power delivery requirements of the semiconductor package 100 to alleviate the power delivery challenges of 2.5D and/or 3D stacked integrated circuit (IC) packaging architectures. For example, the size, width and/or volume of each of the first and second metal layers (112a, 112b) may be configured to meet power delivery requirements.

In aspects of the present disclosure, the base die 108 has a cross-section in the x-z plane. In one aspect, the first and second metal layers (112a, 112b) may extend in a direction to form respective planes (112a', 112b'). In the aspects shown in FIGS. 1A and 1B, the first and second metal layers (112a, 112b) may extend along the periphery of the base die 108 on the y-axis. In other words, the first and second metal layers (112a, 112b) may form a first conductive plane 112a' and a second conductive plane 112b', respectively, which may traverse the cross-section of the base die 108, thereby forming the first power plane module 110a. In an aspect, the plane formed by the first metal layer 112a may include a first voltage reference plane 112a'. The plane formed by the second metal layer 112b may include a second voltage reference plane 112b'.

In aspects of the present disclosure, the first and second metal layers (112a, 112b) may have the same length or different lengths (on the y-axis). The first and second metal layers (112a, 112b) may extend along the y-axis of the base die 108 and parallel to each other. In one aspect, each of the first and second metal layers (112a, 112b) may extend 30% to 120% of the length of the base die 108. For example, each of the first and second metal layers (112a, 112b) may include a length ranging from 5 millimeters (mm) to 20 mm.

In aspects of the present disclosure, the first metal layer 112a may include a first chiplet-side contact pad 114a and a first package-side contact pad 115a. In one aspect, the first chiplet-side contact pad 114a may be coupled to the semiconductor device 122. The first package-side contact pad 115a may be coupled to the package substrate 102. Similarly, the second metal layer 112b may include a second chiplet-side contact pad 114b and a second package-side contact pad 115b.
In an aspect, the second chiplet side contact pad 114b may be coupled to the semiconductor device 122. The second package side contact pad 115b may be coupled to the package substrate 102. In the aspect shown in FIG. 1A, each of the first and second metal layers (112a, 112b) may be arranged in a vertical orientation such that the shortest interconnection path may be formed between the second chiplet 124b and the package substrate 102, thereby forming a first vertical power plane module 110a. In an aspect, the vertical plane formed by the first metal layer 112a may include a first vertical voltage reference plane 112a'. The vertical plane formed by the second metal layer 112b may include a second vertical voltage reference plane 112b'.

Advantages of the present disclosure may include improved Imax capacity (device reliability) through peripheral vertical power plane modules. Compared to discrete cylindrical interconnects with constrained geometries, such as through-mold vias (TMVs) or through-silicon vias (TSVs) through the base die or silicon interposer, reduced interconnect resistance may be achieved by increasing the interconnect volume (i.e., the vertical planar interconnect configuration between the chiplet and the package substrate).

In aspects of the present disclosure, the first chiplet side contact pad 114a and the first package side contact pad 115a may have different widths (on the x-axis). The first chiplet side contact pad 114a may have a width of the first dimension. The first package side contact pad 115a may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the first chiplet side contact pad 114a may have a smaller width than the first package side contact pad 115a. The first dimension may include a width geometry ranging from about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the second chiplet side contact pad 114b and the second package side contact pad 115b may have different widths (on the x-axis). The second chiplet side contact pad 114b may have a width of the first dimension. The second package side contact pad 115b may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the second chiplet side contact pad 114b may have a smaller width than the second package side contact pad 115b. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the first metal layer 112a and the second metal layer 112b may have the same height (on the z-axis). The first and second metal layers (112a, 112b) may include height geometries ranging from about 200 μm to 800 μm.
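To make the resistance advantage described above concrete, the following sketch compares the DC resistance of one vertical plane interconnect against an array of TSVs using R = ρL/A. This is an illustration only: the plane height and length are hypothetical values picked from the ranges quoted in this disclosure, and the TSV geometry is a typical published value, not taken from this document.

```python
import math

RHO_CU = 1.68e-8  # ohm*m, handbook resistivity of copper (assumption: copper conductors)

def resistance(length_m: float, area_m2: float) -> float:
    """DC resistance of a straight conductor: R = rho * L / A."""
    return RHO_CU * length_m / area_m2

# Vertical plane: 500 um tall current path, 50 um plane thickness, 10 mm plane
# length (thickness is hypothetical; height and length lie within the quoted ranges).
plane_r = resistance(500e-6, 50e-6 * 10e-3)

# TSV array: 100 TSVs in parallel, each 10 um in diameter and 100 um long
# (typical published TSV geometry, purely illustrative here).
tsv_array_r = resistance(100e-6, math.pi * (5e-6) ** 2) / 100

print(f"plane: {plane_r * 1e6:.1f} uOhm vs 100-TSV array: {tsv_array_r * 1e6:.1f} uOhm")
```

With these assumed numbers the plane path resolves to roughly 17 μΩ against roughly 210 μΩ for the TSV array, despite the plane's longer current path, because its conducting cross-section is orders of magnitude larger.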
In one aspect, the different sizes of the first and second chiplet side contact pads (114a, 114b) and the first and second package side contact pads (115a, 115b) may be achieved by a non-uniform vertical power plane thickness. In the aspect shown in FIG. 1A, a vertical stepped reference plane may be provided in an "L"-shaped configuration having a first plane thickness adjacent to the chiplet side of the module and a second plane thickness, greater than the first plane thickness, adjacent to the package side of the module. In one aspect, a similar effective volume of the conductor planes may be achieved by configuring the x-axis and z-axis ratio between the first and second plane thicknesses, e.g., such that the segment with the first plane thickness and the segment with the second plane thickness have the same effective plane volume.

In aspects of the present disclosure, a first via hole 116a may be formed on the first metal layer 112a. A second via hole 116b may be formed on the second metal layer 112b. In one aspect, the first passive component 120a may be disposed between the first and second metal layers (112a, 112b). The first passive component 120a may include a capacitor (e.g., a silicon capacitor or a ceramic capacitor (e.g., a multilayer ceramic capacitor (MLCC))), a resistor, a diode, or an inductor. In one aspect, the first passive component 120a may be used to improve the power integrity of the semiconductor package 100. In the aspect shown in FIG. 1A, the body length of the first passive component 120a (e.g., a decoupling capacitor) may be arranged along the z-axis. In an aspect, the first terminal of the first passive component 120a may be electrically coupled to the first metal layer 112a through the first via 116a. The first terminal of the first passive component 120a may include a negative terminal. The second terminal of the first passive component 120a may be electrically coupled to the second metal layer 112b through the second via 116b. The second terminal of the first passive component 120a may include a positive terminal. In other words, the first passive component 120a may be coupled to the first vertical voltage reference plane 112a' and the second vertical voltage reference plane 112b' through the first and second vias (116a, 116b), respectively. This may result in a reduction in AC noise, as decoupling passive components close to the semiconductor device 122 may reduce power supply induced jitter, which may lead to improved electrical performance.

In one aspect, the first vertical voltage reference plane 112a' may be associated with a ground reference voltage (Vss). In one aspect, the second vertical voltage reference plane 112b' may be associated with a power supply reference voltage (Vcc).
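The jitter-reduction argument above can be illustrated with a standard series R-L-C model of a decoupling capacitor: moving the capacitor into the module, close to the die, shrinks the parasitic loop inductance, which lowers the impedance the die sees at high frequency. The component values below are hypothetical and are not taken from the disclosure.

```python
import math

def z_mag(f_hz: float, c_farad: float, esl_henry: float, esr_ohm: float) -> float:
    """|Z| of a series R-L-C decoupling model at frequency f."""
    reactance = 2 * math.pi * f_hz * esl_henry - 1 / (2 * math.pi * f_hz * c_farad)
    return math.hypot(esr_ohm, reactance)

C_DECAP, ESR = 100e-9, 10e-3  # 100 nF MLCC with 10 mOhm ESR (hypothetical part)
# Assumed loop inductance: ~1 nH for a far, board-level placement vs ~0.2 nH
# for an in-module placement next to the vertical planes.
for esl in (1.0e-9, 0.2e-9):
    z = z_mag(100e6, C_DECAP, esl, ESR)
    print(f"ESL = {esl * 1e9:.1f} nH -> |Z| at 100 MHz = {z * 1e3:.0f} mOhm")
```

Under these assumptions the in-module placement cuts the 100 MHz impedance from roughly 610 mΩ to roughly 110 mΩ, which is the mechanism behind the reduced power supply induced jitter.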
In aspects of the present disclosure, at least a portion of the semiconductor device 122 may be electrically coupled to the package substrate 102 at the top surface of the second power plane module 110b through the third, fourth and fifth metal layers (112c, 112d, 112e). In one aspect, each of the third, fourth and fifth metal layers (112c, 112d, 112e) may be configurable. Each of the third, fourth, and fifth metal layers (112c, 112d, 112e) may be configured based on the power delivery requirements of the semiconductor package 100 to ease the power transfer challenges of 2.5D and/or 3D stacked integrated circuit (IC) packaging architectures. For example, the size, width and/or volume of each of the third, fourth and fifth metal layers (112c, 112d, 112e) may be configured to meet power transfer requirements.

In aspects of the present disclosure, the base die 108 has a cross-section in the x-z plane. In one aspect, the third, fourth, and fifth metal layers (112c, 112d, 112e) may extend in a direction to form corresponding planes that may traverse (i.e., on the y-axis) the cross-section of the base die 108. In the aspects shown in FIGS. 1A and 1B, the third, fourth and fifth metal layers (112c, 112d, 112e) may extend along the periphery of the base die 108 on the y-axis. In other words, the third, fourth and fifth metal layers (112c, 112d, 112e) may form a third conductive plane 112c', a fourth conductive plane 112d' and a fifth conductive plane 112e', respectively, which may traverse this cross-section of the base die 108, thereby forming the second power plane module 110b. In an aspect, the plane formed by the third metal layer 112c may include a third voltage reference plane 112c'. The plane formed by the fourth metal layer 112d may include a fourth voltage reference plane 112d'. The plane formed by the fifth metal layer 112e may include a fifth voltage reference plane 112e'.

In aspects of the present disclosure, the third, fourth and fifth metal layers (112c, 112d, 112e) may have the same length or different lengths (on the y-axis). The third, fourth and fifth metal layers (112c, 112d, 112e) may extend along the periphery of the base die 108 and parallel to each other on the y-axis. In one aspect, each of the third, fourth, and fifth metal layers (112c, 112d, 112e) may extend 30% to 120% of the length of the base die 108. For example, each of the third, fourth and fifth metal layers (112c, 112d, 112e) may include a length ranging from 5 mm to 20 mm.

In aspects of the present disclosure, the third metal layer 112c may include a third chiplet side contact pad 114c and a third package side contact pad 115c. In an aspect, the third chiplet side contact pad 114c may be coupled to the semiconductor device 122. The third package side contact pad 115c may be coupled to the package substrate 102. The fourth metal layer 112d may include a fourth chiplet side contact pad 114d and a fourth package side contact pad 115d. In an aspect, the fourth chiplet side contact pad 114d may be coupled to the semiconductor device 122. The fourth package side contact pad 115d may be coupled to the package substrate 102. The fifth metal layer 112e may include a fifth chiplet side contact pad 114e and a fifth package side contact pad 115e. In an aspect, the fifth chiplet side contact pad 114e may be coupled to the semiconductor device 122. The fifth package side contact pad 115e may be coupled to the package substrate 102. In the aspect shown in FIG. 1A, each of the third, fourth and fifth metal layers (112c, 112d, 112e) may be arranged in a vertical orientation such that the shortest interconnection path may be formed between the third chiplet 124c and the package substrate 102, thereby forming the second vertical power plane module 110b. In an aspect, the vertical plane formed by the third metal layer 112c may include a third vertical voltage reference plane 112c'. The vertical plane formed by the fourth metal layer 112d may include a fourth vertical voltage reference plane 112d'. The vertical plane formed by the fifth metal layer 112e may include a fifth vertical voltage reference plane 112e'.

In aspects of the present disclosure, the third chiplet side contact pad 114c and the third package side contact pad 115c may have different widths (on the x-axis). The third chiplet side contact pad 114c may have a width of the first dimension. The third package side contact pad 115c may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the third chiplet side contact pad 114c may have a smaller width than the third package side contact pad 115c. The first dimension may include a width geometry ranging from about 20 μm to 100 μm.
The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the fourth chiplet side contact pad 114d and the fourth package side contact pad 115d may have different widths (on the x-axis). The fourth chiplet side contact pad 114d may have a width of the first dimension. The fourth package side contact pad 115d may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the fourth chiplet side contact pad 114d may have a smaller width than the fourth package side contact pad 115d. The first dimension may include a width geometry ranging from about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the fifth chiplet side contact pad 114e and the fifth package side contact pad 115e may have different widths (on the x-axis). The fifth chiplet side contact pad 114e may have a width of the first dimension. The fifth package side contact pad 115e may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the fifth chiplet side contact pad 114e may have a smaller width than the fifth package side contact pad 115e. The first dimension may include a width geometry ranging from about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the third, fourth and fifth metal layers (112c, 112d, 112e) may have the same height (on the z-axis). The third, fourth and fifth metal layers (112c, 112d, 112e) may include height geometries ranging from about 200 μm to 800 μm.

In aspects of the present disclosure, a third via hole 116c may be formed on the third metal layer 112c. A fourth via hole 116d may be formed on the first surface of the fourth metal layer 112d. In an aspect, the second passive component 120b may be disposed between the third and fourth metal layers (112c, 112d). The second passive component 120b may include a capacitor (e.g., a silicon capacitor or a ceramic capacitor (e.g., a multilayer ceramic capacitor (MLCC))), a resistor, a diode, or an inductor. In an aspect, the first terminal of the second passive component 120b may be electrically coupled to the third metal layer 112c through the third via 116c. The second terminal of the second passive component 120b may be electrically coupled to the fourth metal layer 112d through the fourth via 116d. In other words, the second passive component 120b may be coupled to the third vertical voltage reference plane 112c' and the fourth vertical voltage reference plane 112d' through the third and fourth vias (116c, 116d), respectively.

In aspects of the present disclosure, a fifth via hole 116e may be formed on the other surface of the fourth metal layer 112d. A sixth via hole 116f may be formed on the fifth metal layer 112e. In an aspect, a third passive component 120c may be disposed between the fourth and fifth metal layers (112d, 112e). The third passive component 120c may include a capacitor (e.g., a silicon capacitor or a ceramic capacitor (e.g., a multilayer ceramic capacitor (MLCC))), a resistor, a diode, or an inductor. In an aspect, the first terminal of the third passive component 120c may be electrically coupled to the fourth metal layer 112d through the fifth via 116e.
The second terminal of the third passive component 120c may be electrically coupled to the fifth metal layer 112e through the sixth via 116f. In other words, the third passive component 120c may be coupled to the fourth vertical voltage reference plane 112d' and the fifth vertical voltage reference plane 112e' through the fifth and sixth vias (116e, 116f), respectively.

In one aspect, the third vertical voltage reference plane 112c' may be associated with a power supply reference voltage (Vcc). In one aspect, the fourth vertical voltage reference plane 112d' may be associated with a ground reference voltage (Vss). In one aspect, the fifth vertical voltage reference plane 112e' may be associated with a power supply reference voltage (Vcc). In other words, the second vertical power plane module 110b may include a vertical ground reference voltage plane (Vss) sandwiched between two vertical power reference voltage planes (Vcc).

In one aspect, the third metal layer 112c and the fifth metal layer 112e may be configured as respective vertical power supply reference voltage (Vcc) connections between the package substrate 102 and the semiconductor device 122. The corresponding power supply reference voltage (Vcc) may be approximately between 0.8 volts (V) and 3.3 V. For example, the third vertical voltage reference plane (Vcc) may be about 0.8 V, and the fifth vertical voltage reference plane (Vcc) may be about 1.0 V.

In aspects of the present disclosure, a plurality of microbumps 117 may be disposed on the base die 108. In one aspect, a plurality of microbumps 117a may be disposed on the first power plane module 110a. In one aspect, a plurality of microbumps 117b may be disposed on the second power plane module 110b. The plurality of microbumps 117a may provide electrical connections between the first power plane module 110a and the second chiplet 124b. The plurality of microbumps 117 may also provide electrical connections between the base die 108 and the first chiplet 124a. The plurality of microbumps 117b may also provide electrical connections between the second power plane module 110b and the third chiplet 124c.

In aspects of the present disclosure, the widths of the plurality of microbumps (117a, 117b) on the first and second power plane modules (110a, 110b) may be smaller than the widths of the corresponding plurality of package bumps (106a, 106b). In one aspect, the corresponding chiplet side contact pads (114a, 114b, 114c, 114d, 114e) may be sized according to the widths of the plurality of microbumps (117a, 117b) of the first and second power plane modules (110a, 110b). In one aspect, the corresponding package side contact pads (115a, 115b, 115c, 115d, 115e) may be sized according to the widths of the corresponding package bumps (106a, 106b) of the first and second power plane modules (110a, 110b).
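The pad sizing rules above (chiplet-side pads matched to the finer microbumps, package-side pads matched to the larger C4 bumps, with the package-side width at least 1.5 times the chiplet-side width) can be captured as a small design-rule check. This is a minimal sketch under those quoted constraints; the function name and example values are illustrative, not from the disclosure.

```python
def pad_pair_ok(chiplet_side_um: float, package_side_um: float) -> bool:
    """Check one metal layer's pad pair against the quoted design rules."""
    first_dim_ok = 20.0 <= chiplet_side_um <= 100.0      # first dimension ~20-100 um
    ratio_ok = package_side_um >= 1.5 * chiplet_side_um  # second dimension >= 1.5x first
    return first_dim_ok and ratio_ok

print(pad_pair_ok(40.0, 70.0))  # True: 40 um in range and 70 >= 1.5 * 40
print(pad_pair_ok(40.0, 50.0))  # False: 50 < 1.5 * 40
```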
In aspects of the present disclosure, the first chiplet 124a, the second chiplet 124b, and the third chiplet 124c may communicate with each other through a redistribution layer (RDL) 119 within the base die 108. In one aspect, the RDL 119 may include a plurality of conductive traces interleaved with a plurality of dielectric layers. In one aspect, the RDL 119 may be coupled to the TSV 118 within the base die 108.

FIG. 1B shows a top view layout of the semiconductor package 100 in accordance with the aspects shown in FIG. 1A. The package substrate 102 may include a perimeter or footprint. The base die 108 may include a footprint. The first chiplet 124a may include a footprint. The second chiplet 124b may include a footprint. The third chiplet 124c may include a footprint. The first power plane module 110a may include a footprint. The second power plane module 110b may include a footprint. In one aspect, the semiconductor package 100 may also include one or more additional chiplets 124n disposed adjacent to the base die 108 and adjacent to the second chiplet 124b.

In the aspect shown in FIG. 1B, the footprints of the base die 108, the first chiplet 124a, the second chiplet 124b, the third chiplet 124c, the additional chiplet 124n, the first power plane module 110a, and the second power plane module 110b fall within the perimeter of the package substrate 102.

As described above, the first chiplet 124a may be disposed on the base die 108. The second chiplet 124b may be disposed in part on the base die 108 and may be disposed in part on the first power plane module 110a. The third chiplet 124c may be disposed in part on the base die 108 and may be disposed in part on the second power plane module 110b. Thus, as shown in FIG. 1B, the footprint of the first chiplet 124a may fall within the footprint of the base die 108. The footprint of the second chiplet 124b may include a portion that may overlap the base die 108 and another portion that may overlap the first power plane module 110a. The footprint of the third chiplet 124c may include a portion that may overlap the base die 108 and another portion that may overlap the second power plane module 110b. Similar to the arrangement of the second chiplet 124b, the additional chiplet 124n may include a footprint that may have a portion that may overlap the base die 108 and another portion that may overlap the first power plane module 110a.

The first power plane module 110a may include a first voltage reference plane 112a' and a second voltage reference plane 112b', which may be aligned on the y-axis and may be parallel to each other. The first passive component 120a may be disposed between the first voltage reference plane 112a' and the second voltage reference plane 112b'. In the aspect shown in FIG. 1B, there may be an array of passive components disposed between the first voltage reference plane 112a' and the second voltage reference plane 112b'.

The second power plane module 110b may include a third voltage reference plane 112c', a fourth voltage reference plane 112d', and a fifth voltage reference plane 112e', which may be aligned on the y-axis and may be parallel to each other. The second passive component 120b may be disposed between the third voltage reference plane 112c' and the fourth voltage reference plane 112d'. The third passive component 120c may be disposed between the fourth voltage reference plane 112d' and the fifth voltage reference plane 112e'. In the aspect shown in FIG. 1B, there may be an array of passive components disposed between the third voltage reference plane 112c' and the fourth voltage reference plane 112d'. In another aspect, there may be an array of passive components disposed between the fourth voltage reference plane 112d' and the fifth voltage reference plane 112e'.

FIG. 2A shows a cross-sectional view of a semiconductor package 200 in accordance with aspects of the present disclosure. The cross-sectional view is taken along line A-A' of FIG. 2B.

In the aspect shown in FIG. 2A, the semiconductor package 200 may include a package substrate 202.
The package substrate 202 may include contact pads, electrical interconnects, wiring, and other features, which may or may not be shown in any of the figures, and which are conventional features known to those skilled in the art. The various couplings of the components may use conventional methods, including solder bonding, thermocompression bonding, or other metal diffusion methods. The package substrate 202 may have one or more rigid core layers for improved structural stability or a coreless substrate package for reduced form factor. In other aspects, the package substrate 202 may be part of a larger substrate that supports additional semiconductor packages and/or components.

In one aspect, the semiconductor package 200 may include a plurality of solder balls 204. The package substrate 202 may be connected to a motherboard (not shown) through the plurality of solder balls 204. The plurality of solder balls 204 may also provide electrical connections between the package substrate 202 and the motherboard. In one aspect, the stacked semiconductor package 200 may include a plurality of package bumps 206 disposed on the package substrate 202. The plurality of package bumps 206 may be controlled collapse chip attach (C4) bumps.

In aspects of the present disclosure, the semiconductor package 200 may include a base die 208. The base die 208 may be an active interposer or a passive interposer. In one aspect, the base die 208 may be disposed on the package substrate 202. In one aspect, the base die 208 may be connected to the package substrate 202 through the plurality of package bumps 206. The plurality of package bumps 206 may also provide electrical connections between the base die 208 and the package substrate 202.

In aspects of the present disclosure, the base die 208 may include at least one through-silicon via (TSV) 218. The plurality of package bumps 206 may provide electrical connections between the at least one TSV 218 and the package substrate 202.

In aspects of the present disclosure, the semiconductor package 200 may include a first power plane module 210a. In one aspect, the first power plane module 210a may be disposed on the package substrate 202. In one aspect, the first power plane module 210a may be connected to the package substrate 202 through the plurality of package bumps 206a. In an aspect, the first power plane module 210a may be disposed at a first periphery of the base die 208.

In aspects of the present disclosure, the first power plane module 210a may include a plurality of vertically interleaved metal layers (212a, 212b) electrically coupled to the package substrate 202 at the bottom surface of the first power plane module 210a. In the aspect shown in FIG. 2A, the first power plane module 210a may include a first metal layer 212a and a second metal layer 212b interleaved with dielectric layers. In one aspect, the first power plane module 210a may include a first mold portion 213a. The first mold portion 213a may include a molding material, such as an epoxy polymer, a silicone polymer, or a polyimide material. The first mold portion 213a may have a first mold surface coupled to the package substrate 202. The first mold portion 213a may have a second mold surface coupled to the semiconductor device 222. In one aspect, the first and second metal layers (212a, 212b) may be embedded in the molding material of the first mold portion 213a.
The first and second metal layers (212a, 212b) may extend through the first mold surface and the second mold surface of the first mold portion 213a.

In aspects of the present disclosure, the plurality of package bumps 206a may provide electrical connections between the plurality of interleaved metal layers (212a, 212b) of the first power plane module 210a and the package substrate 202.

In aspects of the present disclosure, the semiconductor package 200 may include a second power plane module 210b. In one aspect, the second power plane module 210b may be disposed on the package substrate 202. In one aspect, the second power plane module 210b may be connected to the package substrate 202 through the plurality of package bumps 206b. In one aspect, the second power plane module 210b may be disposed at a second periphery of the base die 208.

In aspects of the present disclosure, the second power plane module 210b may include a plurality of vertically interleaved metal layers (212c, 212d, 212e) electrically coupled to the package substrate 202 at the bottom surface of the second power plane module 210b. In the aspect shown in FIG. 2A, the second power plane module 210b may include a third metal layer 212c, a fourth metal layer 212d, and a fifth metal layer 212e interleaved with dielectric layers. In one aspect, the second power plane module 210b may include a second mold portion 213b. The second mold portion 213b may include a molding material, such as an epoxy polymer, a silicone polymer, or a polyimide material. The second mold portion 213b may have a first mold surface coupled to the package substrate 202. The second mold portion 213b may have a second mold surface coupled to the semiconductor device 222. In one aspect, the third metal layer 212c, the fourth metal layer 212d, and the fifth metal layer 212e may be embedded in the molding material of the second mold portion 213b. The third metal layer 212c, the fourth metal layer 212d, and the fifth metal layer 212e may extend through the first mold surface and the second mold surface of the second mold portion 213b.

In aspects of the present disclosure, the plurality of package bumps 206b may provide electrical connections between the plurality of interleaved metal layers (212c, 212d, 212e) of the second power plane module 210b and the package substrate 202.

In aspects of the present disclosure, the semiconductor package 200 may include a semiconductor device 222. In one aspect, the semiconductor device 222 may be fabricated from any suitable semiconductor (e.g., silicon or gallium arsenide). The semiconductor device 222 may be a semiconductor die, chip, or chiplet, e.g., a system-on-chip (SOC), central processing unit (CPU), platform controller hub (PCH) chiplet, memory device, field programmable gate array (FPGA) device, or graphics processing unit (GPU). In the aspect shown in FIG. 2A, the semiconductor device 222 may be a set of three chiplets (224a, 224b, 224c). In one aspect, the first chiplet 224a may include a CPU, the second chiplet 224b may include a PCH, and the third chiplet 224c may include a GPU.

In aspects of the present disclosure, the semiconductor device 222 may be disposed at least partially on the base die 208. The semiconductor device 222 may also be disposed at least partially on the first power plane module 210a. The semiconductor device 222 may also be disposed at least partially on the second power plane module 210b. In one aspect, the semiconductor device 222 may have a first section disposed on the base die 208.
The semiconductor device 222 may have a second section disposed on the first power plane module 210a. The semiconductor device 222 may also have a third section disposed on the second power plane module 210b. In the aspect shown in FIG. 2A, the first chiplet 224a of the semiconductor device 222 may be disposed on the base die 208. The second chiplet 224b of the semiconductor device 222 may be disposed in part on the base die 208 and may be disposed in part on the first power plane module 210a. The third chiplet 224c of the semiconductor device 222 may be disposed in part on the base die 208 and may be disposed in part on the second power plane module 210b.

In aspects of the present disclosure, at least a portion of the semiconductor device 222 may be electrically coupled to the package substrate 202 through at least one TSV 218.

In aspects of the present disclosure, at least a portion of the semiconductor device 222 may be electrically coupled to the package substrate 202 through the first and second metal layers (212a, 212b) at the top surface of the first power plane module 210a. In one aspect, each of the first and second metal layers (212a, 212b) may be configurable. Each of the first and second metal layers (212a, 212b) may be configured based on the power transfer requirements of the semiconductor package 200 to alleviate the power transfer challenges of 2.5D and/or 3D stacked integrated circuit (IC) packaging architectures. For example, the size, width and/or volume of each of the first and second metal layers (212a, 212b) may be configured to meet power transfer requirements.

In aspects of the present disclosure, the base die 208 has a cross-section in the x-z plane. In one aspect, the first and second metal layers (212a, 212b) may extend in a direction to form respective planes (212a', 212b'). In the aspects shown in FIGS. 2A and 2B, the first and second metal layers (212a, 212b) may extend along the periphery of the base die 208 on the y-axis. In other words, the first and second metal layers (212a, 212b) may form a first conductive plane 212a' and a second conductive plane 212b', respectively, which may traverse the cross-section of the base die 208, thereby forming the first power plane module 210a. In an aspect, the plane formed by the first metal layer 212a may include a first voltage reference plane 212a'. The plane formed by the second metal layer 212b may include a second voltage reference plane 212b'.

In aspects of the present disclosure, the first and second metal layers (212a, 212b) may have the same length or different lengths (on the y-axis). The first and second metal layers (212a, 212b) may extend along the y-axis of the base die 208 and parallel to each other. In one aspect, each of the first and second metal layers (212a, 212b) may extend 30% to 120% of the length of the base die 208. For example, each of the first and second metal layers (212a, 212b) may include a length ranging from 5 mm to 20 mm.

In aspects of the present disclosure, the first metal layer 212a may include a first chiplet-side contact pad 214a and a first package-side contact pad 215a. In an aspect, the first chiplet side contact pad 214a may be coupled to the semiconductor device 222. The first package side contact pad 215a may be coupled to the package substrate 202. Similarly, the second metal layer 212b may include a second chiplet side contact pad 214b and a second package side contact pad 215b.
In an aspect, the second chiplet side contact pad 214b may be coupled to the semiconductor device 222. The second package side contact pad 215b may be coupled to the package substrate 202. In the aspect shown in FIG. 2A, each of the first and second metal layers (212a, 212b) may be arranged in a vertical orientation such that the shortest interconnection path may be formed between the second chiplet 224b and the package substrate 202, thereby forming a first vertical power plane module 210a. In an aspect, the vertical plane formed by the first metal layer 212a may include a first vertical voltage reference plane 212a'. The vertical plane formed by the second metal layer 212b may include a second vertical voltage reference plane 212b'.

In aspects of the present disclosure, the first chiplet side contact pad 214a and the first package side contact pad 215a may have different widths (on the x-axis). The first chiplet side contact pad 214a may have a width of the first dimension. The first package side contact pad 215a may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the first chiplet side contact pad 214a may have a smaller width than the first package side contact pad 215a. The first dimension may include a width geometry ranging from about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the second chiplet side contact pad 214b and the second package side contact pad 215b may have different widths (on the x-axis). The second chiplet side contact pad 214b may have a width of the first dimension. The second package side contact pad 215b may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the second chiplet side contact pad 214b may have a smaller width than the second package side contact pad 215b. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the first metal layer 212a and the second metal layer 212b may have the same height (on the z-axis). The first and second metal layers (212a, 212b) may include height geometries ranging from about 200 μm to 800 μm.

In one aspect, the different sizes of the first and second chiplet side contact pads (214a, 214b) and the first and second package side contact pads (215a, 215b) may be achieved by a non-uniform vertical power plane thickness. In the aspect shown in FIG. 2A, a vertical stepped reference plane may be provided in an "L"-shaped configuration having a first plane thickness adjacent to the chiplet side of the module and a second plane thickness, greater than the first plane thickness, adjacent to the package side of the module. In one aspect, a similar effective volume of the conductor planes may be achieved by configuring the x-axis and z-axis ratio between the first and second plane thicknesses, e.g., such that the segment with the first plane thickness and the segment with the second plane thickness have the same effective plane volume.
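The equal-effective-volume condition described above can be stated compactly. In the notation below (the symbols are ours, not the disclosure's), $t_1$ and $t_2$ are the first and second plane thicknesses along the x-axis, $h_1$ and $h_2$ are the z-axis heights of the chiplet-side and package-side segments, and $\ell$ is the common y-axis length of the plane:

$$t_1 \, h_1 \, \ell \;=\; t_2 \, h_2 \, \ell \quad\Longrightarrow\quad \frac{h_1}{h_2} \;=\; \frac{t_2}{t_1}$$

For example, doubling the package-side thickness ($t_2 = 2t_1$) while halving the package-side segment height ($h_2 = h_1/2$) keeps the effective plane volume, and hence the current-carrying cross-section budget of the stepped plane, unchanged.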
In aspects of the present disclosure, a first via hole 216a may be formed on the first metal layer 212a. A second via hole 216a' may be formed on the second metal layer 212b, and the second via hole 216a' may be opposite the first via hole 216a. A third via hole 216b may be formed on the first metal layer 212a. A fourth via hole 216b' may be formed on the second metal layer 212b, and the fourth via hole 216b' may be opposite the third via hole 216b. In one aspect, the first passive component 220a may be disposed between the first and second metal layers (212a, 212b). In one aspect, a second passive component 220b may be disposed between the first and second metal layers (212a, 212b). The first passive component 220a and the second passive component 220b may include capacitors (e.g., silicon capacitors or ceramic capacitors (e.g., multilayer ceramic capacitors (MLCCs))), resistors, diodes, or inductors. In one aspect, the first and second passive components (220a, 220b) may be used to improve the power integrity of the semiconductor package 200. In one aspect, the body lengths of the first and second passive components (220a, 220b) may be arranged along the y-axis to achieve a miniaturized or low z-profile vertical power plane module. In one aspect, the first terminal of the first passive component 220a may be electrically coupled to the first metal layer 212a through the first via 216a. The second terminal of the first passive component 220a may be electrically coupled to the second metal layer 212b through the second via 216a'. In other words, the first passive component 220a may be coupled to the first vertical voltage reference plane 212a' and the second vertical voltage reference plane 212b' through the first and second vias (216a, 216a'), respectively. In one aspect, the first terminal of the second passive component 220b may be electrically coupled to the second metal layer 212b through the fourth via 216b'. The second terminal of the second passive component 220b may be electrically coupled to the first metal layer 212a through the third via 216b. In other words, the second passive component 220b may be coupled to the first vertical voltage reference plane 212a' and the second vertical voltage reference plane 212b' through the third and fourth vias (216b, 216b'), respectively. This may result in a reduction in AC noise, as decoupling passive components close to the semiconductor device 222 may reduce power supply induced jitter, which may lead to improved electrical performance.

In one aspect, the first vertical voltage reference plane 212a' may be associated with a ground reference voltage (Vss). In one aspect, the second vertical voltage reference plane 212b' may be associated with a power supply reference voltage (Vcc).

In aspects of the present disclosure, at least a portion of the semiconductor device 222 may be electrically coupled to the package substrate 202 at the top surface of the second power plane module 210b through the third, fourth, and fifth metal layers (212c, 212d, 212e). In one aspect, each of the third, fourth and fifth metal layers (212c, 212d, 212e) may be configurable. Each of the third, fourth and fifth metal layers (212c, 212d, 212e) may be configured based on the power delivery requirements of the semiconductor package 200 to ease the power transfer challenges of 2.5D and/or 3D stacked integrated circuit (IC) packaging architectures. For example, the size, width and/or volume of each of the third, fourth and fifth metal layers (212c, 212d, 212e) may be configured to meet power transfer requirements.

In aspects of the present disclosure, the base die 208 has a cross-section in the x-z plane.
In one aspect, the third, fourth, and fifth metal layers (212c, 212d, 212e) may extend in a direction to form corresponding planes that may traverse (i.e., on the y-axis) the cross-section of the base die 208. In the aspects shown in FIGS. 2A and 2B, the third, fourth and fifth metal layers (212c, 212d, 212e) may extend along the periphery of the base die 208 on the y-axis. In other words, the third, fourth and fifth metal layers (212c, 212d, 212e) may form a third conductive plane 212c', a fourth conductive plane 212d' and a fifth conductive plane 212e', respectively, which may traverse this cross-section of the base die 208, thereby forming the second power plane module 210b. In an aspect, the plane formed by the third metal layer 212c may include a third voltage reference plane 212c'. The plane formed by the fourth metal layer 212d may include a fourth voltage reference plane 212d'. The plane formed by the fifth metal layer 212e may include a fifth voltage reference plane 212e'.

In aspects of the present disclosure, the third, fourth and fifth metal layers (212c, 212d, 212e) may have the same length or different lengths (on the y-axis). The third, fourth and fifth metal layers (212c, 212d, 212e) may extend along the periphery of the base die 208 and parallel to each other on the y-axis. In one aspect, each of the third, fourth, and fifth metal layers (212c, 212d, 212e) may extend 30% to 120% of the length of the base die 208. For example, each of the third, fourth and fifth metal layers (212c, 212d, 212e) may include a length ranging from 5 mm to 20 mm.

In aspects of the present disclosure, the third metal layer 212c may include a third chiplet side contact pad 214c and a third package side contact pad 215c. In an aspect, the third chiplet side contact pad 214c may be coupled to the semiconductor device 222. The third package side contact pad 215c may be coupled to the package substrate 202. The fourth metal layer 212d may include a fourth chiplet side contact pad 214d and a fourth package side contact pad 215d. In an aspect, the fourth chiplet side contact pad 214d may be coupled to the semiconductor device 222. The fourth package side contact pad 215d may be coupled to the package substrate 202. The fifth metal layer 212e may include a fifth chiplet side contact pad 214e and a fifth package side contact pad 215e. In an aspect, the fifth chiplet side contact pad 214e may be coupled to the semiconductor device 222. The fifth package side contact pad 215e may be coupled to the package substrate 202. In the aspect shown in FIG. 2A, each of the third, fourth and fifth metal layers (212c, 212d, 212e) may be arranged in a vertical orientation such that the shortest interconnection path may be formed between the third chiplet 224c and the package substrate 202, thereby forming the second vertical power plane module 210b. In an aspect, the vertical plane formed by the third metal layer 212c may include a third vertical voltage reference plane 212c'. The vertical plane formed by the fourth metal layer 212d may include a fourth vertical voltage reference plane 212d'. The vertical plane formed by the fifth metal layer 212e may include a fifth vertical voltage reference plane 212e'.

In aspects of the present disclosure, the third chiplet side contact pad 214c and the third package side contact pad 215c may have different widths (on the x-axis). The third chiplet side contact pad 214c may have a width of the first dimension.
The third package side contact pad 215c may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the third chiplet side contact pad 214c may have a smaller width than the third package side contact pad 215c. The first dimension may include a width geometry ranging from about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the fourth chiplet side contact pad 214d and the fourth package side contact pad 215d may have different widths (on the x-axis). The fourth chiplet side contact pad 214d may have a width of the first dimension. The fourth package side contact pad 215d may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the fourth chiplet side contact pad 214d may have a smaller width than the fourth package side contact pad 215d. The first dimension may include a width geometry ranging from about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the fifth chiplet side contact pad 214e and the fifth package side contact pad 215e may have different widths (on the x-axis). The fifth chiplet side contact pad 214e may have a width of the first dimension. The fifth package side contact pad 215e may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the fifth chiplet side contact pad 214e may have a smaller width than the fifth package side contact pad 215e. The first dimension may include a width geometry ranging from about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension.

In aspects of the present disclosure, the third, fourth and fifth metal layers (212c, 212d, 212e) may have the same height (on the z-axis). The third, fourth and fifth metal layers (212c, 212d, 212e) may include height geometries ranging from about 200 μm to 800 μm.

In aspects of the present disclosure, the second power plane module 210b may not include passive components. In one aspect, the second power plane module 210b may include at least one trench 221a (i.e., a raised section) formed on the third metal layer 212c. The trench 221a may extend from the third metal layer 212c toward the fourth metal layer 212d. The second power plane module 210b may further include at least one trench 221c formed on the fifth metal layer 212e. The trench 221c may extend from the fifth metal layer 212e toward the fourth metal layer 212d. The second power plane module 210b may further include at least one trench 221b formed on the first and second surfaces of the fourth metal layer 212d. The trench 221b may extend from the first surface of the fourth metal layer 212d toward the third metal layer 212c. The trench 221b may extend from the second surface of the fourth metal layer 212d toward the fifth metal layer 212e. The trenches (221a, 221b, 221c) of the second power plane module 210b may form an interdigitated arrangement between the trenches of adjacent metal layers (212c, 212d, 212e). The trenches (221a, 221b, 221c) may be isolated from each other by a dielectric layer (e.g., polymer dry film resist (DFR)). Advantages of providing such an interdigitated arrangement may include a reduction in the inductance loop between the power plane and the ground plane.
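A brief circuit-level reading of this advantage, using the standard partial-inductance formula rather than anything specific to the disclosure: for a power/ground pair carrying equal and opposite currents, the loop inductance is

$$L_{\text{loop}} \;=\; L_p + L_g - 2M$$

where $L_p$ and $L_g$ are the partial self-inductances of the power and ground conductors and $M$ is their mutual inductance. Interdigitating the trenches of adjacent Vcc and Vss planes brings opposing currents closer together, which increases $M$ and therefore reduces $L_{\text{loop}}$, consistent with the reduced inductance loop noted above.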
In one aspect, the third vertical voltage reference plane 212c' may be associated with a power supply reference voltage (Vcc). In an aspect, the fourth vertical voltage reference plane 212d' may be associated with a ground reference voltage (Vss). In one aspect, the fifth vertical voltage reference plane 212e' may be associated with a power supply reference voltage (Vcc). In other words, the second vertical power plane module 210b may include a vertical ground reference voltage plane (Vss) sandwiched between two vertical power reference voltage planes (Vcc).

In one aspect, the third metal layer 212c and the fifth metal layer 212e may be configured as respective vertical power supply reference voltage (Vcc) connections between the package substrate 202 and the semiconductor device 222. The corresponding power supply reference voltage (Vcc) may be approximately between 0.8 volts (V) and 3.3 V. For example, the third vertical voltage reference plane (Vcc) may be about 0.8 V, and the fifth vertical voltage reference plane (Vcc) may be about 1.0 V.

In aspects of the present disclosure, a plurality of microbumps 217 may be disposed on the base die 208. In one aspect, a plurality of microbumps 217a may be disposed on the first power plane module 210a. In one aspect, a plurality of microbumps 217b may be disposed on the second power plane module 210b. The plurality of microbumps 217a may provide electrical connections between the first power plane module 210a and the second chiplet 224b. The plurality of microbumps 217 may also provide electrical connections between the base die 208 and the first chiplet 224a. The plurality of microbumps 217b may also provide electrical connections between the second power plane module 210b and the third chiplet 224c.

In aspects of the present disclosure, the widths of the plurality of microbumps (217a, 217b) on the first and second power plane modules (210a, 210b) may be smaller than the widths of the corresponding plurality of package bumps (206a, 206b). In an aspect, the corresponding chiplet side contact pads (214a, 214b, 214c, 214d, 214e) may be sized according to the widths of the plurality of microbumps (217a, 217b) of the first and second power plane modules (210a, 210b). In one aspect, the corresponding package side contact pads (215a, 215b, 215c, 215d, 215e) may be sized according to the widths of the respective package bumps (206a, 206b) of the first and second power plane modules (210a, 210b).

In aspects of the present disclosure, the first chiplet 224a, the second chiplet 224b, and the third chiplet 224c may communicate with each other through a redistribution layer (RDL) 219 within the base die 208. In one aspect, the RDL 219 may include a plurality of conductive traces interleaved with a plurality of dielectric layers. In one aspect, the RDL 219 may be coupled to the TSV 218 within the base die 208.

FIG. 2B shows a top view layout of the semiconductor package 200 in accordance with the aspects shown in FIG. 2A. The package substrate 202 may include a perimeter or footprint. The base die 208 may include a footprint. The first chiplet 224a may include a footprint. The second chiplet 224b may include a footprint. The third chiplet 224c may include a footprint. The first power plane module 210a may include a footprint.
The second power plane module 210b may include a footprint. In one aspect, the semiconductor package 200 may also include one or more additional chiplets 224n disposed adjacent to the base die 208 and adjacent to the second chiplet 224b.

In the aspect shown in FIG. 2B, the footprints of the base die 208, the first chiplet 224a, the second chiplet 224b, the third chiplet 224c, the additional chiplet 224n, the first power plane module 210a, and the second power plane module 210b fall within the perimeter of the package substrate 202.

As described above, the first chiplet 224a may be disposed on the base die 208. The second chiplet 224b may be disposed in part on the base die 208 and may be disposed in part on the first power plane module 210a. The third chiplet 224c may be disposed in part on the base die 208 and may be disposed in part on the second power plane module 210b. Thus, as shown in FIG. 2B, the footprint of the first chiplet 224a may fall within the footprint of the base die 208. The footprint of the second chiplet 224b may include a portion that may overlap the base die 208 and another portion that may overlap the first power plane module 210a. The footprint of the third chiplet 224c may include a portion that may overlap the base die 208 and another portion that may overlap the second power plane module 210b. Similar to the arrangement of the second chiplet 224b, the additional chiplet 224n may include a footprint that may have a portion that may overlap the base die 208 and another portion that may overlap the first power plane module 210a.

The first power plane module 210a may include a first voltage reference plane 212a' and a second voltage reference plane 212b', which may be aligned on the y-axis and may be parallel to each other. The first passive component 220a may be disposed between the first voltage reference plane 212a' and the second voltage reference plane 212b'. The first passive component 220a may be coupled to the first voltage reference plane 212a' through the first via 216a. The first passive component 220a may also be coupled to the second voltage reference plane 212b' through the second via 216a'. In the aspect shown in FIG. 2B, there may be an array of passive components disposed between the first voltage reference plane 212a' and the second voltage reference plane 212b'. The body length of the first passive component 220a may be arranged along the y-axis.

The second power plane module 210b may include a third voltage reference plane 212c', a fourth voltage reference plane 212d', and a fifth voltage reference plane 212e', which may be aligned on the y-axis and may be parallel to each other. The trenches (221b, 221c) may be arranged between adjacent voltage reference planes (212c', 212d', 212e').

FIG. 3 shows a cross-sectional view of a semiconductor package 300 according to yet another aspect of the present disclosure. The semiconductor package 300 may be similar to the semiconductor package 100 of FIG. 1A and the semiconductor package 200 of FIG. 2A and may include additional variations and components as described below.

In the aspect shown in FIG. 3, the semiconductor package 300 may include a package substrate 302. The package substrate 302 may include contact pads, electrical interconnects, wiring, and other features, which may or may not be shown in any of the figures, and which are conventional features known to those skilled in the art. The various couplings of the components may use conventional methods, including solder bonding, thermocompression bonding, or other metal diffusion methods.
The package substrate 302 may have one or more rigid core layers for improved structural stability or a coreless substrate package for reduced form factor. In other aspects, the package substrate 302 may be part of a larger substrate that supports additional semiconductor packages and/or components.

In one aspect, the semiconductor package 300 may include a plurality of solder balls 304. The package substrate 302 may be connected to a motherboard (not shown) through the plurality of solder balls 304. The plurality of solder balls 304 may also provide electrical connections between the package substrate 302 and the motherboard. In one aspect, the stacked semiconductor package 300 may include a plurality of package bumps 306 disposed on the package substrate 302. The plurality of package bumps 306 may be controlled collapse chip attach (C4) bumps.

In aspects of the present disclosure, the semiconductor package 300 may include a base die 308. The base die 308 may be an active interposer or a passive interposer. In one aspect, the base die 308 may be disposed on the package substrate 302. In one aspect, the base die 308 may be connected to the package substrate 302 through the plurality of package bumps 306. The plurality of package bumps 306 may also provide electrical connections between the base die 308 and the package substrate 302.

In aspects of the present disclosure, the base die 308 may include at least one through-silicon via (TSV) 318. The plurality of package bumps 306 may provide electrical connections between the at least one TSV 318 and the package substrate 302.

In aspects of the present disclosure, the semiconductor package 300 may include a first power plane module 310a. In one aspect, the first power plane module 310a may include a first section disposed on the package substrate 302. In one aspect, the first power plane module 310a may be connected to the package substrate 302 through the plurality of package bumps 306a. In one aspect, the first power plane module 310a may further include a second section disposed on a motherboard (not shown). The first power plane module 310a may be connected to the motherboard through the plurality of solder balls 304. In an aspect, the first power plane module 310a may be disposed at a first periphery of the base die 308.

In aspects of the present disclosure, the first power plane module 310a may include a plurality of vertically interleaved metal layers (312a, 312b, 312c) electrically coupled to the package substrate 302 and the motherboard. In the aspect shown in FIG. 3, the first power plane module 310a may include a first metal layer 312a, a second metal layer 312b, and a third metal layer 312c interleaved with dielectric layers. In one aspect, the first power plane module 310a may include a first mold portion 313a. The first mold portion 313a may include a molding material, such as an epoxy polymer, a silicone polymer, or a polyimide material. The first mold portion 313a may have a first mold surface coupled to the package substrate 302. The first mold portion 313a may have a second mold surface coupled to the semiconductor device 322. The first mold portion 313a may have a third mold surface coupled to the motherboard. In one aspect, the first, second and third metal layers (312a, 312b, 312c) may be embedded in the molding material of the first mold portion 313a. The first and second metal layers (312a, 312b) may extend through the third and second mold surfaces of the first mold portion 313a.
The third metal layer 312c may extend across the first mold surface and the second mold surface of the first mold portion 313a.

In aspects of the present disclosure, the plurality of package bumps 306a may provide electrical connections between the third metal layer 312c of the first power plane module 310a and the package substrate 302.

In aspects of the present disclosure, the plurality of solder balls 304 may provide electrical connections between the first and second metal layers (312a, 312b) of the first power plane module 310a and the motherboard.

In aspects of the present disclosure, the semiconductor package 300 may include a second power plane module 310b. In one aspect, the second power plane module 310b may include a first section disposed on the package substrate 302. In one aspect, the second power plane module 310b may be connected to the package substrate 302 through the plurality of package bumps 306b. In an aspect, the second power plane module 310b may also include a second section disposed on a motherboard (not shown). The second power plane module 310b may be connected to the motherboard through the plurality of solder balls 304. In an aspect, the second power plane module 310b may be disposed at a second periphery of the base die 308.

In aspects of the present disclosure, the second power plane module 310b may include a plurality of vertically interleaved metal layers (312d, 312e, 312f) electrically coupled to the package substrate 302 and the motherboard. In the aspect shown in FIG. 3, the second power plane module 310b may include a fourth metal layer 312d, a fifth metal layer 312e, and a sixth metal layer 312f interleaved with the dielectric layers. In one aspect, the second power plane module 310b may include a second mold portion 313b. The second mold portion 313b may include a molding material, such as an epoxy polymer, a silicone polymer, or a polyimide material. The second mold portion 313b may have a first mold surface coupled to the package substrate 302. The second mold portion 313b may have a second mold surface coupled to the semiconductor device 322. The second mold portion 313b may have a third mold surface coupled to the motherboard. In one aspect, the fourth, fifth and sixth metal layers (312d, 312e, 312f) may be embedded in the molding material of the second mold portion 313b. The fourth metal layer 312d may extend through the first mold surface and the second mold surface of the second mold portion 313b. The fifth and sixth metal layers (312e, 312f) may extend through the second mold surface and the third mold surface of the second mold portion 313b.

In aspects of the present disclosure, the plurality of package bumps 306b may provide electrical connections between the fourth metal layer 312d of the second power plane module 310b and the package substrate 302.

In aspects of the present disclosure, the plurality of solder balls 304 may provide electrical connections between the fifth and sixth metal layers (312e, 312f) of the second power plane module 310b and the motherboard.

In aspects of the present disclosure, the semiconductor package 300 may include a semiconductor device 322. In one aspect, the semiconductor device 322 may be fabricated from any suitable semiconductor (e.g., silicon or gallium arsenide).
Semiconductor device 322 may be a semiconductor die, chip, or chiplet, e.g., a system-on-chip (SoC), central processing unit (CPU), platform controller hub (PCH) chiplet, memory device, field programmable gate array (FPGA) device, or graphics processing unit (GPU). In the aspect shown in FIG. 3, the semiconductor device 322 may be a set of three chiplets (324a, 324b, 324c). In one aspect, the first chiplet 324a may include a CPU, the second chiplet 324b may include a PCH, and the third chiplet 324c may include a GPU. In aspects of the present disclosure, semiconductor device 322 may be disposed at least partially on base die 308. The semiconductor device 322 may also be disposed at least partially on the first power plane module 310a. The semiconductor device 322 may also be disposed at least partially on the second power plane module 310b. In one aspect, semiconductor device 322 may have a first section disposed on base die 308. The semiconductor device 322 may have a second section disposed on the first power plane module 310a. The semiconductor device 322 may also have a third section disposed on the second power plane module 310b. In the aspect shown in FIG. 3, the first chiplet 324a of the semiconductor device 322 may be disposed on the base die 308. The second chiplet 324b of the semiconductor device 322 may be disposed in part on the base die 308 and in part on the first power plane module 310a. The third chiplet 324c of the semiconductor device 322 may be disposed in part on the base die 308 and in part on the second power plane module 310b. In aspects of the present disclosure, at least a portion of semiconductor device 322 may be electrically coupled to package substrate 302 through at least one TSV 318. In aspects of the present disclosure, at least a portion of the semiconductor device 322 may be electrically coupled to the motherboard through the first and second metal layers (312a, 312b), and at least another portion of the semiconductor device 322 may be electrically coupled to the package substrate 302 through the third metal layer 312c. In one aspect, each of the first, second and third metal layers (312a, 312b, 312c) may be configurable. Each of the first, second and third metal layers (312a, 312b, 312c) may be configured based on the power transfer requirements of the semiconductor package 300 to ease the power transfer challenges of 2.5D and/or 3D stacked integrated circuit (IC) package architectures. For example, the size, width and/or volume of each of the first, second and third metal layers (312a, 312b, 312c) may be configured to meet power transfer requirements. In aspects of the present disclosure, base die 308 has a cross-section in the x-z plane. In one aspect, the first, second, and third metal layers (312a, 312b, 312c) may extend in a direction (i.e., along the y-axis) to form corresponding planes that may traverse the cross-section of the base die 308. In the aspect shown in FIG. 3, the first, second and third metal layers (312a, 312b, 312c) may extend along the periphery of the base die 308 on the y-axis. In other words, the first, second, and third metal layers (312a, 312b, 312c) may form first, second, and third conductive planes, respectively, that may traverse the cross-section of the base die 308, thereby forming the first power plane module 310a. In an aspect, the plane formed by the first metal layer 312a may include a first voltage reference plane.
The plane formed by the second metal layer 312b may include a second voltage reference plane. The plane formed by the third metal layer 312c may include a third voltage reference plane. In aspects of the present disclosure, the first, second and third metal layers (312a, 312b, 312c) may have the same length or different lengths (on the y-axis). The first, second and third metal layers (312a, 312b, 312c) may extend along the y-axis of the base die 308 and parallel to each other. In one aspect, each of the first, second, and third metal layers (312a, 312b, 312c) may extend 30% to 120% of the length of the base die 308. For example, each of the first, second and third metal layers (312a, 312b, 312c) may include a length ranging from 5 mm to 20 mm. In aspects of the present disclosure, the first metal layer 312a may include a first chiplet-side contact pad 314a and a first motherboard-side contact pad 315a. In one aspect, the first chiplet-side contact pad 314a may be coupled to the semiconductor device 322. The first motherboard-side contact pad 315a may be coupled to the motherboard. Similarly, the second metal layer 312b may include a second chiplet-side contact pad 314b and a second motherboard-side contact pad 315b. In an aspect, the second chiplet-side contact pad 314b may be coupled to the semiconductor device 322. The second motherboard-side contact pad 315b may be coupled to the motherboard. The third metal layer 312c may include a third chiplet-side contact pad 314c and a third package-side contact pad 315c. In an aspect, the third chiplet-side contact pad 314c may be coupled to the semiconductor device 322. The third package-side contact pad 315c may be coupled to the package substrate 302. In the aspect shown in FIG. 3, each of the first, second and third metal layers (312a, 312b, 312c) may be arranged in a vertical orientation such that the shortest interconnection paths are formed between the second chiplet 324b and the package substrate 302 or the motherboard, thereby forming the first vertical power plane module 310a. In an aspect, the vertical plane formed by the first metal layer 312a may include a first vertical voltage reference plane. The vertical plane formed by the second metal layer 312b may include a second vertical voltage reference plane. The vertical plane formed by the third metal layer 312c may include a third vertical voltage reference plane. Advantages of the present disclosure may include improved Imax capacity (device reliability) through the peripheral vertical power plane modules. Compared to discrete cylindrical interconnects with constrained geometries, such as through-mold vias (TMVs) or through-silicon vias (TSVs) through the base die or a silicon interposer, reduced interconnect resistance can be achieved by increasing the interconnect volume (i.e., the vertical planar interconnect configuration between the chiplet and the package substrate). Another advantage of arranging the peripheral power plane modules to extend beyond the package substrate and couple directly to the motherboard or printed circuit board may include shorter power transfer paths between the chiplets and the motherboard.
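The interconnect resistance argument above follows directly from R = ρL/A: a planar conductor spanning the die periphery presents far more cross-sectional area than an array of narrow cylindrical vias of the same length. The sketch below makes this concrete; it is only a back-of-envelope illustration, and every geometry value in it (via count and diameter, plane width and thickness, path length) is an assumed example, not a dimension taken from this disclosure.

```python
# Back-of-envelope comparison of interconnect resistance, R = rho * L / A,
# for an array of cylindrical vias versus one vertical planar interconnect.
# Every geometry value below is an illustrative assumption, not a dimension
# taken from this disclosure.
import math

RHO_CU = 1.68e-8  # resistivity of copper, ohm*m (textbook value)

def via_array_resistance(length_m: float, diameter_m: float, count: int) -> float:
    """Resistance of `count` parallel cylindrical vias of a given length."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return RHO_CU * length_m / (area * count)

def planar_resistance(length_m: float, width_m: float, thickness_m: float) -> float:
    """Resistance of a solid rectangular (planar) conductor."""
    return RHO_CU * length_m / (width_m * thickness_m)

L = 1e-3  # assumed chiplet-to-board path length: 1 mm
r_vias = via_array_resistance(L, diameter_m=50e-6, count=50)      # 50 vias, 50 um wide
r_plane = planar_resistance(L, width_m=10e-3, thickness_m=35e-6)  # 10 mm x 35 um plane

print(f"via array:    {r_vias * 1e3:.3f} mOhm")
print(f"planar layer: {r_plane * 1e3:.3f} mOhm")
# The planar conductor's much larger cross-sectional area yields a lower
# resistance for the same path length, i.e., a higher Imax capacity.
```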
In an aspect, portions of the footprints of the second and third chiplets (324b, 324c) may extend over the footprints of both the base die 308 and the package substrate 302, to allow miniaturization of the package substrate and the base die. In aspects of the present disclosure, the first chiplet-side contact pad 314a and the first motherboard-side contact pad 315a may have different widths (on the x-axis). The first chiplet-side contact pad 314a may have a width of a first dimension. The first motherboard-side contact pad 315a may have a width of a second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the first chiplet-side contact pad 314a may have a smaller width than the first motherboard-side contact pad 315a. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 2.5 times the first dimension. In aspects of the present disclosure, the second chiplet-side contact pad 314b and the second motherboard-side contact pad 315b may have different widths (on the x-axis). The second chiplet-side contact pad 314b may have a width of the first dimension. The second motherboard-side contact pad 315b may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the second chiplet-side contact pad 314b may have a smaller width than the second motherboard-side contact pad 315b. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 2.5 times the first dimension. In aspects of the present disclosure, the third chiplet-side contact pad 314c and the third package-side contact pad 315c may have different widths (on the x-axis). The third chiplet-side contact pad 314c may have a width of the first dimension. The third package-side contact pad 315c may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the third chiplet-side contact pad 314c may have a smaller width than the third package-side contact pad 315c. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension. In aspects of the present disclosure, the first metal layer 312a and the second metal layer 312b may have the same height (on the z-axis). The first and second metal layers (312a, 312b) may include height geometries ranging from approximately 700 μm to 1800 μm. Due to the heights of the package substrate 302 and the package bumps 306a, the third metal layer 312c may have a lower height than the heights of the first and second metal layers (312a, 312b). In aspects of the present disclosure, a first via hole 316a may be formed on the first metal layer 312a. A second via hole 316a' may be formed on the second metal layer 312b, and the second via hole 316a' may be opposite the first via hole 316a. A third via hole 316b may be formed on the first metal layer 312a. A fourth via hole 316b' may be formed on the second metal layer 312b, and the fourth via hole 316b' may be opposite the third via hole 316b. A fifth via hole 316c may be formed on the second metal layer 312b.
A sixth via hole 316c' may be formed on the third metal layer 312c, and the sixth via hole 316c' may be opposite the fifth via hole 316c. In one aspect, a first passive component 320a may be disposed between the first and second metal layers (312a, 312b). In one aspect, a second passive component 320b may be disposed between the first and second metal layers (312a, 312b). In one aspect, a third passive component 320c may be disposed between the second and third metal layers (312b, 312c). The first, second and third passive components (320a, 320b, 320c) may include capacitors (e.g., silicon capacitors or ceramic capacitors, such as multilayer ceramic capacitors (MLCCs)), resistors, diodes, or inductors. In one aspect, the first, second and third passive components (320a, 320b, 320c) may be used to improve the power integrity of the semiconductor package 300. In an aspect, the body lengths of the first, second and third passive components (320a, 320b, 320c) may be arranged along the y-axis to enable a miniaturized or low z-profile vertical power plane module. In one aspect, the first terminal of the first passive component 320a may be electrically coupled to the first metal layer 312a through the first via 316a. The second terminal of the first passive component 320a may be electrically coupled to the second metal layer 312b through the second via 316a'. In other words, the first passive component 320a may be coupled to the first vertical voltage reference plane through the first via 316a and to the second vertical voltage reference plane through the second via 316a'. In one aspect, the first terminal of the second passive component 320b may be electrically coupled to the second metal layer 312b through the fourth via 316b'. The second terminal of the second passive component 320b may be electrically coupled to the first metal layer 312a through the third via 316b. In other words, the second passive component 320b may be coupled to the first vertical voltage reference plane through the third via 316b and to the second vertical voltage reference plane through the fourth via 316b'. In one aspect, the first terminal of the third passive component 320c may be electrically coupled to the second metal layer 312b through the fifth via 316c. The second terminal of the third passive component 320c may be electrically coupled to the third metal layer 312c through the sixth via 316c'. In other words, the third passive component 320c may be coupled to the second vertical voltage reference plane through the fifth via 316c and to the third vertical voltage reference plane through the sixth via 316c'. This may reduce AC noise, as decoupling passive components in proximity to the semiconductor device 322 can reduce power-supply-induced jitter, which can lead to improved electrical performance. In one aspect, the first vertical voltage reference plane may be associated with a ground reference voltage (Vss). In one aspect, the second vertical voltage reference plane may be associated with a power supply reference voltage (Vcc).
In one aspect, the third vertical voltage reference plane may be associated with a ground reference voltage (Vss). In aspects of the present disclosure, at least a portion of semiconductor device 322 may be electrically coupled to the motherboard through the fifth and sixth metal layers (312e, 312f), and at least another portion of semiconductor device 322 may be electrically coupled to the package substrate 302 through the fourth metal layer 312d. In one aspect, each of the fourth, fifth and sixth metal layers (312d, 312e, 312f) may be configurable. Each of the fourth, fifth, and sixth metal layers (312d, 312e, 312f) may be configured based on the power delivery requirements of the semiconductor package 300 to ease the power delivery challenges of 2.5D and/or 3D stacked integrated circuit (IC) package architectures. For example, the size, width and/or volume of each of the fourth, fifth and sixth metal layers (312d, 312e, 312f) may be configured to meet power transfer requirements. In aspects of the present disclosure, base die 308 has a cross-section in the x-z plane. In one aspect, the fourth, fifth, and sixth metal layers (312d, 312e, 312f) may extend in a direction (i.e., along the y-axis) to form corresponding planes that may traverse the cross-section of the base die 308. In the aspect shown in FIG. 3, the fourth, fifth and sixth metal layers (312d, 312e, 312f) may extend along the periphery of the base die 308 on the y-axis. In other words, the fourth, fifth and sixth metal layers (312d, 312e, 312f) may form fourth, fifth and sixth conductive planes, respectively, that may traverse the cross-section of the base die 308, thereby forming the second power plane module 310b. In an aspect, the plane formed by the fourth metal layer 312d may include a fourth voltage reference plane. The plane formed by the fifth metal layer 312e may include a fifth voltage reference plane. The plane formed by the sixth metal layer 312f may include a sixth voltage reference plane. In aspects of the present disclosure, the fourth, fifth and sixth metal layers (312d, 312e, 312f) may have the same length or different lengths (on the y-axis). The fourth, fifth and sixth metal layers (312d, 312e, 312f) may extend along the periphery of the base die 308 and parallel to each other on the y-axis. In one aspect, each of the fourth, fifth, and sixth metal layers (312d, 312e, 312f) may extend 30% to 120% of the length of the base die 308. For example, each of the fourth, fifth and sixth metal layers (312d, 312e, 312f) may include a length ranging from 5 mm to 20 mm. In aspects of the present disclosure, the fourth metal layer 312d may include a fourth chiplet-side contact pad 314d and a fourth package-side contact pad 315d. In an aspect, the fourth chiplet-side contact pad 314d may be coupled to the semiconductor device 322. The fourth package-side contact pad 315d may be coupled to the package substrate 302. The fifth metal layer 312e may include a fifth chiplet-side contact pad 314e and a fifth motherboard-side contact pad 315e. In an aspect, the fifth chiplet-side contact pad 314e may be coupled to the semiconductor device 322. The fifth motherboard-side contact pad 315e may be coupled to the motherboard. The sixth metal layer 312f may include a sixth chiplet-side contact pad 314f and a sixth motherboard-side contact pad 315f. In an aspect, the sixth chiplet-side contact pad 314f may be coupled to the semiconductor device 322. The sixth motherboard-side contact pad 315f may be coupled to the motherboard.
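The pad-width relationships described above for the first power plane module (chiplet-side pads of roughly 20 μm to 100 μm, motherboard-side pads at least 2.5 times wider, package-side pads at least 1.5 times wider) are repeated for the second module in what follows. A minimal sizing helper capturing those stated ratios is sketched here; the helper itself and the 40 μm example input are illustrative assumptions, not part of the disclosure.

```python
# Minimal sizing helper capturing the stated pad-width ratios: chiplet-side
# pads of about 20-100 um; motherboard-side pads at least 2.5x wider;
# package-side pads at least 1.5x wider. The helper and the example input
# are illustrative assumptions, not part of the disclosure.
CHIPLET_PAD_RANGE_UM = (20.0, 100.0)
MOTHERBOARD_RATIO = 2.5  # solder-ball side, e.g., pads 315a/315b/315e/315f
PACKAGE_RATIO = 1.5      # package-bump side, e.g., pads 315c/315d

def size_pads(chiplet_pad_um: float) -> dict:
    """Return minimum board- and package-side pad widths for a chiplet pad."""
    lo, hi = CHIPLET_PAD_RANGE_UM
    if not lo <= chiplet_pad_um <= hi:
        raise ValueError(f"chiplet-side pad width must be within {CHIPLET_PAD_RANGE_UM} um")
    return {
        "chiplet_side_um": chiplet_pad_um,
        "motherboard_side_um_min": MOTHERBOARD_RATIO * chiplet_pad_um,
        "package_side_um_min": PACKAGE_RATIO * chiplet_pad_um,
    }

# Example: a 40 um chiplet-side pad implies a board-side pad of at least
# 100 um and a package-side pad of at least 60 um.
print(size_pads(40.0))
```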
In the aspect shown in FIG. 3, each of the fourth, fifth and sixth metal layers (312d, 312e, 312f) may be arranged in a vertical orientation such that the shortest interconnection paths are formed between the third chiplet 324c and the package substrate 302 or the motherboard, thereby forming the second vertical power plane module 310b. In an aspect, the vertical plane formed by the fourth metal layer 312d may include a fourth vertical voltage reference plane. The vertical plane formed by the fifth metal layer 312e may include a fifth vertical voltage reference plane. The vertical plane formed by the sixth metal layer 312f may include a sixth vertical voltage reference plane. In aspects of the present disclosure, the fourth chiplet-side contact pad 314d and the fourth package-side contact pad 315d may have different widths (on the x-axis). The fourth chiplet-side contact pad 314d may have a width of the first dimension. The fourth package-side contact pad 315d may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the fourth chiplet-side contact pad 314d may have a smaller width than the fourth package-side contact pad 315d. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 1.5 times the first dimension. In aspects of the present disclosure, the fifth chiplet-side contact pad 314e and the fifth motherboard-side contact pad 315e may have different widths (on the x-axis). The fifth chiplet-side contact pad 314e may have a width of the first dimension. The fifth motherboard-side contact pad 315e may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the fifth chiplet-side contact pad 314e may have a smaller width than the fifth motherboard-side contact pad 315e. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 2.5 times the first dimension. In aspects of the present disclosure, the sixth chiplet-side contact pad 314f and the sixth motherboard-side contact pad 315f may have different widths (on the x-axis). The sixth chiplet-side contact pad 314f may have a width of the first dimension. The sixth motherboard-side contact pad 315f may have a width of the second dimension. In one aspect, the second dimension may be larger than the first dimension. In other words, the sixth chiplet-side contact pad 314f may have a smaller width than the sixth motherboard-side contact pad 315f. The first dimension may include a width geometry in the range of about 20 μm to 100 μm. The second dimension may include a width geometry that is at least 2.5 times the first dimension. In aspects of the present disclosure, the fifth metal layer 312e and the sixth metal layer 312f may have the same height (on the z-axis). The fifth and sixth metal layers (312e, 312f) may include height geometries ranging from approximately 700 μm to 1800 μm. Due to the heights of the package substrate 302 and the package bumps 306b, the fourth metal layer 312d may have a lower height than the heights of the fifth and sixth metal layers (312e, 312f). In aspects of the present disclosure, a seventh via 316d may be formed on the fourth metal layer 312d. An eighth via 316e may be formed on the first surface of the fifth metal layer 312e.
A ninth via 316f may be formed on the second surface of the fifth metal layer 312e. A tenth via 316g may be formed on the sixth metal layer 312f. In one aspect, a fourth passive component 320d may be disposed between the fourth and fifth metal layers (312d, 312e). A fifth passive component 320e may be disposed between the fifth and sixth metal layers (312e, 312f). The fourth and fifth passive components (320d, 320e) may include capacitors (e.g., silicon capacitors or ceramic capacitors, such as multilayer ceramic capacitors (MLCCs)), resistors, diodes, or inductors. In the aspect shown in FIG. 3, the body lengths of the fourth and fifth passive components (320d, 320e) may be arranged along the z-axis. In an aspect, the first terminal of the fourth passive component 320d may be electrically coupled to the fourth metal layer 312d through the seventh via 316d. The second terminal of the fourth passive component 320d may be electrically coupled to the fifth metal layer 312e through the eighth via 316e. In other words, the fourth passive component 320d may be coupled to the fourth vertical voltage reference plane through the seventh via 316d and to the fifth vertical voltage reference plane through the eighth via 316e. In one aspect, the first terminal of the fifth passive component 320e may be electrically coupled to the fifth metal layer 312e through the ninth via 316f. The second terminal of the fifth passive component 320e may be electrically coupled to the sixth metal layer 312f through the tenth via 316g. In other words, the fifth passive component 320e may be coupled to the fifth vertical voltage reference plane through the ninth via 316f and to the sixth vertical voltage reference plane through the tenth via 316g. This may reduce AC noise, as decoupling passive components in proximity to the semiconductor device 322 can reduce power-supply-induced jitter, which can lead to improved electrical performance. In one aspect, the fourth vertical voltage reference plane may be associated with a ground reference voltage (Vss). In one aspect, the fifth vertical voltage reference plane may be associated with a power supply reference voltage (Vcc). In one aspect, the sixth vertical voltage reference plane may be associated with a ground reference voltage (Vss). In aspects of the present disclosure, a plurality of microbumps 317 may be disposed on the base die 308. In one aspect, a plurality of microbumps 317a may be disposed on the first power plane module 310a. In one aspect, a plurality of microbumps 317b may be disposed on the second power plane module 310b. The plurality of microbumps 317a may provide electrical connections between the first power plane module 310a and the second chiplet 324b. The plurality of microbumps 317 may also provide electrical connections between the base die 308 and the first chiplet 324a. The plurality of microbumps 317b may also provide electrical connections between the second power plane module 310b and the third chiplet 324c. In aspects of the present disclosure, the widths of the plurality of microbumps (317a, 317b) on the first and second power plane modules (310a, 310b) may be smaller than the widths of the corresponding plurality of package bumps (306a, 306b).
In one aspect, the respective chiplet-side contact pads (314a, 314b, 314c, 314d, 314e, 314f) may be sized according to the widths of the respective microbumps (317a, 317b) of the first and second power plane modules (310a, 310b). In one aspect, the respective package-side contact pads (315c, 315d) may be sized according to the widths of the respective package bumps (306a, 306b) of the first and second power plane modules (310a, 310b). In one aspect, the respective motherboard-side contact pads (315a, 315b, 315e, 315f) may be sized according to the widths of the respective solder balls 304 of the first and second power plane modules (310a, 310b). In aspects of the present disclosure, the first chiplet 324a, the second chiplet 324b, and the third chiplet 324c may communicate with one another through a redistribution layer (RDL) 319 within the base die 308. In one aspect, RDL 319 may include a plurality of conductive traces interleaved with a plurality of dielectric layers. In one aspect, RDL 319 may be coupled to the TSV 318 within the base die 308. FIGS. 4A-4P illustrate cross-sectional and top views of an exemplary simplified process flow for forming a semiconductor package, in accordance with aspects generally similar to those shown in FIG. 1A of the present disclosure. FIG. 4A shows a cross-sectional view of the carrier 430 and the first mold layer 431a. The cross-sectional view is taken along the line A-A' of FIG. 4B. The first mold layer 431a may be formed on the carrier 430 by conventional techniques such as, but not limited to, compression, transfer, or injection molding processes. FIG. 4B shows a top view of the carrier 430 and the first mold layer 431a formed in this operation, which may show a solid plane of the first mold layer 431a. FIG. 4C shows a cross-sectional view of the formation of the first metal layer 412a. The cross-sectional view is taken along the line A-A' of FIG. 4D. The first metal layer 412a may be formed on the first mold layer 431a by conventional techniques such as, but not limited to, a lamination or plating process. In one aspect, the first metal layer 412a may be copper. FIG. 4D shows a top view of the first metal layer 412a formed in this operation, which may show a solid plane of the first metal layer 412a. FIG. 4E shows a cross-sectional view of the second mold layer 431b and the formation of the first mold opening 440a in the second mold layer 431b. The cross-sectional view is taken along the line A-A' of FIG. 4F. The second mold layer 431b may be formed on portions of the first metal layer 412a by conventional techniques such as, but not limited to, compression, injection molding, or spin coating processes. In one aspect, the first mold opening 440a may be formed by laser drilling. FIG. 4F shows a top view of the second mold layer 431b and the first mold opening 440a formed in this operation. In one aspect, multiple mold openings may be formed. FIG. 4G shows a cross-sectional view of the formation of the first trench 421a and the first via 416a. The cross-sectional view is taken along the line A-A' of FIG. 4H. The first trench 421a may be formed on portions of the first metal layer 412a that are not covered by the second mold layer 431b by conventional techniques such as, but not limited to, an electroplating process. The first via 416a may be formed in the first mold opening 440a, on the first metal layer 412a, by conventional techniques such as, but not limited to, an electroplating process. In one aspect, the first trench 421a and the first via 416a may be copper.
FIG. 4H shows a top view of the second mold layer 431b, the first trench 421a, and the first via 416a formed in this operation. In one aspect, multiple vias may be formed. FIG. 4I shows a cross-sectional view of the formation of the first passive component 420a. The cross-sectional view is taken along the line A-A' of FIG. 4J. The first passive component 420a may be disposed on the second mold layer 431b. The first terminal of the first passive component 420a may be coupled to the first via 416a by conventional techniques such as, but not limited to, thermocompression bonding or a solder reflow process. FIG. 4J shows a top view of the first passive component 420a formed in this operation. In one aspect, multiple passive components may be formed. FIG. 4K shows a cross-sectional view of the third mold layer 431c and the formation of the second mold opening 440b in the third mold layer 431c. The cross-sectional view is taken along the line A-A' of FIG. 4L. The third mold layer 431c may be formed on the first trench 421a and the first passive component 420a by conventional techniques such as, but not limited to, compression, injection molding, or spin coating processes. In one aspect, the second mold opening 440b may be formed by etching or laser drilling. FIG. 4L shows a top view of the third mold layer 431c and the second mold opening 440b formed in this operation. The second terminal of the first passive component 420a may be exposed through the second mold opening 440b. In one aspect, multiple mold openings may be formed. FIG. 4M shows a cross-sectional view of the formation of the second trench 421b and the second via 416b. The cross-sectional view is taken along the line A-A' of FIG. 4N. The second trench 421b may be formed on portions of the second mold layer 431b by conventional techniques such as, but not limited to, an electroplating process. The second via 416b may be formed in the second mold opening 440b by conventional techniques such as, but not limited to, an electroplating process. Then, a second metal layer 412b may be formed on the second trench 421b, the third mold layer 431c, and the second via 416b by conventional techniques such as, but not limited to, electroplating and polishing processes. FIG. 4N shows a top view of a build panel that may include the second metal layer 412b formed in this operation, which may show a solid plane of the second metal layer 412b. A dicing process (not shown) may then be performed to divide the panel into individual power plane modules. The dicing process may include mechanical or laser singulation. FIG. 4O shows the disposition of a first power plane module 410a and a second power plane module 410b on a package substrate 402. After dicing, the first power plane module 410a and the second power plane module 410b may first be rotated so that the first and second metal layers (412a, 412b) can form respective vertical voltage reference planes after being disposed on the package substrate 402. Package substrate 402 may include contact pads, electrical interconnects, wiring, and other features, which may or may not be shown in any of the figures, and which are conventional features known to those skilled in the art. The package substrate may also include pre-formed solder balls 404 and package bumps 406. Base die 408 may be coupled to package substrate 402 through the package bumps 406. Base die 408 may include pre-formed TSVs and an RDL therein.
The first and second power plane modules (410a, 410b) may be coupled to the package substrate through the package bumps 406 via conventional techniques such as, but not limited to, thermocompression bonding or reflow processes. The first and second power plane modules (410a, 410b) may be arranged at the periphery of the base die 408. FIG. 4P illustrates, according to one aspect of the present disclosure, the attachment of semiconductor device 422 onto the base die 408, the first power plane module 410a, and the second power plane module 410b by conventional techniques such as, but not limited to, a thermocompression bonding or reflow process, to obtain the semiconductor package 400. It should be understood that the exemplary processes described above in relation to FIGS. 4A-4P are not limited to this particular order. Any suitable, modified sequence of operations may be used. Aspects of the present disclosure may be implemented into a system using any suitable hardware and/or software. FIG. 5 schematically illustrates a computing device 500 that may include a semiconductor package as described herein, in accordance with some aspects. Computing device 500 may house a board, such as motherboard 502. Motherboard 502 may include various components including, but not limited to, a processor 504 and at least one communication chip 506. The processor 504, which may have a semiconductor package according to the present disclosure, may be physically and electrically coupled to the motherboard 502. In some embodiments, the at least one communication chip 506 may also be physically and electrically coupled to the motherboard 502. In other embodiments, the communication chip 506 may be part of the processor 504. Depending on its application, computing device 500 may include other components that may or may not be physically and electrically coupled to the motherboard 502. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, a Geiger counter, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (e.g., hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth). In another aspect, the processor 504 of the computing device 500 may be packaged in a semiconductor package with peripheral vertical power plane modules as described herein, and/or other semiconductor devices may be packaged together in a semiconductor package with peripheral vertical power plane modules as described herein. The communication chip 506 may enable wireless communications for the transfer of data to and from the computing device 500. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some aspects they might not.
The communication chip 506 may implement any of a number of wireless standards or protocols, including but not limited to Institute of Electrical and Electronics Engineers (IEEE) standards, including Wi-Fi (IEEE 802.11 family), IEEE 802.16 standards (e.g., the IEEE 802.16-2005 amendment), and the Long-Term Evolution (LTE) project along with any amendments, updates, and/or revisions (e.g., the LTE-Advanced project, the Ultra Mobile Broadband (UMB) project (also referred to as "3GPP2"), etc.). IEEE 802.16 compatible BWA networks are generally referred to as WiMAX networks, an acronym that stands for Worldwide Interoperability for Microwave Access, which is a certification mark for products that pass conformity and interoperability tests for the IEEE 802.16 standards. The communication chip 506 may also operate in accordance with a Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Evolved HSPA (E-HSPA), or LTE network. The communication chip 506 may operate in accordance with Enhanced Data for GSM Evolution (EDGE), GSM EDGE Radio Access Network (GERAN), Universal Terrestrial Radio Access Network (UTRAN), or Evolved UTRAN (E-UTRAN). The communication chip 506 may operate in accordance with Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Digital Enhanced Cordless Telecommunications (DECT), Evolution-Data Optimized (EV-DO), derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. In other aspects, the communication chip 506 may operate in accordance with other wireless protocols. The computing device 500 may include a plurality of communication chips 506. For example, a first communication chip 506 may be dedicated to shorter-range wireless communications, such as Wi-Fi and Bluetooth, and a second communication chip 506 may be dedicated to longer-range wireless communications, such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, EV-DO, and others. In various embodiments, the computing device 500 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In one aspect, the computing device 500 may be a mobile computing device. In other implementations, the computing device 500 may be any other electronic device that processes data. FIG. 6 shows a flowchart illustrating a method 600 of forming a semiconductor package in accordance with aspects of the present disclosure. As shown in FIG. 6, at operation 602, the method 600 of forming a semiconductor package may include forming a package substrate. At operation 604, the method may include forming a base die on the package substrate. At operation 606, the method may include forming a power plane module at a periphery of the base die.
The power plane module may include a top surface and a bottom surface, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface. At operation 608, the method may include forming a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module. It should be understood that the operations described above in relation to FIG. 6 are not limited to this particular order. Any suitable, modified sequence of operations may be used. Examples: Example 1 may include a semiconductor package comprising: a package substrate; a base die on the package substrate and electrically coupled to the package substrate; at least one power plane module on the package substrate at a periphery of the base die, the power plane module including a top surface and a bottom surface, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module. Example 2 may include the semiconductor package of Example 1 and/or any other example disclosed herein, wherein the at least one vertically interleaved metal layer may further include a plurality of interleaved metal layers, and each of the plurality of interleaved metal layers may further include: a top portion coupled to the semiconductor device; and a bottom portion coupled to the package substrate, wherein the bottom portion has a width greater than a width of the top portion. Example 3 may include the semiconductor package of Example 2 and/or any other example disclosed herein, wherein the width of the bottom portion may be at least 1.5 times the width of the top portion. Example 4 may include the semiconductor package of Example 2 and/or any other example disclosed herein, wherein the plurality of interleaved metal layers may further include at least one ground reference voltage plane and at least one power supply reference voltage plane. Example 5 may include the semiconductor package of Example 2 and/or any other example disclosed herein, wherein the power plane module may further include at least one passive component. Example 6 may include the semiconductor package of Example 5 and/or any other example disclosed herein, wherein the passive component may be electrically coupled to at least one metal layer of the plurality of interleaved metal layers. Example 7 may include the semiconductor package of Example 5 and/or any other example disclosed herein, wherein the passive component may include a multilayer ceramic capacitor and/or a silicon capacitor. Example 8 may include the semiconductor package of Example 1 and/or any other example disclosed herein, wherein the power plane module may further include a plurality of trenches on one or more of the plurality of interleaved metal layers. Example 9 may include the semiconductor package of Example 8 and/or any other example disclosed herein, wherein the plurality of trenches may be isolated by a dielectric layer. Example 10 may include the semiconductor package of Example 8
and/or any other example disclosed herein, wherein the plurality of trenches may be arranged in an interdigitated arrangement. Example 11 may include the semiconductor package of Example 1 and/or any other example disclosed herein, wherein the plurality of interleaved metal layers may be separated by a dielectric layer. Example 12 may include the semiconductor package of Example 1 and/or any other example disclosed herein, wherein the ground reference voltage plane and the power supply reference voltage plane may be parallel to each other. Example 13 may include the semiconductor package of Example 1 and/or any other example disclosed herein, wherein the power plane module may further include a first section having a first bottom portion disposed on the package substrate at the periphery of the base die, and a second section having a second bottom portion disposed on the motherboard at a periphery of the package substrate. Example 14 may include a computing device including: a circuit board; and a semiconductor package coupled to the circuit board, wherein the semiconductor package may include: a package substrate; a base die on the package substrate and electrically coupled to the package substrate; at least one power plane module on the package substrate at a periphery of the base die, the power plane module including a top surface and a bottom surface, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module. Example 15 may include the computing device of Example 14 and/or any other example disclosed herein, wherein the at least one vertically interleaved metal layer may further include a plurality of interleaved metal layers, and each of the plurality of interleaved metal layers may further include: a top portion coupled to the semiconductor device; and a bottom portion coupled to the package substrate, wherein the bottom portion has a width greater than a width of the top portion. Example 16 may include a method comprising: forming a package substrate; forming a base die on the package substrate; forming a power plane module at a periphery of the base die, the power plane module including a top surface and a bottom surface, and at least one vertically interleaved metal layer electrically coupled to the package substrate at the bottom surface; and forming a semiconductor device including a first section disposed on the base die and a second section disposed on the power plane module, wherein the second section of the semiconductor device is electrically coupled to the at least one vertically interleaved metal layer at the top surface of the power plane module. Example 17 may include the method of Example 16 and/or any other example disclosed herein, wherein the at least one vertically interleaved metal layer may further include a plurality of interleaved metal layers, and each of the plurality of interleaved metal layers may further include: a top portion coupled to the semiconductor device; and a bottom portion coupled to the package substrate, wherein the bottom portion has a width greater than a width of the top portion. Example 18 may include the method of Example 16 and/or
any other example disclosed herein, further comprising coupling at least one passive component to the plurality of interleaved metal layers. Example 19 may include the method of Example 16 and/or any other example disclosed herein, further comprising coupling a plurality of trenches to one or more of the plurality of interleaved metal layers. Example 20 may include the method of Example 19 and/or any other example disclosed herein, further comprising arranging the trenches in an interdigitated arrangement. The term "comprising" should be understood to have a broad meaning similar to the term "including," and should be understood to imply the inclusion of a stated integer or operation or group of integers or operations, but not the exclusion of any other integer or operation or group of integers or operations. This definition also applies to variations of the term "comprising," such as "comprise" and "comprises." As used herein, the term "coupled" (or "connected") may be understood as electrically coupled or as mechanically coupled, e.g., attached or fixed or mounted, or just in contact without any fixation, and it will be understood that both direct coupling and indirect coupling (in other words, coupling without direct contact) may be provided. While the present disclosure has been particularly shown and described with reference to specific aspects, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the disclosure as defined by the appended claims. The scope of the present disclosure is thus indicated by the appended claims, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced. |
Embodiments include epitaxial semiconductor stacks for reduced defect densities in III-N device layers grown over non-III-N substrates, such as silicon substrates. In embodiments, a metamorphic buffer includes an AlxIn1-xN layer lattice matched to an overlying GaN device layer to reduce thermal mismatch induced defects. Such crystalline epitaxial semiconductor stacks may be device layers for HEMT or LED fabrication, for example. System on Chip (SoC) solutions integrating an RFIC with a PMIC using a transistor technology based on group III-nitrides (III-N) capable of achieving high Ft and also sufficiently high breakdown voltage (BV) to implement high voltage and/or high power circuits may be provided on the semiconductor stacks in a first area of the silicon substrate while silicon-based CMOS circuitry is provided in a second area of the substrate. |
1. A semiconductor material stack comprising: a silicon substrate; a III-N device layer disposed over the silicon substrate; and a buffer portion disposed between the silicon substrate and the III-N device layer, wherein the buffer portion includes an AlxIn1-xN layer, wherein x is less than 1. 2. The material stack of claim 1, wherein the AlxIn1-xN layer is lattice matched to the III-N device layer and is in direct contact with the III-N device layer. 3. The material stack of claim 2, wherein the III-N device layer is GaN, wherein a top barrier comprises at least one of AlzGa1-zN, AlwIn1-wN, or AlN, wherein x is between 0.80 and 0.84, and wherein the silicon substrate has a crystal orientation of (100), (110), or (111). 4. The material stack of claim 3, wherein the silicon substrate has a (100) orientation and is offcut at between 4 and 8 degrees toward the [110] direction. 5. The material stack of claim 2, wherein the AlxIn1-xN layer has a thickness 1.5 to 10 times larger than the thickness of the III-N device layer. 6. The material stack of claim 2, wherein the buffer portion comprises a superlattice having a plurality of AlxIn1-xN layers and a plurality of III-N layers. 7. The material stack of claim 1, wherein the buffer portion further comprises an AlN nucleation layer disposed between the AlxIn1-xN layer and the silicon substrate. 8. The material stack of claim 7, wherein the buffer portion further comprises an AlyIn1-yN transition layer disposed between the AlN nucleation layer and the AlxIn1-xN layer, wherein y>x. 9. The material stack of claim 8, wherein y is graded from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer. 10. The material stack of claim 8, wherein the AlxIn1-xN layer comprises between 50% and 99% of the total thickness of the buffer portion. 11. A high electron mobility transistor (HEMT) comprising: a gate electrode disposed between a source contact and a drain contact; a gate dielectric disposed below the gate electrode; a III-N channel layer disposed below the gate dielectric; a bottom barrier disposed below the channel layer, wherein the bottom barrier comprises an AlxIn1-xN layer lattice matched to the channel layer; and a silicon substrate disposed below the bottom barrier, wherein the AlxIn1-xN layer is disposed over a (100) or (111) crystal plane of the substrate. 12. The HEMT of claim 11, further comprising a top barrier layer having a first thickness between the gate electrode and the channel layer, and a second, greater thickness between the source contact and the drain contact on either side of the gate electrode, wherein the top barrier layer comprises at least one of AlzGa1-zN, AlwIn1-wN, or AlN. 13. The HEMT of claim 12, wherein the III-N channel layer comprises a GaN layer having a thickness between 10 nm and 200 nm, wherein the AlxIn1-xN layer has a thickness between 400 nm and 2 μm, and wherein x is between 0.80 and 0.84; wherein an AlN nucleation layer is disposed between the AlxIn1-xN layer and the silicon substrate; and wherein the AlxIn1-xN layer is disposed on an AlyIn1-yN transition layer, the AlyIn1-yN transition layer is disposed on the AlN nucleation layer, and y is graded from approximately 1 nearest the nucleation layer toward approximately x
nearest the AlxIn1-xN layer. 14. The HEMT of claim 11, wherein the channel layer is undoped in a region disposed below the gate electrode, and wherein the first thickness portion of the top barrier layer induces charge to form a two-dimensional electron gas (2DEG) in the channel layer only when the gate electrode is at a threshold voltage (Vt) greater than 0 V. 15. A mobile computing device comprising: a touchscreen; a battery; an antenna; a DC-to-DC converter coupled to the battery; and a wireless transmitter further including a power amplifier (PA), wherein at least one of the DC-to-DC converter and the PA comprises the HEMT of claim 11. 16. The mobile computing device of claim 15, wherein the DC-to-DC converter comprises a first HEMT of claim 11 and the PA employs a second HEMT of claim 11. 17. A method of forming a high electron mobility transistor, the method comprising: forming a sacrificial gate structure over a stack of semiconductor material layers disposed on a crystalline silicon substrate, the stack including a III-N semiconductor channel layer disposed on a lattice-matched AlxIn1-xN layer, the AlxIn1-xN layer having a thickness greater than a thickness of the channel layer; forming a source region and a drain region on opposite sides of the sacrificial gate structure; removing the sacrificial gate structure to expose a surface of the epitaxially grown stack; forming a gate dielectric layer on the exposed surface of the epitaxially grown stack with an atomic layer deposition process; and forming a gate electrode on the gate dielectric layer. 18. The method of claim 17, further comprising forming the stack of semiconductor material layers by: epitaxially growing a graded AlyIn1-yN transition layer over an AlN nucleation layer disposed on the substrate; epitaxially growing the AlxIn1-xN layer over the AlyIn1-yN transition layer, wherein y is graded from approximately 1 nearest the nucleation layer toward approximately x nearest the AlxIn1-xN layer; epitaxially growing the III-N semiconductor channel layer, consisting essentially of GaN, over the AlxIn1-xN layer; and epitaxially growing a top barrier layer including a ternary III-nitride over the channel layer. 19. The method of claim 17, wherein the graded AlyIn1-yN transition layer is grown directly on the AlN nucleation layer to a thickness between 50 nm and 100 nm, wherein the AlxIn1-xN layer is grown on the AlyIn1-yN transition layer to a thickness between 300 nm and 2 μm, and wherein the channel layer is grown directly on the AlxIn1-xN layer to a thickness between 10 nm and 200 nm. 20. The method of claim 19, wherein the stack of semiconductor material layers is disposed on a (100) surface of the substrate offcut at between 4 and 8 degrees toward the [110] direction; and wherein the ternary III-nitride is selected from the group consisting of AlxGa1-xN, AlwIn1-wN, and InzGa1-zN. |
Epitaxial buffer layer for III-N transistors on a silicon substrate. Technical Field: Embodiments of the present invention generally relate to microelectronic devices and fabrication, and more particularly to III-N transistor architectures and designs. Background: Mobile computing (e.g., smartphone and tablet) markets benefit from smaller component form factors and lower power consumption. Because current platform solutions for smartphones and tablets rely on multiple packaged integrated circuits (ICs) mounted to a board, further scaling to smaller and more power-efficient form factors is constrained. For example, a smartphone will include a separate power management IC (PMIC), radio frequency IC (RFIC), and WiFi/Bluetooth/GPS IC in addition to a separate logic processor IC. A system-on-chip (SoC) architecture offers scaling advantages that board-level component integration cannot match. While the logic processor IC may itself be considered a system-on-chip (SoC) integrating both memory and logic functions, more extensive SoC solutions for mobile computing platforms have remained elusive because the PMIC and RFIC operate with two or more of high voltage, high power, and high frequency. Thus, conventional mobile computing platforms typically employ incompatible transistor technologies that are tailored to the different functions performed by the PMIC and RFIC. For example, laterally diffused silicon MOS (LDMOS) technology is commonly employed in the PMIC to manage voltage conversion and power distribution (including battery voltage regulation, such as step-up and/or step-down voltage conversion). III-V compound semiconductors, such as GaAs heterojunction bipolar transistors (HBTs), are typically utilized in the RFIC to generate sufficient power amplification at GHz carrier frequencies. Conventional silicon field effect transistors implementing CMOS technology then entail a third transistor technology utilized for the logic and control functions within the mobile computing platform. In addition to the incompatibility of the underlying semiconductor materials of the various ICs in the mobile computing platform, the transistor design of the DC-to-DC conversion switches within the PMIC is generally incompatible with the transistor design of the high-frequency power amplifiers in the RFIC. For example, the relatively low breakdown voltage of silicon requires the source-to-drain spacing in a DC-to-DC converter switch to be far larger than that allowed in a power amplifying transistor that requires an Ft of at least 20 GHz, and possibly up to 500 GHz, depending on the carrier frequency (e.g., WPAN operates at 60 GHz, and thus the transistor needs an Ft that is many times 60 GHz). Such different transistor-level design requirements result in various transistor fabrication processes that are difficult to incorporate into a single process. Therefore, although SoC solutions that integrate PMIC and RFIC functions are attractive in the mobile computing space for improving scalability, lowering cost, and increasing platform power efficiency, one barrier to such an SoC solution is the lack of a scalable transistor technology having both sufficiently high speed (i.e., a sufficiently high gain cutoff frequency, Ft) and sufficiently high breakdown voltage (BV). Group III-nitride (III-N) devices offer a promising avenue for integrating PMIC and RFIC functions with CMOS while achieving both high BV and high Ft.
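The tension between BV and Ft noted above is commonly summarized by the Johnson figure of merit, which bounds the BV·Ft product of a transistor by Ec·vsat/2π. The sketch below evaluates that bound using commonly quoted textbook values for the critical field and saturation velocity (assumed here; they do not appear in this disclosure) and suggests why a III-N technology can satisfy both requirements at once.

```python
# Johnson figure of merit: the achievable BV * Ft product of a transistor
# is bounded by E_c * v_sat / (2 * pi), where E_c is the critical breakdown
# field and v_sat the saturated carrier velocity. The material constants
# below are commonly quoted textbook values, assumed for illustration only.
import math

MATERIALS = {
    #           E_c (V/cm), v_sat (cm/s)
    "Si":     (3.0e5,      1.0e7),
    "GaAs":   (4.0e5,      1.2e7),
    "GaN":    (3.3e6,      2.5e7),
}

for name, (e_c, v_sat) in MATERIALS.items():
    jfom = e_c * v_sat / (2.0 * math.pi)  # units: V * Hz
    print(f"{name:4s}: BV * Ft limit ~ {jfom / 1e12:5.1f} THz*V")

# GaN's roughly 10x higher critical field pushes its BV * Ft bound more than
# an order of magnitude beyond silicon's, which is why a III-N transistor
# can sustain a high breakdown voltage while still reaching Ft of tens of GHz.
```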
However, heteroepitaxy of III-N material stacks on silicon substrates poses technical challenges due at least to significant lattice mismatch and thermal mismatch, both of which can cause dense defects and poor device performance. Therefore, techniques and epitaxial semiconductor stack architectures that provide reduced defect density within the device layers are advantageous. Brief Description of the Drawings: Embodiments of the present invention are described by way of example, and not by way of limitation. FIG. 1A illustrates a cross section of a semiconductor stack in which a high electron mobility transistor may be formed, in accordance with an embodiment; FIG. 1B illustrates a cross section of a semiconductor stack in which a high electron mobility transistor may be formed, in accordance with an embodiment; FIG. 2A illustrates a cross section of a recessed gate III-N transistor having epitaxially grown raised source/drain regions, in accordance with an embodiment; FIG. 2B shows an energy band diagram for a region of a transistor, comparing an AlyGa1-yN bottom barrier with an AlxIn1-xN bottom barrier, in accordance with an embodiment of the present invention; FIG. 3 is a functional block diagram of a III-N SoC implementation of a mobile computing platform, in accordance with an embodiment of the present invention; FIG. 4 is a flow chart illustrating a method of fabricating a non-planar high voltage transistor, in accordance with an embodiment. Detailed Description: In the following description, numerous details are set forth; however, it will be apparent to those skilled in the art that the invention may be practiced without these specific details. In some instances, well-known methods and devices are shown in block diagram form, rather than in detail, to avoid obscuring the invention. Reference throughout this specification to "an embodiment" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "in an embodiment" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment wherever the two embodiments are not mutually exclusive. The terms "coupled" and "connected," along with their derivatives, may be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical or electrical contact with each other, and/or that the two or more elements co-operate or interact with each other (e.g., as in a cause-and-effect relationship). The terms "over," "under," "between," and "on" as used herein refer to a relative position of one material layer with respect to other layers. As such, for example, one layer disposed over or under another layer may be directly in contact with the other layer or may have one or more intervening layers. Moreover, one layer disposed between two layers may be directly in contact with the two layers or may have one or more intervening layers.
In contrast, a first layer "on" a second layer is in direct contact with that second layer.

Described herein are embodiments of epitaxial semiconductor stacks having reduced defect densities in III-N device layers grown over non-III-N substrates, such as silicon substrates. In embodiments, a metamorphic buffer includes an AlxIn1-xN layer lattice matched to an overlying device layer, such as GaN, for reduced thermal-mismatch-induced defects within the device layer. Such metamorphic epitaxial semiconductor stacks may be used, for example, to provide device layers from which HEMTs or LEDs are fabricated. In embodiments, III-nitride (III-N) semiconductor stacks, and high electron mobility transistors formed thereon, are employed in SoC solutions integrating an RFIC with a PMIC to implement high voltage and/or high power circuits. With embodiments of the epitaxial stacks described herein, an SoC solution may deliver the product-specific electrical current and power requirements of a mobile platform. Fast switching high voltage transistors are capable of handling high input voltage swings and providing high power added efficiency at RF frequencies. In embodiments, the III-N semiconductor stack and transistor architecture are monolithically integrated with group IV transistor architectures, such as planar and non-planar silicon CMOS transistor technologies. In particular embodiments, group III-N transistors are employed in SoC architectures integrating high power wireless data transmission and/or high voltage power management functions with low power CMOS logic data processing. High frequency operation suitable for broadband wireless data transmission applications is possible, while the use of large-bandgap III-N materials also provides a high BV, such that sufficient RF output power may be generated for wireless data transmission applications. This combination of high Ft/Fmax and high voltage capability also makes possible the use of the transistors described herein for high speed switching applications in DC-to-DC converters utilizing reduced-size inductive elements. Since both power amplification and DC-to-DC switching applications are key functional blocks in smartphones, tablets, and other mobile platforms, the structures described herein may be utilized to advantage in SoC solutions for such devices.

FIG. 1A illustrates a cross section of a III-N semiconductor stack 101 in which a high electron mobility transistor (HEMT) may be formed, in accordance with an embodiment. At the base of the stack 101 is a substrate 100. Generally, the substrate 100 is of a non-III-N material such that the stack 101 includes metamorphic epitaxial layers. In an exemplary embodiment, the substrate 100 is crystalline silicon (e.g., substantially monocrystalline). In a first silicon substrate embodiment, the substrate 100 is (100) silicon (i.e., has a (100) top surface upon which the epitaxial layers are disposed). The (100) crystal orientation is advantageous for the formation of silicon transistors (e.g., in other regions not covered by the III-N epitaxial layers), and is therefore desirable for embodiments in which the III-N transistors formed in the stack 101 are monolithically integrated with silicon CMOS transistor technology. In a particular (100) silicon substrate embodiment, the substrate 100 has a vicinal surface prepared by, for example, offcutting the substrate from an ingot grown to provide wafer slices having (100) surfaces.
The (100) substrate surface is offcut at an angle between 4° and 8° (e.g., 6°) toward the [110] direction to produce a surface having an array of terraces, each terrace including a surface having a (100) crystal plane. The surface area of the (100) plane associated with each terrace depends on the specific offcut angle: the greater the angle, the greater the number of terraces produced, and the smaller the (100) surface area of each terrace. In such embodiments, the offcut produces a vicinal surface having an array of (100) terraces, many of which are separated by double atomic steps with a height of two silicon atoms, the steps serving to avoid the formation of antiphase domains (APDs) in the stack 101. In a second silicon substrate embodiment, the substrate 100 is (110) silicon. In certain (110) embodiments, the (110) substrate surface is offcut at an angle between 4° and 8° (e.g., 6°) to produce a surface having terraces separated by double atomic steps each with a height of two silicon atoms, each terrace including a surface having a (110) crystal plane.

In a third silicon substrate embodiment, the substrate 100 is (111) silicon (i.e., has a (111) top surface upon which the epitaxial layers are disposed). The (111) crystal orientation is favorable for III-N epitaxial growth because the lattice mismatch is considerably smaller (approximately 16%, whereas the (100) silicon orientation has a mismatch of approximately 42%). Generally, for (111) silicon embodiments, no offcut is necessary. Although the exemplary (100), (110), and (111) silicon embodiments entail a substrate consisting essentially of silicon (i.e., with only trace levels of impurities not detrimental to the function of the III-N and/or silicon CMOS devices), it is noted that other substrates with similarly mismatched lattice constants may also benefit from the epitaxial stack architectures described herein, including, but not limited to, germanium (Ge) substrates, which may be fused to silicon or in substantially pure Ge form.

In embodiments, the epitaxial semiconductor stack includes at least one III-N device layer. In the exemplary embodiment illustrated in FIG. 1A, the stack 101 may be referred to as a metamorphic epitaxial stack suitable for forming a HEMT, in which at least the channel layer 107 and the top barrier layer 109 represent device layers. The channel layer 107 is substantially monocrystalline; although referred to herein as "monocrystalline," those skilled in the art will appreciate that low levels of crystal defects may nevertheless be present as artifacts of the epitaxial growth process. Within the channel layer 107 is a crystalline arrangement of a first semiconductor material including one or more group III elements and nitrogen. Generally, the III-nitride semiconductor in the channel layer 107 should have relatively high carrier mobility; therefore, in embodiments, the channel layer 107 is a substantially undoped III-nitride material (i.e., having minimal impurity concentration) for minimal impurity scattering. In an exemplary embodiment, the channel layer 107 is GaN. However, the channel layer 107 may also be one or more ternary alloys of GaN, such as AlGaN or AlInN, or a quaternary alloy of GaN including at least one group III element and nitrogen, such as InxAlyGa1-x-yN.

In an exemplary GaN embodiment, the channel layer 107 has a thickness between 10 nm and 200 nm.
With the buffer described further elsewhere herein, the GaN channel layer 107 may be at the top end of this thickness range, and beyond, without generation of defects as the thickness increases, because the channel layer 107 is lattice matched to at least the buffer layer 106. The advantage of lattice matching the channel layer 107 to the buffer layer 106 also extends to other embodiments of epitaxial stacks suitable for light emitting diodes (LEDs) or lasers integrated onto silicon substrates, where the device layers may include many quantum well layers, p-type and n-type contact layers, and one or more distributed Bragg structures, and thus entail a substantial total device layer thickness.

A cap or barrier layer (top barrier layer 109) is disposed over the channel layer 107. Generally, any III-N material known in the art may be utilized for the barrier layer 109, as dependent on the material selected for the channel layer 107, such that the barrier layer 109 has a larger band gap than that of the channel layer 107. Preferably, the top barrier layer 109 is substantially monocrystalline (i.e., having a thickness below the critical thickness for the given composition, or lattice matched to the III-N material utilized in the channel layer 107). In an exemplary embodiment, the barrier layer 109 includes a second III-N material having the same crystallinity as that of the channel layer 107 to form a heterointerface. In a first exemplary embodiment where the channel layer 107 is GaN, the top barrier layer 109 is AlzGa1-zN, AlwIn1-wN, or AlN. One exemplary top barrier layer 109 has 18% In. In embodiments, the barrier layer 109 has only intrinsic impurity doping levels (e.g., i-AlwIn1-wN). Quaternary alloys including at least one group III element and nitrogen, such as InxAlyGa1-x-yN, are also possible. The barrier layer 109 may further include any multilayer stack of III-nitrides, for example, an AlwIn1-wN/AlN stack with the AlN layer of the stack adjacent to the channel layer 107 to serve as a mobility enhancement layer. Depending on the embodiment, the barrier layer 109 may have a thickness ranging between 1 nm and 20 nm.

In embodiments, the metamorphic epitaxial semiconductor stack includes an aluminum indium nitride ternary alloy (AlxIn1-xN) buffer layer disposed between the non-III-N substrate and the III-N device layer. Generally, for the AlxIn1-xN buffer layer, the aluminum mole percent is below 100 (i.e., x < 1), although the exact concentration may vary within different layers of the buffer. While the AlxIn1-xN buffer offers a number of advantages, the relatively low epitaxial growth temperature of AlxIn1-xN is particularly notable. Whether growth is by MBE, MOCVD, MOVPE, or the like, growth of AlxIn1-xN proceeds at around 300 °C lower than that of many alternative III-N materials. For example, AlxIn1-xN has a growth temperature generally between 750 and 800 °C, whereas AlGaN has a growth temperature of approximately 1050-1100 °C. The total thermal budget experienced during growth of the stack 101 is therefore advantageously reduced. Moreover, the thermal expansion coefficient of the AlxIn1-xN buffer layer is more closely matched to the thermal expansion coefficient of silicon.
The strain induced by thermal mismatch is generally characterized as σ = ΔT(α_substrate − α_epilayer), where ΔT represents the difference between the growth temperature and ambient room temperature, and α represents the thermal expansion coefficient of the substrate or of the grown epitaxial layer. The thermal expansion coefficient of AlxIn1-xN is lower than that of GaN (approximately 5.1x10-6 K-1) or of AlGaN (>4x10-6 K-1), decreasing further as the proportion of indium increases, so the net thermal mismatch between the buffer layer and the substrate 100 can be significantly reduced relative to non-AlxIn1-xN buffers. The presence of one or more AlxIn1-xN buffer layers of substantial thickness reduces the thermal stress applied by the silicon substrate 100 to overlying III-N device layers having greater thermal mismatch, such as the exemplary GaN channel layer 107. It has been found that such reduction of thermal stress reduces the defect density in the device layers and the formation of surface cracks in III-N epitaxial films deposited on silicon.

In exemplary embodiments where the buffer includes an AlxIn1-xN layer, the mole fractions within the buffer layer are such that the AlxIn1-xN layer is lattice matched to the epitaxial device layer disposed over the buffer. The AlxIn1-xN layer is thereby distinguished from buffer layers that induce strain in the device layer through a pseudomorphic mechanism (i.e., where the device layer is strained to take on a non-native lattice constant). In the exemplary embodiment illustrated in FIG. 1A, where the epitaxial stack 101 includes the GaN channel layer 107, the buffer includes an AlxIn1-xN layer 106 with x between 0.80 and 0.84, at which point the In percentage of approximately 18% provides adequate lattice matching to the GaN channel layer 107. As shown in FIG. 1A, the lattice-matched AlxIn1-xN layer 106 is disposed directly below the channel layer 107. In embodiments, the lattice-matched AlxIn1-xN layer 106 has only intrinsic impurity doping levels (e.g., i-AlxIn1-xN) and may be relatively thick to most effectively mitigate the thermal stress applied by the silicon substrate 100. Moreover, with the lattice-matched AlxIn1-xN layer 106 being lattice mismatched from the (100) silicon substrate 100, the layer 106 should be sufficiently thick to substantially relieve the resulting dislocations through lateral glide (e.g., toward a topographic feature, etc.). Thus, in embodiments, the lattice-matched AlxIn1-xN layer makes up between 50% and 99% of the total thickness of the buffer, with particular AlxIn1-xN embodiments between 300 nm and 2 μm, and at least 1 μm being preferred for most HEMT applications; greater thicknesses generally provide lower defect densities, but at additional expense/longer growth times. Thus, for a HEMT embodiment where the GaN channel layer 107 is between 10 nm and 200 nm, the AlxIn1-xN layer 106 is expected to be between 1.5 and 10 times the thickness of the channel layer.
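As a worked aside, both design conditions discussed above, lattice matching and thermal mismatch, can be checked numerically. The AlN/InN lattice constants and the silicon thermal expansion coefficient used below are typical literature values and are assumptions not taken from this disclosure; the GaN values are those given in the text.

% Vegard's-law estimate of the Al fraction x that lattice matches AlxIn1-xN to GaN
% (assumed a-axis lattice constants: a_AlN = 3.112 A, a_InN = 3.545 A, a_GaN = 3.189 A):
\begin{equation*}
x = \frac{a_{\mathrm{InN}} - a_{\mathrm{GaN}}}{a_{\mathrm{InN}} - a_{\mathrm{AlN}}}
  = \frac{3.545 - 3.189}{3.545 - 3.112} \approx 0.82,
\end{equation*}
i.e., an indium fraction $1 - x \approx 0.18$, consistent with the 18% In (and $0.80 \le x \le 0.84$) cited above.

% Thermal-mismatch strain in the form given in the text, sigma = DeltaT (alpha_substrate - alpha_epilayer),
% for GaN grown on silicon at ~1050 C (assumed alpha_Si ~ 2.6e-6 K^-1; alpha_GaN ~ 5.1e-6 K^-1 from the text):
\begin{equation*}
\sigma \propto \Delta T\,(\alpha_{\mathrm{Si}} - \alpha_{\mathrm{GaN}})
\approx 1025\ \mathrm{K} \times (2.6 - 5.1)\times 10^{-6}\ \mathrm{K}^{-1}
\approx -2.6\times 10^{-3},
\end{equation*}
a roughly 0.26% mismatch that a lower-CTE AlxIn1-xN buffer, grown some 300 °C cooler, proportionally reduces.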
FIG. 1B illustrates a cross section of a semiconductor stack 102 in which an exemplary HEMT may also be formed, in accordance with an embodiment. Generally, the stack 102 includes substantially the same epitaxial layers described for the stack 101, with like layers identified by like reference numbers, and is likewise disposed on the same (growth) substrate 100 previously described in the context of FIG. 1A. However, the stack 102 further includes a nucleation layer 104, and a transition layer 105 disposed between the nucleation layer 104 and the lattice-matched AlxIn1-xN layer 106. Functionally, the nucleation layer initiates the epitaxial growth of the III-N semiconductor materials making up the stack; although good results are possible with the lattice-matched AlxIn1-xN layer 106 of the stack 101 formed directly on the substrate 100, the addition of a nucleation layer may advantageously reduce the occurrence of APDs, and/or further reduce the defect density within the device layers (e.g., the channel layer 107), and/or reduce overall growth time, thermal budget, etc. As the first III-N material layer of the stack, the nucleation layer 104 may be relatively thin, for example, less than 100 nm (nanometers) along the z-dimension of FIG. 1B. The thickness of the nucleation layer 104 may depend, at least in part, on whether the substrate surface is offcut, with larger offcut angles associated with greater thicknesses. Generally, it is desirable for the mobility of both the group III and group V species of the nucleation layer 104 to be sufficiently high that the otherwise random motion of the nucleating species can be efficiently directed along the orientations set by the substrate terraces, so as to avoid the formation of APDs within the polar epitaxial material. In an exemplary embodiment, the nucleation layer 104 is aluminum nitride (AlN) grown to a thickness between 50 nm and 100 nm. An AlN embodiment has a lattice mismatch of approximately 43% with respect to the (100) silicon plane.

As shown in FIG. 1B, in addition to the lattice-matched AlxIn1-xN layer 106, the buffer further includes the transition layer 105 disposed over the nucleation layer 104. Although one or more intermediate layers may be inserted between the transition layer 105 and the nucleation layer 104, in the exemplary embodiment the transition layer 105 is disposed directly on, and in contact with, the nucleation layer 104, and is also in direct contact with the AlxIn1-xN layer 106. The transition layer 105 may be considered a lower buffer layer that functions to transition from the composition of the nucleation layer to the composition of the AlxIn1-xN layer 106 disposed over the transition layer 105. Generally, the transition layer 105 is grown at a growth temperature above that employed for the nucleation layer 104 (e.g., at the same temperature as the AlxIn1-xN layer 106). Moreover, owing to the presence of the polar nucleation layer 104, the flux rates during formation of the transition layer 105 may be higher than those for the nucleation layer 104 (or, for embodiments where the lattice-matched AlxIn1-xN layer 106 is grown directly on the substrate 100 as illustrated in FIG. 1A, higher than the flux rates for the initial growth of the lattice-matched AlxIn1-xN layer 106). For embodiments where the nucleation layer 104 is AlN, the transition layer 105 includes an AlyIn1-yN layer. Generally, the mole fraction y may be any value less than 1 and greater than the x of the lattice-matched AlxIn1-xN layer 106. Thus, for the exemplary embodiment where the channel layer 107 is GaN and the lattice-matched AlxIn1-xN layer 106 has an x of approximately 0.82, y in the transition layer 105 is greater than 0.82.
In other embodiments, the composition of the transition layer 105 is graded between the composition of the nucleation layer and that of the lattice-matched layer 106. For example, in one such AlyIn1-yN embodiment, y decreases from approximately 1 nearest the nucleation layer to approximately x nearest the lattice-matched AlxIn1-xN layer 106. The transition layer 105 is generally thinner than the layer 106 and may even be thinner than the nucleation layer 104. As an example, 50 nm is sufficient to transition from the AlN nucleation layer 104 to the 18% In AlxIn1-xN layer 106.

In other embodiments, the buffer between the III-N device layer and the non-III-N substrate includes a superlattice having a plurality of AlxIn1-xN layers and III-N layers. Notably, the AlxIn1-xN of the superlattice need not be the layer 106 having 18% In, and may have other compositions. In one embodiment, for example, the superlattice comprises AlInN and AlN layers. In another embodiment, a superlattice of the III-N device layer composition and AlxIn1-xN lattice matched to the device layer may be readily formed with intervening AlxIn1-xN layers, the intervening layers still functioning to relieve thermal mismatch between the device layer and the substrate.

FIG. 2A shows a cross section of a recessed-gate III-N transistor 200, in accordance with an embodiment. Generally, the transistor 200 is a majority carrier (electron), gate voltage controlled device (i.e., a FET). The transistor 200 is planar and disposed on the epitaxial semiconductor stack 102. In the exemplary embodiment, the transistor 200 has no junctions formed by impurity dopant gradients; as such, defects associated with dopant diffusion, scattering, and breakdown voltage degradation are avoided. Heavily doped (e.g., N+) contact layers 212 are disposed over the epitaxial semiconductor stack 102.

In the exemplary embodiment, the top barrier layer 109, of appropriate thickness, or a distinct material layer disposed between the top barrier layer 109 and the channel layer 107, serves as a charge-inducing layer to controllably supply carriers by inducing a sheet of charge, commonly referred to as a 2-D electron gas (e.g., 2DEG 211 in FIG. 2A). While embodiments may use the top barrier layer 109 as the sole supply of sheet charge, in other embodiments a distinctly composed charge-inducing layer enables the top barrier layer 109 to be thinned for threshold voltage tuning while a thin (e.g., >0.5 nm) wide-bandgap material is maintained on the surface of the channel layer 107 for reduced alloy scattering and high carrier mobility.

As a result of the different polarizations of the materials utilized in the channel layer 107 and the top barrier layer 109 (or an intervening charge-inducing layer), the charge density may be further modulated through selection of a work function metal as the gate electrode 220 and/or control of the semiconductor thickness along the gate length (x-dimension). As such, the performance characteristics of the transistor 200 depend on the materials selected for the barrier layer 109 and the gate electrode 220, and on the vertical distance between the gate electrode 220 and the channel layer 107 as defined by the recessed gate region 225.
In an exemplary embodiment, the channel layer 107 is GaN and the top barrier layer 109 is at least one of AlzGa1-zN, AlwIn1-wN, or AlN (e.g., with an AlN charge-inducing layer of a material distinct from that serving as the remainder of the top barrier layer 109).

In embodiments, the transistor 200 is operable in enhancement mode. With enhancement mode operation, the transistor 200 has a threshold voltage (Vt) greater than 0 V, which is important, for example, for power-efficient switching in the PMIC and efficient shutdown of the power amplifier within the RFIC during the idle state. In an embodiment, the gate electrode 220 includes a large work function metal to increase the Vt. The work function metal may be selected to obtain a desired threshold voltage (Vt) (e.g., greater than 0 V, etc.), with exemplary conductive gate materials including tungsten (W), aluminum (Al), titanium (Ti), tantalum (Ta), nickel (Ni), molybdenum (Mo), germanium (Ge), platinum (Pt), gold (Au), ruthenium (Ru), palladium (Pd), iridium (Ir), their alloys, and their silicides, carbides, nitrides, phosphides, and carbonitrides.

The transistor 200 is a single recessed gate structure in which the top barrier layer 109 includes only one recessed gate region 225. As such, the top barrier layer 109 has a first thickness between the gate electrode 220 and the channel layer 107, and a second thickness between the source or drain semiconductor 212 and the channel layer 107. Such thinning of the top barrier layer 109 facilitates enhancement mode because the spontaneous and piezoelectric polarization-induced charge in the channel layer disposed below the gate electrode 220 can be depleted, increasing the Vt. Depending on the embodiment, the first thickness may be from 0% to 50% of the second thickness (e.g., within a range of 0-2.5 nm). For embodiments without a work function gate metal, the top barrier layer 109 must be completely etched away to achieve Vt > 0 V. Where a distinct charge-inducing layer is present, the recessed gate region 225 may have a top barrier thickness of 0%, exposing the charge-inducing layer, which is then the sole supply of carriers within the recess. In the exemplary embodiment where the channel layer 107 is undoped, enhancement mode operation is achieved with both the work function metal gate electrode and the gate recess.

Beyond facilitating low defect density device layers, the lattice-matched AlxIn1-xN layer also functions as a more effective back barrier confining the 2DEG within the channel layer 107, because the material has relatively greater polarization than alternative materials such as AlGaN, thereby significantly improving the short channel performance of the device relative to alternative device stacks lacking a lattice-matched AlxIn1-xN buffer layer. More specifically, with lattice-matched AlxIn1-xN, the subthreshold slope and drain induced barrier lowering (DIBL) are reduced relative to AlGaN. Indeed, for an exemplary HEMT having a channel length (Lg) of 20 nm with symmetric source and drain (LGD = LGS = 40 nm), at a VDS of 5 V and a VGS of -2 V, a drain current of 1e-5 A/mm is expected with the AlInN back barrier, versus a drain current approximately three orders of magnitude greater with an AlGaN back barrier.

FIG. 2B compares energy band diagrams across regions of the transistor 200 for a lattice-matched AlxIn1-xN bottom barrier and an AlyGa1-yN bottom barrier (where y is 0.08-0.10), in accordance with an embodiment of the present invention.
As shown in the region highlighted by the dashed box, the large band gap (approximately 4.9 eV) of AlxIn1-xN makes it a more insulating buffer and reduces parallel conduction below the channel layer 107, which is especially beneficial for high voltage devices. It is further noted that, absent the metamorphic AlxIn1-xN buffer (e.g., where an AlGaN buffer is instead utilized below the GaN channel layer), incorporating an AlxIn1-xN bottom barrier (given its near lattice match to GaN) would further reduce the allowable thickness of the GaN channel layer, because the accumulated thickness of such a bottom barrier and the channel layer would be limited to a given critical thickness.

Returning to FIG. 2A, a source 235 and a drain 245 are disposed on opposite sides of the gate electrode 220, each including an impurity-doped (e.g., N+) semiconductor region 212 electrically coupled to an ohmic contact metal 235A, 245A. The impurity-doped semiconductor regions 212 may be of any low band gap III-N material suitable for forming low resistance contacts, such as InGaN or InN, or may simply be n-type GaN.

A dielectric layer 230 is disposed between the top barrier layer 109 and the gate electrode 220. The dielectric layer 230 electrically insulates the gate electrode 220 from the semiconductor stack 102 and also isolates the gate electrode 220 from the source and drain 235, 245. In the embodiment illustrated in FIG. 2A, the dielectric layer 230 serves both as a gate dielectric and as a spacer dielectric, laterally separating the gate electrode 220 from the source and drain 235, 245. In the exemplary embodiment, the dielectric layer 230 is a spacer structure enabling self-alignment, whereby the source-to-drain spacing may be minimized to <100 nm to reduce the extrinsic resistance (Rext) of the transistor, leading to higher transconductance (Gm), or gain, and in turn higher Ft. The dielectric spacer also enables scaling of the transistor channel length (Lg) to dimensions smaller than lithographically definable feature sizes. Dielectric materials such as silicon nitride (SixN), silicon oxide (SiO2), alumina (Al2O3), and high-k dielectrics, for example, Gd2O3 and HfO2, high-k silicates such as HfOSiO, TaSiO, and AlSiO, and high-k oxynitrides such as HfON, SiON, AlON, ZrSiON, HfSiON, or group III-ON, are suitable for the dielectric layer 230. In embodiments, the dielectric layer 230 serves to passivate the interface between the gate electrode 220 and the top surface of the stack 102 to preserve high channel mobility and reduce gate leakage current. In one embodiment, high quality passivation is achieved with an atomic layer deposition (ALD) dielectric layer 230.

Although not depicted, other HEMT embodiments are double-recessed gate III-N transistors including the same semiconductor stack 102, gate electrode 220, and source and drain 235, 245 described for the transistor 200.
However, unlike the single recess 225 shown in FIG. 2A, the double-recessed HEMT embodiment includes the recess 225 and a further recessed region, such that the top barrier layer 109 has three thicknesses: a first thickness between the channel layer and the source and drain 235, 245; a second thickness between the channel layer 107 and the dielectric layer 230 (below the gate electrode 220); and a third thickness between the channel layer 107 and the spacer dielectric, the spacer dielectric being the dielectric laterally separating the gate electrode from the source and drain 235, 245. The third thickness is generally between the first and second thicknesses. An advantage of the double-recess embodiment over the transistor 200 is that the 2DEG charge density below the spacer dielectric is maintained while the region disposed below the gate electrode 220 is depleted, thereby preserving low access resistance to the channel region below the gate electrode 220.

While the transistor 200 is a planar device, in other embodiments non-planar III-N transistors are formed in the stack 101 or 102. Although not depicted, for non-planar transistor embodiments, at least one of the semiconductor layers of the epitaxial semiconductor stack (e.g., 101 or 102) is a non-planar semiconductor body having opposite sidewalls over which the gate dielectric, the gate electrode, and/or the non-planar source and drain are formed. The non-planar transistor may include any of the functional features described for the exemplary planar transistor 200, with the materials and thicknesses as previously described. Depending on the crystal orientation of the III-nitride stacks 101, 102, the 2DEG may be proximate to a top surface or a sidewall of the non-planar semiconductor body. Notably, the GaN and other III-nitrides described herein form in the wurtzite structure, which is noncentrosymmetric, meaning the crystal lacks inversion symmetry; more particularly, the {0001} planes are not equivalent. Thus, in one non-planar embodiment, the wurtzite crystal is oriented with the (0001) plane forming the top surface of the crystal and interfacing with the lattice-matched AlxIn1-xN layer 106. For such an embodiment, the top barrier layer 109 and the AlxIn1-xN layer 106 function as a charge-inducing layer and a back barrier, respectively.

Thus, in an alternative non-planar HEMT embodiment where the channel layer 107 is formed into a non-planar body, semiconductor layers above the epitaxial semiconductor stack 101 or 102 may be grown on the top and sidewall surfaces. For such embodiments, the crystal orientation may be as described above, with the (100) plane forming the top surface of the crystal and interfacing with the lattice-matched AlxIn1-xN layer 106. For such an embodiment, a barrier layer formed along the sidewalls of the non-planar channel layer 107 induces a spontaneous polarization field, PSP, within the non-planar body, directed away from a first sidewall and toward a second sidewall. Thus, the polarization of the non-planar III-N transistor may extend across the width or thickness of the non-planar semiconductor body of the non-planar HEMT embodiment.

FIG. 3 is a functional block diagram of an SoC implementation of a mobile computing platform, in accordance with an embodiment of the present invention. The mobile computing platform 700 may be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission.
For example, the mobile computing platform 700 may be any of a tablet, a smartphone, a laptop computer, etc., and includes a display screen 705, which in the exemplary embodiment is a touchscreen (e.g., capacitive, inductive, resistive, etc.) permitting the receipt of user input, the SoC 710, and a battery 713. As illustrated, the greater the level of integration of the SoC 710, the more of the form factor within the mobile computing platform 700 that may be occupied by the battery 713 for longest operative lifetimes between charging, or occupied by memory (not illustrated), such as a solid state drive, for greatest functionality.

Depending on its applications, the mobile computing platform 700 may include other components including, but not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a cryptographic processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The SoC 710 is further illustrated in the expanded view 721. Depending on the embodiment, the SoC 710 includes a portion of the substrate 100 (i.e., a chip) upon which two or more of a power management integrated circuit (PMIC) 715, an RF integrated circuit (RFIC) 725 including an RF transmitter and/or receiver, a controller 711 thereof, and one or more central processor cores 730, 731 are fabricated. The RFIC 725 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols designated as 3G, 4G, 5G, and beyond. The RFIC 725 may include a plurality of communication chips. For instance, a first communication chip may be dedicated to shorter range wireless communications, such as Wi-Fi and Bluetooth, and a second communication chip may be dedicated to longer range wireless communications, such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

As will be appreciated by those skilled in the art, of these functionally distinct circuit modules, CMOS transistors are typically employed exclusively except in the PMIC 715 and RFIC 725. In embodiments of the present invention, the PMIC 715 and RFIC 725 employ one or more of the III-nitride transistors described herein (e.g., the III-nitride transistor 200) utilizing an embodiment of the epitaxial stacks described herein (e.g., the stack 101 or 102). In further embodiments, the PMIC 715 and RFIC 725 employing the III-nitride transistors described herein are integrated with one or more of the controller 711 and the processor cores 730, 731, provided in silicon CMOS technology monolithically integrated with the PMIC 715 and/or RFIC 725 onto the (silicon) substrate 100. It should be recognized that within the PMIC 715 and/or RFIC 725, the high voltage, high frequency capable III-nitride transistors described herein need not be utilized to the exclusion of CMOS;
rather, silicon CMOS may further be included in each of the PMIC 715 and RFIC 725.

The III-nitride transistors described herein may be particularly utilized where a high voltage swing is present (e.g., 7-10 V battery power regulation, DC-to-DC conversion, etc. within the PMIC 715). As illustrated, in the exemplary embodiment the PMIC 715 has an input coupled to the battery 713 and has an output providing a current supply to all the other functional modules in the SoC 710. In a further embodiment, where additional ICs are provided within the mobile computing platform 700 but off the SoC 710, the PMIC 715 output further provides a current supply to all of those additional ICs off the SoC 710. Particular embodiments of the III-nitride transistors described herein permit the PMIC to operate at higher frequencies (e.g., 50x those possible in LDMOS implementations), owing to the reduced on-resistance available (e.g., via symmetric Lgd/Lgs) and the low access resistance (e.g., the 2DEG 211 present in the spacer region of the channel layer 107). In certain such embodiments, inductive elements within the PMIC (e.g., buck-boost converters, etc.) may be scaled to much smaller dimensions. As such inductive elements account for 60-70% of the chip area of a PMIC, embodiments of the PMIC implemented with the III-nitride transistors described herein offer significant size reductions over other PMIC architectures.

As further illustrated, in the exemplary embodiment the RFIC 725 has an output coupled to an antenna, and may further have an input coupled to a communication module on the SoC 710, such as an RF analog and digital baseband module (not depicted). Alternatively, such communication modules may be provided on an IC off-chip from the SoC 710 and coupled into the SoC 710 for transmission. Depending on the III-nitride materials utilized, the III-nitride transistors described herein (e.g., the transistor 200) may further provide the large power added efficiency (PAE) required of a power amplifier transistor having an Ft of at least ten times the carrier frequency (e.g., a 1.9 GHz carrier in an RFIC 725 designed for 3G or GSM cellular communication).

FIG. 4 is a flow chart illustrating a method 400 of fabricating the high voltage III-nitride transistors described herein, in accordance with embodiments. While the method 400 highlights certain operations, each of these operations may entail many more process sequences.

Beginning at operation 401, a stack of monocrystalline semiconductor materials is grown using any standard metal organic chemical vapor deposition (MOCVD), molecular beam epitaxy (MBE), or metal organic vapor phase epitaxy (MOVPE) growth tool/technique with the standard precursors, temperatures, etc. for a given film. In one embodiment, the entire semiconductor stack 101 or 102 (FIGS. 1A, 1B) is grown using such techniques. For example, to form the stack 102, the AlN nucleation layer 104 is grown on a (100) surface of a silicon substrate. The growth temperature is then changed to 750-800 °C, with In introduced in increasing amounts relative to Al to form the graded AlyIn1-yN transition layer 105 until an In composition of 18% is reached, at which point the lattice-matched AlxIn1-xN layer 106 is grown to, for example, a thickness within the ranges described elsewhere herein. Thereafter, the growth temperature is ramped up approximately 300 °C above the AlxIn1-xN growth temperature, to, for example, 1050 °C, with the precursors appropriately changed for growth of the channel layer 107, such as GaN.
The top barrier layer 109 is then formed, at the higher temperature where composed of AlzGa1-zN, and/or with the growth temperature reduced to form an AlN or AlwIn1-wN layer. In one embodiment, an n-type doped source/drain layer may then be grown in-situ as a further device layer thereover; in an alternate embodiment (e.g., as shown by operation 410 in FIG. 4, denoted as optional by the dashed line), the source/drain regions are formed by a regrowth process later in the fabrication.

At operation 403, at least a portion of the epitaxial semiconductor stack is etched using any plasma or wet chemical etch technique known in the art to be applicable to the particular materials epitaxially grown as part of the semiconductor stack 101 or 102. With further reference to FIG. 2A, in certain embodiments operation 403 entails etching at least a portion of the top barrier layer 109 to form the recessed region 225. For embodiments where the semiconductor stack includes a source/drain layer disposed over the top barrier layer 109, that source/drain layer is etched at operation 403. For embodiments where the source/drain is subsequently formed by regrowth, the etch process at operation 403 entails etching only portions of the top barrier layer 109. For non-planar transistor embodiments (not depicted), at operation 403 the epitaxial stack (e.g., 101 or 102) is etched into semiconductor fin structures.

Continuing with operation 405, a sacrificial gate is formed in the recessed region. A gate replacement process permits epitaxial regrowth of the source/drain regions (where desired), enables a gate electrode with a work function metal to be formed last (where desired), enables the double-recessed gate structure, and so on. In the exemplary embodiment, the sacrificial gate includes a CVD polysilicon, or a silicon nitride/oxynitride, etc. The sacrificial gate may be laterally separated from surrounding films (e.g., field dielectric, etched layers of the epitaxial stack) by a spacer structure. In certain embodiments, with the sacrificial gate and spacer structure serving as a mandrel protecting the channel region of the device stack, the source and drain regions (e.g., 212 in FIG. 2A) are regrown at operation 410, for example, on the top barrier layer 109. In one embodiment, a compositionally graded ternary alloy of GaN is epitaxially grown on the portions of the epitaxial stack not protected by the sacrificial gate. In an alternate embodiment of the method 400 of FIG. 4, the epitaxial stack as-grown includes the source/drain regions, and for such embodiments operation 410 is omitted.

At operation 415, the sacrificial gate (stack) is removed to expose the epitaxial stack (e.g., 101 or 102). For double-recessed gate embodiments, the top barrier layer 109 is etched a second time to form a second recessed region narrower than the recess 225. For certain single-recess embodiments, it may be advantageous at operation 415 to perform the first etch of at least a portion of the top barrier layer 109 to form the recess 225 after removal of the sacrificial gate structure, rather than prior to formation of the sacrificial gate. With the device layers of the epitaxial stack so prepared, a gate dielectric layer is formed in the first or second recessed region.
In embodiments, the gate dielectric layer is formed by depositing any of the dielectric materials described for the dielectric layer 230 (e.g., a high-k dielectric material) using an ALD technique known to be applicable to the particular dielectric material. Thereafter, a work function metal (e.g., any of those described in the context of the transistor 200) is deposited on the gate dielectric layer and planarized to form the gate electrode 220. At operation 420, the ohmic contacts 235A, 245A and interconnect metallization (not depicted in FIG. 2A) are then formed, for example, with conventional techniques, to complete the device.

In further embodiments where CMOS transistors are also formed in the same silicon substrate 100, one or more of the operations of the method 400 may be performed concurrently or selectively (e.g., with conventional masking techniques) on the CMOS regions and the HEMT regions of the substrate.

Thus, embodiments of semiconductor material stacks have been described. A semiconductor material stack includes a silicon substrate; a III-N device layer disposed over the silicon substrate; and a buffer disposed between the silicon substrate and the III-N device layer, wherein the buffer includes an AlxIn1-xN layer, where x is less than 1. In further embodiments, the AlxIn1-xN layer is lattice matched to the III-N device layer and in direct contact with the III-N device layer. In further embodiments, the III-N device layer is GaN, and the top barrier includes at least one of AlzGa1-zN, AlwIn1-wN, or AlN, wherein x is between 0.80 and 0.84, and wherein the silicon substrate has a (100), (110), or (111) crystal orientation. In further embodiments, the silicon substrate has a (100) orientation offcut at an angle between 4 and 8 degrees toward the [110] direction. In further embodiments, the AlxIn1-xN layer has a thickness between 1.5 and 10 times the thickness of the III-N device layer. In further embodiments, the buffer includes a superlattice having a plurality of AlxIn1-xN layers and III-N layers. In further embodiments, the buffer further includes an AlN nucleation layer disposed between the AlxIn1-xN layer and the silicon substrate. In further embodiments, the buffer further includes an AlyIn1-yN transition layer disposed between the AlN nucleation layer and the AlxIn1-xN layer, wherein y > x. In further embodiments, y is graded from approximately 1 nearest the nucleation layer to approximately x nearest the lattice-matched AlxIn1-xN layer. In further embodiments, the AlxIn1-xN layer makes up between 50% and 99% of the total thickness of the buffer.

In embodiments, a high electron mobility transistor (HEMT) includes: a gate electrode disposed between a source contact and a drain contact; a gate dielectric disposed below the gate electrode; a III-N channel layer disposed below the gate dielectric; a bottom barrier disposed below the channel layer, wherein the bottom barrier includes an AlxIn1-xN layer lattice matched to the channel layer; and a silicon substrate disposed below the bottom barrier, wherein the AlxIn1-xN layer is disposed over a (100) or (111) crystal plane of the substrate.
In further embodiments, the HEMT includes a top barrier layer having a first thickness between the gate electrode and the channel layer, and a second, greater thickness between the channel layer and the source and drain contacts disposed on opposite sides of the gate electrode, wherein the top barrier layer includes at least one of AlzGa1-zN, AlwIn1-wN, or AlN. In further embodiments, the III-N channel layer comprises a GaN layer having a thickness between 10 nm and 200 nm, wherein the AlxIn1-xN layer has a thickness between 400 nm and 2 μm, and wherein x is between 0.80 and 0.84; an AlN nucleation layer is disposed between the AlxIn1-xN layer and the silicon substrate; and the AlxIn1-xN layer is disposed on an AlyIn1-yN transition layer disposed over the AlN nucleation layer, wherein y is graded from approximately 1 nearest the nucleation layer to approximately x nearest the AlxIn1-xN layer. In further embodiments, the channel layer is undoped in the region disposed below the gate electrode, and the top barrier layer of the first thickness induces charge to form a two-dimensional electron gas (2DEG) within the channel layer only when the gate electrode is at a threshold voltage (Vt) greater than 0 V.

In embodiments, a mobile computing device includes a touchscreen; a battery; an antenna; a DC-to-DC converter coupled to the battery; and a wireless transmitter further including a power amplifier (PA), wherein at least one of the DC-to-DC converter and the PA includes a HEMT as described herein. In embodiments, the DC-to-DC converter includes a first HEMT as described herein, and the PA employs a second HEMT as described herein.

In embodiments, a method of forming a high electron mobility transistor includes forming a sacrificial gate structure over a stack of semiconductor material layers disposed on a crystalline silicon substrate, the stack including a III-N semiconductor channel layer disposed on a lattice-matched AlxIn1-xN layer having a thickness greater than that of the channel layer; forming source and drain regions on opposite sides of the sacrificial gate structure; removing the sacrificial gate structure to expose a surface of the epitaxially grown stack; forming a gate dielectric layer on the exposed surface of the epitaxially grown stack with an atomic layer deposition process; and forming a gate electrode on the gate dielectric layer.

In embodiments, the method further includes forming the stack of semiconductor material layers by epitaxially growing a graded AlyIn1-yN transition layer over an AlN nucleation layer disposed on the substrate; epitaxially growing an AlxIn1-xN layer over the AlyIn1-yN transition layer, wherein y is graded from approximately 1 nearest the nucleation layer to approximately x nearest the AlxIn1-xN layer; epitaxially growing a III-N semiconductor channel layer consisting essentially of GaN over the AlxIn1-xN layer; and epitaxially growing a top barrier layer comprising a ternary III-nitride over the channel layer.

In embodiments, the graded AlyIn1-yN transition layer is grown directly on the AlN nucleation layer to a thickness between 50 nm and 100 nm, and the AlxIn1-xN layer is grown directly on the AlyIn1-yN transition layer
to a thickness between 300 nm and 2 μm, and the channel layer is grown directly on the AlxIn1-xN layer to a thickness between 10 nm and 200 nm.

In embodiments, the stack of semiconductor material layers is disposed on a (100) surface of the substrate offcut at an angle between 4° and 8° toward the [110] direction, and the ternary group III-nitride is selected from the group consisting of AlxGa1-xN, AlwIn1-wN, and InzGa1-zN.

It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, while the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is not required (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.). Furthermore, many other embodiments will be apparent to those of skill in the art upon reading and understanding the above description. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but may be practiced with modification and alteration within the spirit and scope of the appended claims. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. |
Various systems and methods for establishing security profiles for Internet of Things (IoT) devices and trusted platforms, including in OCF specification device deployments, are discussed herein. In an example, a technique for onboarding a subject device for use with a security profile includes: receiving a request to perform an owner transfer method of a device associated with a device platform; verifying attestation evidence associated with the subject device, the attestation evidence being signed by a certificate produced using a manufacturer-embedded key, with the key provided from a trusted hardware component of the device platform; and performing device provisioning of the subject device, based on the attestation evidence, such that the device provisioning causes the subject device to use a security profile tied to manufacturer-embedded keys.
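To make the onboarding flow summarized above concrete, the following is a minimal, non-normative Python sketch of an onboarding tool's decision logic. It is not taken from the OCF specification: the class and field names (OnboardingTool, AttestationEvidence, the "mfg" profile label, and the verify_signature helper) are illustrative assumptions, and a real implementation would use X.509 certificate-chain validation rather than the hash-based stand-in shown here.

import hashlib
from dataclasses import dataclass, field

@dataclass
class AttestationEvidence:
    """Evidence signed with a certificate derived from a manufacturer-embedded key."""
    payload: bytes      # e.g., measurements from a trusted boot sequence
    signature: str      # stand-in for a real cryptographic signature
    trust_anchor: str   # identifier of the anchor terminating the cert chain

def verify_signature(evidence: AttestationEvidence, known_anchors: set) -> bool:
    # Stand-in check: a real onboarding tool would validate an X.509 chain up to
    # a manufacturer trust anchor and verify the signature over the payload.
    expected = hashlib.sha256(evidence.payload + evidence.trust_anchor.encode()).hexdigest()
    return evidence.signature == expected and evidence.trust_anchor in known_anchors

@dataclass
class OnboardingTool:
    known_anchors: set
    owned_devices: list = field(default_factory=list)  # list of owned/trusted devices

    def onboard(self, device_id: str, evidence: AttestationEvidence) -> str:
        """Owner transfer: verify attestation, then provision a security profile."""
        if not verify_signature(evidence, self.known_anchors):
            raise PermissionError(f"attestation for {device_id} failed verification")
        self.owned_devices.append(device_id)
        # Provisioning step: update the device's profile resource so it is
        # transitioned to the profile tied to manufacturer-embedded keys.
        return "mfg"  # label for the manufacturer-embedded-key security profile

# Example: simulate a device platform producing evidence, then onboarding it.
anchor = "vendor-root-ca"
payload = b"trusted-boot-measurements"
evidence = AttestationEvidence(
    payload=payload,
    signature=hashlib.sha256(payload + anchor.encode()).hexdigest(),
    trust_anchor=anchor,
)
tool = OnboardingTool(known_anchors={anchor})
print(tool.onboard("light-bulb-01", evidence))  # -> mfg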
CLAIMS
What is claimed is:
1. A device comprising: communications circuitry; processing circuitry; and a memory device including instructions embodied thereon, wherein the instructions, which when executed by the processing circuitry, configure the processing circuitry to perform operations comprising: receiving a request to perform an owner transfer method of a subject device, the subject device being associated with a device platform; verifying attestation evidence associated with the subject device, the attestation evidence provided by the device platform, wherein the attestation evidence is signed by a certificate produced using a manufacturer-embedded key, wherein the manufacturer-embedded key is provided from a trusted hardware component of the device platform; and performing device provisioning of the subject device, based on the attestation evidence, wherein the device provisioning causes the subject device to use a security profile tied to manufacturer-embedded keys.
2. The device of claim 1, the operations further comprising: maintaining a list of owned and trusted devices of the device platform, the list including the subject device.
3. The device of claim 1, wherein performing device provisioning includes further operations comprising: provisioning the subject device with local credentials from a local certificate authority, the local certificate authority operated by the device, wherein the local credentials indicate a verified use of the security profile tied to manufacturer-embedded keys.
4. The device of claim 1, wherein performing device provisioning includes further operations comprising: updating a resource of the subject device to a value associated with the security profile, wherein the subject device is transitioned to use of the security profile upon completion of the device provisioning.
5. The device of claim 1, wherein the manufacturer-embedded key is associated with a trust anchor, wherein the trust anchor is managed through use of a trust anchor management protocol.
6. The device of claim 1, wherein the manufacturer-embedded key is linked to a certificate chain, wherein the certificate chain is terminated by a trust anchor, and wherein the attestation evidence includes the trust anchor.
7. The device of claim 1, wherein the manufacturer-embedded key is associated with a platform attribute credential of the device platform, and wherein the platform attribute credential includes platform information that is publicly verifiable at a third party data source.
8. The device of claim 1, the operations further comprising: querying a blockchain to confirm a trust anchor linked to the manufacturer-embedded key.
9. The device of claim 8, the operations further comprising: querying the blockchain to search for a trust anchor revocation for the trust anchor linked to the manufacturer-embedded key; and causing the subject device to use another security profile based on identifying the trust anchor revocation.
10. The device of claim 1, wherein the subject device conducts a trusted boot sequence of device software for operation on the subject device, and wherein the attestation evidence includes verification of the trusted boot sequence by the device platform.
11. The device of claim 1, wherein the device is an onboarding tool, and wherein the device and the device platform are configured according to a specification of an Open Connectivity Foundation (OCF) standards family.
12.
The device of claim 1, wherein the trusted hardware component and the device are configured according to a specification of a Trusted Computing Group (TCG) standards family.
13. A method for onboarding a subject device for use with a security profile, using operations performed by an onboarding tool device comprising: receiving a request to perform an owner transfer method of the subject device, the subject device being associated with a device platform; verifying attestation evidence associated with the subject device, the attestation evidence provided by the device platform, wherein the attestation evidence is signed by a certificate produced using a manufacturer-embedded key, and wherein the manufacturer-embedded key is provided from a trusted hardware component of the device platform; and performing device provisioning of the subject device, based on the attestation evidence, wherein the device provisioning causes the subject device to use a security profile tied to manufacturer-embedded keys.
14. The method of claim 13, further comprising: maintaining a list of owned and trusted devices of the device platform, the list including the subject device.
15. The method of claim 13, wherein performing device provisioning includes further operations comprising: provisioning the subject device with local credentials from a local certificate authority, the local certificate authority operated by the device, wherein the local credentials indicate a verified use of the security profile tied to manufacturer-embedded keys.
16. The method of claim 13, wherein performing device provisioning includes further operations comprising: updating a resource of the subject device to a value associated with the security profile, wherein the subject device is transitioned to use of the security profile upon completion of the device provisioning.
17. The method of claim 13, wherein the manufacturer-embedded key is associated with a trust anchor, wherein the trust anchor is managed through use of a trust anchor management protocol.
18. The method of claim 13, wherein the manufacturer-embedded key is linked to a certificate chain, wherein the certificate chain is terminated by a trust anchor, and wherein the attestation evidence includes the trust anchor.
19. The method of claim 13, wherein the manufacturer-embedded key is associated with a platform attribute credential of the device platform, and wherein the platform attribute credential includes platform information that is publicly verifiable at a third party data source.
20. The method of claim 13, the operations further comprising: querying a blockchain to confirm a trust anchor linked to the manufacturer-embedded key.
21. The method of claim 20, the operations further comprising: querying the blockchain to search for a trust anchor revocation for the trust anchor linked to the manufacturer-embedded key; and causing the subject device to use another security profile based on identifying the trust anchor revocation.
22. The method of claim 13, wherein the subject device conducts a trusted boot sequence of device software for operation on the subject device, and wherein the attestation evidence includes verification of the trusted boot sequence by the device platform.
23. The method of claim 13, wherein the onboarding tool device and the device platform operate according to a specification of an Open Connectivity Foundation (OCF) standards family.
24.
24. The method of claim 23, wherein the trusted hardware component and the device platform are configured according to a specification of a Trusted Computing Group (TCG) standards family.
25. A machine-readable storage medium including instructions, wherein the instructions, when executed by a processing circuitry of a computing device, cause the processing circuitry to perform operations of any of claims 13 to 24.
26. An apparatus, comprising: means for receiving a request to perform an owner transfer method of a subject device, the subject device being associated with a device platform; means for verifying attestation evidence associated with the subject device, the attestation evidence provided by the device platform, wherein the attestation evidence is signed by a certificate produced using a manufacturer-embedded key, and wherein the manufacturer-embedded key is provided from a trusted hardware component of the device platform; and means for performing device provisioning of the subject device, based on the attestation evidence, wherein the device provisioning causes the subject device to use a security profile tied to manufacturer-embedded keys.
27. The apparatus of claim 26, further comprising: means for maintaining a list of owned and trusted devices of the device platform, the list including the subject device.
28. The apparatus of claim 26, further comprising: means for provisioning the subject device with local credentials from a local certificate authority, the local certificate authority operated by the device, wherein the local credentials indicate a verified use of the security profile tied to manufacturer-embedded keys.
29. The apparatus of claim 26, further comprising: means for updating a resource of the subject device to a value associated with the security profile, wherein the subject device is transitioned to use of the security profile upon completion of the device provisioning.
30. The apparatus of claim 26, wherein the manufacturer-embedded key is associated with a trust anchor, wherein the trust anchor is managed through use of a trust anchor management protocol.
31. The apparatus of claim 26, wherein the manufacturer-embedded key is linked to a certificate chain, wherein the certificate chain is terminated by a trust anchor, and wherein the attestation evidence includes the trust anchor.
32. The apparatus of claim 26, wherein the manufacturer-embedded key is associated with a platform attribute credential of the device platform, and wherein the platform attribute credential includes platform information that is publicly verifiable at a third party data source.
33. The apparatus of claim 26, further comprising: means for querying a blockchain to confirm a trust anchor linked to the manufacturer-embedded key.
34. The apparatus of claim 33, further comprising: means for querying the blockchain to search for a trust anchor revocation for the trust anchor linked to the manufacturer-embedded key; and means for causing the subject device to use another security profile based on the trust anchor revocation.
35. The apparatus of claim 26, wherein the subject device conducts a trusted boot sequence of device software for operation on the subject device, and wherein the attestation evidence includes verification of the trusted boot sequence by the device platform.
36. The apparatus of claim 26, wherein the apparatus and the device platform operate according to a specification of an Open Connectivity Foundation (OCF) standards family.
37. The apparatus of claim 26, wherein the trusted hardware component and the device platform are configured according to a specification of a Trusted Computing Group (TCG) standards family. |
SECURITY PROFILES FOR OCF DEVICES AND TRUSTED PLATFORMS
PRIORITY CLAIM
[0001] This application claims the benefit of priority to United States Provisional Patent Application Serial No. 62/621,376, filed January 24, 2018 and titled “SECURITY PROFILES FOR OCF DEVICES AND TRUSTED PLATFORMS”, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] Embodiments described herein generally relate to data communications and interconnected device networks, and in particular, to techniques for establishing connections and implementing functionality among internet of things (IoT) devices and device networks.
BACKGROUND
[0003] IoT devices are physical or virtualized objects that may communicate on a network, and may include sensors, actuators, and other input/output components, such as to collect data or perform actions from a real world environment. For example, IoT devices may include low-powered devices that are embedded or attached to everyday things, such as buildings, vehicles, packages, etc., to provide an additional level of artificial sensory perception of those things. Recently, IoT devices have become more popular and thus applications using these devices have proliferated.
[0004] Various standards have been proposed to more effectively interconnect and operate IoT devices and IoT network use cases. These include the specialization of communication standards distributed by groups such as the Institute of Electrical and Electronics Engineers (IEEE), and the specialization of application interaction architecture and configuration standards distributed by groups such as the Open Connectivity Foundation (OCF).
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which:
[0006] FIG. 1 illustrates a domain topology for respective internet-of-things (IoT) networks coupled through links to respective gateways, according to an example;
[0007] FIG. 2 illustrates a cloud computing network in communication with a mesh network of IoT devices operating as a fog device at the edge of the cloud computing network, according to an example;
[0008] FIG. 3 illustrates a configuration and operation of OCF devices hosted on a Trusted Platform, according to an example;
[0009] FIG. 4 illustrates a flowchart of a procedure for device conformance, for use of devices in a Trusted Platform, according to an example;
[0010] FIG. 5 illustrates a flowchart of a procedure for platform conformance, for use of a Trusted Platform, according to an example;
[0011] FIG. 6 illustrates a flowchart of a trusted boot sequence, for devices in a Trusted Platform, according to an example;
[0012] FIG. 7 illustrates a flowchart of a procedure for trusted or attested device onboarding, for devices in a Trusted Platform, according to an example;
[0013] FIG. 8 illustrates a flowchart of device-to-device operations, for devices in a Trusted Platform, according to an example;
[0014] FIG. 9 illustrates a flowchart of a method for onboarding a subject device for use with a security profile, according to an example;
[0015] FIG. 10 illustrates a block diagram of a network illustrating communications among a number of IoT devices, according to an example; and
[0016] FIG. 11 illustrates a block diagram for an example IoT processing system architecture upon which any one or more of the techniques (e.g., operations, processes, methods, and methodologies) discussed herein may be performed.
DETAILED DESCRIPTION
[0017] In the following description, methods, configurations, and related apparatuses are disclosed for the use of a trusted platform to support security profiles among respective IoT devices. In an example, applicable to an OCF device configuration that involves public key infrastructure (PKI) components, such a platform may enable deployment of OCF devices and services that utilize certificates generated by a common root certificate authority (CA) (e.g., a CA owned or operated by OCF or another trusted organization), while ensuring validity of the certificates through the use of a security profile and associated attestation.
[0018] The use and configuration of a trusted platform, as discussed herein, may involve aspects of Trusted Computing Group (TCG) technology and the incorporation of TCG features into an OCF or similar IoT network deployment. The following techniques enable an OCF device to be bound to a TCG-compliant platform, such as by using TCG platform certificates to link to OCF trust assertions. Further, the use of this trusted platform may enable trust to be established directly among respective OCF devices, and compliance testing with published electronic results.
[0019] The following techniques may be used, for example, to enable a standards or overseeing organization to sign a document capturing the status of conformance tests. The organization then may identify which trust anchor the organization is using and communicate this information to relevant onboarding tools (OBTs) and services.
[0020] The following techniques also enable an organization to publish which CA the organization has selected through use of a public blockchain. OBTs used by the network deployment may query the blockchain to reliably find the most current published trust anchor. If any of the keys found in the CA hierarchy between the organization and their chosen root have been compromised, a reputable organization may publish a trust anchor revocation message to the blockchain, thereby invalidating the previously committed block containing the trust anchor.
[0021] Additionally, further extensions of the present techniques may utilize TCG specifications for Public Key Infrastructure (PKI) certificates, to link platform trust assertions to the device quality assertions maintained by the organization (e.g., by the OCF organization). These and other technical benefits may be achieved within OCF and like IoT network deployments.
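As an illustrative aid for the blockchain-based trust anchor publication described above, the following minimal Python sketch shows how an OBT might derive the most current non-revoked trust anchor by walking committed blocks from newest to oldest. The record layout and field names are assumptions for illustration only; no particular blockchain client or wire format is implied.

from typing import Optional

def current_trust_anchor(chain: list, org_id: str) -> Optional[str]:
    """Walk committed blocks newest-to-oldest; a 'revoke' record invalidates
    the matching previously committed 'publish' record for the organization."""
    revoked = set()
    for block in reversed(chain):          # newest block first
        for rec in block["records"]:
            if rec["org"] != org_id:
                continue
            if rec["type"] == "revoke":
                revoked.add(rec["anchor"])
            elif rec["type"] == "publish" and rec["anchor"] not in revoked:
                return rec["anchor"]       # most current non-revoked anchor
    return None                            # no valid anchor published

# Example: the older anchor was revoked, so the OBT selects the newer one.
ledger = [
    {"records": [{"org": "ocf", "type": "publish", "anchor": "root-ca-1"}]},
    {"records": [{"org": "ocf", "type": "revoke",  "anchor": "root-ca-1"},
                 {"org": "ocf", "type": "publish", "anchor": "root-ca-2"}]},
]
assert current_trust_anchor(ledger, "ocf") == "root-ca-2"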
[0022] In an example, the present techniques and configurations may integrate aspects of traditional and highly constrained trusted environments, to support storage of a trust anchor policy where access to the storage is conditional upon execution of certificate path validation code or logic having been securely launched in a trusted environment. Such trusted environments include, but are not limited to, TCG Trusted Platform Module (TPM), Intel Software Guard Extensions (SGX), Intel Management Engine (ME), Intel Virtualization Technology (VT-X), Intel Trusted Execution Technology (TXT), Intel Memory Controller, Intel 3D CrossPoint or Optane Memory, Intel Memory Encryption Technology, Intel SMI Transfer Monitor (STM), ARM TrustZone, or other hardware security modules.
[0023] Also in an example, the present techniques and configurations may enable use of a network onboarding tool (such as is defined by the OCF specification), to utilize a trusted environment mechanism for securing trust anchor policies.
[0024] Also in an example, the present techniques and configurations may enable a highly constrained platform to perform network onboarding duties on behalf of a network owner.
[0025] Also in an example, the present techniques and configurations may enable a mesh of trusted environments (containing certificate path validation code / logic) and secure memory (containing a trust anchor policy) to be provisioned consistently. The present techniques and configurations may also provide the effect of allowing every mesh platform to apply the same trust anchor policy.
[0026] Also in an example, the present techniques and configurations enable any network node having a trust anchor policy to onboard other non-member nodes, using the trust anchor policy along with other policies that define network onboarding and network expansion criteria. These and other technical benefits, usable in a variety of IoT network deployments, will be apparent from the following discussion.
[0027] FIG. 1 illustrates an example domain topology for respective IoT networks coupled through links to respective gateways. The IoT supports deployments in which a large number of computing devices are interconnected to each other and to the Internet to provide functionality and data acquisition at very low levels. Thus, as used herein, an IoT device may include a semiautonomous device performing a function, such as sensing or control, among others, in communication with other IoT devices and a wider network, such as the Internet.
[0028] Often, IoT devices are limited in memory, size, or functionality, allowing larger numbers to be deployed for a similar cost to smaller numbers of larger devices. However, an IoT device may be a smart phone, laptop, tablet, PC, or other larger device. Further, an IoT device may be a virtual device, such as an application on a smart phone or other computing device. IoT devices may include IoT gateways, used to couple IoT devices to other IoT devices and to cloud applications, for data storage, process control, and the like.
[0029] Networks of IoT devices may include commercial and home automation devices, such as water distribution systems, electric power distribution systems, pipeline control systems, plant control systems, light switches, thermostats, locks, cameras, alarms, motion sensors, and the like. The IoT devices may be accessible through remote computers, servers, and other systems, for example, to control systems or access data.
[0030] The future growth of the Internet and like networks may involve very large numbers of IoT devices. Accordingly, in the context of the techniques discussed herein, a number of innovations for such future networking will address the need for all these layers to grow unhindered, to discover and make accessible connected resources, and to support the ability to hide and compartmentalize connected resources.
Any number of network protocols and communications standards may be used, wherein each protocol and standard is designed to address specific objectives. Further, the protocols are part of the fabric supporting human accessible services that operate regardless of location, time, or space. The innovations include service delivery and associated infrastructure, such as hardware and software; security enhancements; and the provision of services based on Quality of Service (QoS) terms specified in service level and service delivery agreements. As will be understood, the use of IoT devices and networks, such as those introduced in FIGS. 1 and 2, present a number of new challenges in a heterogeneous network of connectivity comprising a combination of wired and wireless technologies.
[0031] FIG. 1 specifically provides a simplified drawing of a domain topology that may be used for a number of IoT networks comprising IoT devices 104, with the IoT networks 156, 158, 160, 162, coupled through backbone links 102 to respective gateways 154. For example, a number of IoT devices 104 may communicate with a gateway 154, and with each other through the gateway 154. To simplify the drawing, not every IoT device 104, or communications link (e.g., link 116, 122, 128, or 132) is labeled. The backbone links 102 may include any number of wired or wireless technologies, including optical networks, and may be part of a local area network (LAN), a wide area network (WAN), or the Internet. Additionally, such communication links facilitate optical signal paths among both IoT devices 104 and gateways 154, including the use of MUXing/deMUXing components that facilitate interconnection of the various devices.
[0032] The network topology may include any number of types of IoT networks, such as a mesh network provided with the network 156 using Bluetooth low energy (BLE) links 122. Other types of IoT networks that may be present include a wireless local area network (WLAN) network 158 used to communicate with IoT devices 104 through IEEE 802.11 (Wi-Fi®) links 128, a cellular network 160 used to communicate with IoT devices 104 through an LTE/LTE-A (4G) or 5G cellular network, and a low-power wide area (LPWA) network 162, for example, a LPWA network compatible with the LoRaWAN specification promulgated by the LoRa Alliance, or an IPv6 over Low Power Wide-Area Networks (LPWAN) network compatible with a specification promulgated by the Internet Engineering Task Force (IETF). Further, the respective IoT networks may communicate with an outside network provider (e.g., a tier 2 or tier 3 provider) using any number of communications links, such as an LTE cellular link, an LPWA link, or a link based on the IEEE 802.15.4 standard, such as Zigbee®. The respective IoT networks may also operate with use of a variety of network and internet application protocols such as the Constrained Application Protocol (CoAP). The respective IoT networks may also be integrated with coordinator devices that provide a chain of links that forms a cluster tree of linked devices and networks.
[0033] Each of these IoT networks may provide opportunities for new technical features, such as those as described herein. The improved technologies and networks may enable the exponential growth of devices and networks, including the use of IoT networks into “fog” devices or systems. As the use of such improved technologies grows, the IoT networks may be developed for self-management, functional evolution, and collaboration, without needing direct human intervention.
The improved technologies may even enable IoT networks to function without centralized controlled systems. Accordingly, the improved technologies described herein may be used to automate and enhance network management and operation functions far beyond current implementations.
[0034] In an example, communications between IoT devices 104, such as over the backbone links 102, may be protected by a decentralized system for authentication, authorization, and accounting (AAA). In a decentralized AAA system, distributed payment, credit, audit, authorization, and authentication systems may be implemented across interconnected heterogeneous network infrastructure. This allows systems and networks to move towards autonomous operations. In these types of autonomous operations, machines may even contract for human resources and negotiate partnerships with other machine networks. This may allow the achievement of mutual objectives and balanced service delivery against outlined, planned service level agreements, as well as achieve solutions that provide metering, measurements, traceability, and trackability. The creation of new supply chain structures and methods may enable a multitude of services to be created, mined for value, and collapsed without any human involvement.
[0035] Such IoT networks may be further enhanced by the integration of sensing technologies, such as sound, light, electronic traffic, facial and pattern recognition, smell, or vibration, into the autonomous organizations among the IoT devices. The integration of sensory systems may allow systematic and autonomous communication and coordination of service delivery against contractual service objectives, orchestration, and QoS-based swarming and fusion of resources. Individual examples of network-based resource processing include the following.
[0036] The mesh network 156, for instance, may be enhanced by systems that perform inline data-to-information transforms. For example, self-forming chains of processing resources comprising a multi-link network may distribute the transformation of raw data to information in an efficient manner, and the ability to differentiate between assets and resources and the associated management of each. Furthermore, the proper components of infrastructure and resource-based trust and service indices may be inserted to improve the data integrity, quality, and assurance, and deliver a metric of data confidence.
[0037] The WLAN network 158, for instance, may use systems that perform standards conversion to provide multi-standard connectivity, enabling IoT devices 104 using different protocols to communicate. Further systems may provide seamless interconnectivity across a multi-standard infrastructure comprising visible Internet resources and hidden Internet resources.
[0038] Communications in the cellular network 160, for instance, may be enhanced by systems that offload data, extend communications to more remote devices, or both. The LPWA network 162 may include systems that perform non-Internet protocol (IP) to IP interconnections, addressing, and routing. Further, each of the IoT devices 104 may include the appropriate transceiver for wide area communications with that device. Further, each IoT device 104 may include other transceivers for communications using additional protocols and frequencies. This is discussed further with respect to the communication environment and hardware of an IoT processing device depicted in FIGS. 10 and 11.
[0039] In still further examples, aspects of network virtualization and virtualized/software-based functional management, including software defined networking (SDN), may be implemented with the networks 158, 160, 162, or other entities. For instance, SDN may provide a software-based programmable network that separates the control plane from the data plane to make the network and network functions more flexible, agile, scalable, and less dependent on networking equipment, vendors, and service providers. Other use cases of SDN features may involve dynamic network configurations, monitoring, and the abstraction of network functions in virtualized and dynamic systems, for redundancy, control, and improved performance.
[0040] Finally, clusters of IoT devices may be equipped to communicate with other IoT devices as well as with a cloud network. This may allow the IoT devices to form an ad-hoc network between the devices, allowing them to function as a single device, which may be termed a fog device, fog platform, or fog network. This configuration is discussed further with respect to FIG. 2 below.
[0041] FIG. 2 illustrates a cloud computing network in communication with a mesh network of IoT devices (devices 202) operating as a fog platform in a networked scenario. The mesh network of IoT devices may be termed a fog network 220, established from a network of devices operating at the edge of the cloud 200. To simplify the diagram, not every IoT device 202 is labeled.
[0042] The fog network 220 may be considered to be a massively interconnected network wherein a number of IoT devices 202 are in communications with each other, for example, by radio links 222. The fog network 220 may establish a horizontal, physical, or virtual resource platform that can be considered to reside between IoT edge devices and cloud or data centers. A fog network, in some examples, may support vertically-isolated, latency-sensitive applications through layered, federated, or distributed computing, storage, and network connectivity operations. However, a fog network may also be used to distribute resources and services at and among the edge and the cloud. Thus, references in the present document to the “edge”, “fog”, and “cloud” are not necessarily discrete or exclusive of one another.
[0043] As an example, the fog network 220 may be facilitated using an interconnect specification released by the Open Connectivity Foundation™ (OCF). This standard allows devices to discover each other and establish communications for interconnects. Other interconnection protocols may also be used including, for example, the optimized link state routing (OLSR) Protocol, the better approach to mobile ad-hoc networking (B.A.T.M.A.N.) routing protocol, or the OMA Lightweight M2M (LWM2M) protocol, among others.
[0044] Three types of IoT devices 202 are shown in this example: gateways 204, data aggregators 226, and sensors 228, although any combinations of IoT devices 202 and functionality may be used. The gateways 204 may be edge devices that provide communications between the cloud 200 and the fog network 220, and may also provide the backend process function for data obtained from sensors 228, such as motion data, flow data, temperature data, and the like. The data aggregators 226 may collect data from any number of the sensors 228, and perform the back end processing function for the analysis. The results, raw data, or both may be passed along to the cloud 200 through the gateways 204.
The sensors 228 may be full IoT devices 202, for example, capable of both collecting data and processing the data. In some cases, the sensors 228 may be more limited in functionality, for example, collecting the data and allowing the data aggregators 226 or gateways 204 to process the data.
[0045] Communications from any IoT device 202 may be passed along a convenient path (e.g., a most convenient path) between any of the IoT devices 202 to reach the gateways 204. In these networks, the number of interconnections provide substantial redundancy, allowing communications to be maintained, even with the loss of a number of IoT devices 202. Further, the use of a mesh network may allow IoT devices 202 that are very low power or located at a distance from infrastructure to be used, as the range to connect to another IoT device 202 may be much less than the range to connect to the gateways 204.
[0046] The fog network 220 provided from these IoT devices 202 may be presented to devices in the cloud 200, such as a server 206, as a single device located at the edge of the cloud 200, e.g., a fog network operating as a device or platform. In this example, the alerts coming from the fog platform may be sent without being identified as coming from a specific IoT device 202 within the fog network 220. In this fashion, the fog network 220 may be considered a distributed platform that provides computing and storage resources to perform processing or data-intensive tasks such as data analytics, data aggregation, and machine-learning, among others.
[0047] In some examples, the IoT devices 202 may be configured using an imperative programming style, e.g., with each IoT device 202 having a specific function and communication partners. However, the IoT devices 202 forming the fog device may be configured in a declarative programming style, allowing the IoT devices 202 to reconfigure their operations and communications, such as to determine needed resources in response to conditions, queries, and device failures. As an example, a query from a user located at a server 206 about the operations of a subset of equipment monitored by the IoT devices 202 may result in the fog network 220 device selecting the IoT devices 202, such as particular sensors 228, needed to answer the query. The data from these sensors 228 may then be aggregated and analyzed by any combination of the sensors 228, data aggregators 226, or gateways 204, before being sent on by the fog network 220 device to the server 206 to answer the query. In this example, IoT devices 202 in the fog network 220 may select the sensors 228 used based on the query, such as adding data from flow sensors or temperature sensors. Further, if some of the IoT devices 202 are not operational, other IoT devices 202 in the fog network 220 may provide analogous data, if available.
[0048] In an OCF architecture, entities in the real physical world (e.g., a temperature sensor) are represented as resources. Interactions with entities are implemented through resource representations, which use operations that adhere to Representational State Transfer (REST) architectures, e.g., RESTful interactions. As such, entities are exposed as resources, each with their unique identifiers (URIs), and support interfaces that enable RESTful operations on their resources. A client initiates a RESTful operation on a server. The client is the initiator and the server is a responder. Any device may act as a client to initiate a RESTful operation on any other device acting as a server.
Thus, the role of a device as a client or server, in many circumstances, may be interchangeable. Any device that exposes a resource is, by definition, a server. Each RESTful operation contains all of the information needed to understand the context of the operation and is supported by a set of generic operations (e.g., CREATE, RETRIEVE, UPDATE, DELETE, and NOTIFY (CRUDN)).
[0049] As discussed herein, the following techniques may be implemented in connection with use of various OCF services, including DOTS (also known as DOXS, Device Owner Transfer Service). In a further example, the following techniques may be implemented in connection with an onboarding tool (OBT). In the context of an OCF implementation, an OBT is a logical entity within a specific IoT network that establishes ownership for a specific device and helps bring the device into operational state within that network. For instance, a typical OBT may implement DOXS, AMS (Access Management Service), and CMS (Credential Management Service) functionality.
[0050] In some implementations of the OCF specification, a Public Key Infrastructure (PKI) component may be utilized that requires OCF devices and services to obtain a manufacturing certificate from a common root CA (e.g., a CA owned / operated by OCF or another trusted organization). Such a PKI approach may introduce undesirable security and operational issues for hardware suppliers. For example, it could result in the hardware vendor or manufacturer having to recall OCF-compliant products if the OCF root CA key is compromised. The following techniques and configurations are usable in the context of an OCF IoT deployment to provide security in the context of this PKI approach.
[0051] With the following techniques, in a scenario where OCF establishes a common OCF root CA infrastructure for security operation, vendors of OCF Devices may embed an OCF trust anchor in platforms they manufacture, and obtain manufacturer certs for embedded manufacturing keys. One of the goals of the OCF PKI approach is to include OCF compliance extensions in certificates. This may be used to ensure that signed compliance assertions cannot be easily faked; such compliance extensions also may be verified by onboarding tools, and may be standardized / interoperable. These goals may be maintained with use of the presently described platform configuration.
[0052] As used in the following discussion, an OCF device refers to a logical representation of OCF-defined functionality; a platform refers to an environment used to host OCF devices. Thus, although a “platform” may not be expressly or fully defined by current OCF specifications, the following use of a trusted platform and security profile offers the potential for additional security definition and verification.
[0053] Some of the security challenges with current and proposed PKI implementations of OCF devices (or like IoT device deployments) include:
[0054] a) OCF devices are software (not a trusted hardware platform). OCF compliance assertions do not specify the platform environment; and as a result, less trustworthy platforms may host the same OCF device software. As a security consequence, less trustworthy OCF devices appear equally trustworthy.
[0055] b) Multiple OCF devices may be hosted by the same platform. As a result, onboarding tools will observe reuse of the manufacturing key for multiple onboarding events. Use of the same manufacturing key to onboard multiple devices is an attack signature.
As a security consequence, an onboarding tool has conflicting (exploitable) requirements.
[0056] c) Non-OCF functionality may be hosted by platforms that host OCF Devices, and a manufacturing certificate must be used for both OCF and non-OCF functions. Different ecosystems should agree on platform trust attributes. As a security consequence, misunderstood trust semantics may be exploitable by a confused deputy.
[0057] To address these security concerns, the present techniques and configuration introduce hosting of OCF devices in an interoperable trusted platform. As one example, Trusted Computing Group (TCG) defines specifications for interoperable trusted platforms, in the form of: (a) TPM - with manufacturing keys, embedding, storage, use (aka “attestation”) and trust; (b) Trusted microcontroller units (MCUs) - trust features applied to MCU environments; (c) Attestation protocols - proof of trustworthiness to a network peer; (d) Binding of trusted platform to software; and (e) Extensible certificate profiles (e.g., TPM (aka Endorsement Key) credential - binding of manufacturing key to TPM or T-MCU; Platform attribute credential - binding of TPM / T-MCU to platform; or certificate extensions that capture trust and quality attributes).
[0058] TCG specifications have broad industry adoption. For instance, major computer ODMs, OEMs, OSVs, and ISVs deliver TCG-compliant products, and TCG credentials are issued by many existing CAs. Multiple open source software implementations are available. With the presently described configurations, an OCF deployment may be adapted to interoperate with TCG-compliant platforms.
[0059] FIG. 3 illustrates a configuration and operation of OCF devices hosted on an example Trusted Platform 330. In this example, the platform 330 may include a TCG platform credential 334 (e.g., implemented by TCG features such as TPM, T-MCU, in a Platform Attribute Cert instance), with links to compliance documentation 342. As shown, the platform 330 may operate to determine the validity of various certificates, and enable an onboarding tool 310 to proceed with use of a valid device certificate. Also, in an example, for each OCF Device (332) instance there may be a corresponding platform certificate 334, where the first platform credential may reference a first OCF Device Doc (e.g., a first document 342) and a second platform credential may reference a second OCF Device Doc (e.g., a second document). Both platform certificates may reference the same endorsement key certificate 336 (EK cert).
[0060] As shown, the TCG platform credential 334 of the platform 330 is linked to a number of platform characteristics, such as identifiers, configurations, data values, and security assertions. These characteristics are in turn linked to the documentation 342 provided from an organization website 340 (such as an OCF web site). Information such as device vendor, device type, and compliance status is maintained in a signed document 344. This signed document 344 is signed by a trusted organization (such as OCF). As also shown, information such as security assertions may be provided in the compliance status of the signed document 344; platform configuration information may also be provided in the signed document 344.
[0061] Relative to a common root CA, a TCG ‘verifier’ operating within the platform 330 is not highly constrained.
Rather, the TCG features (e.g., TPM, T-MCU) may be used to evaluate platform and endorsement key credentials, trust assertions, and quality assertions, even as platform owners already manage the platform for other reasons. Additionally, the list of trust anchors for TCG CAs is small compared to other trust attributes.
[0062] In this configuration, the onboarding tool 310 also is not “highly constrained”. Rather, the onboarding tool 310 implements all defined owner transfer methods 322 for respective devices (e.g., as defined in the OCF specification), maintains a list 314 of ‘owned’ and trusted OCF devices 332 of the platform 330, provisions 324 new devices with local credentials (e.g., through use of a Local CA 312) and ACLs, manages many aspects of the OCF Device lifecycle, and may even be certified by multiple root CAs.
[0063] With the configuration of FIG. 3, trust anchor provisioning is viable for IoT device deployments, as may be implemented through use of the IETF Trust Anchor Management Protocol (RFC 5934). Trust anchor provisioning keeps OCF-approved trust anchors up-to-date; the OCF specification may also define other meaningful defaults. Additionally, OCF devices 332 may store multiple roots in secure read-write memory. Platform vendors may embed such information if they choose.
[0064] In addition to the operations depicted in FIG. 3, various operational sequences may be utilized to enable device conformance, platform conformance, trusted booting, trusted / attested onboarding, and device operations, as discussed with the following flowcharts.
[0065] FIG. 4 illustrates a flowchart of an example procedure for device conformance, for use of devices in a Trusted Platform. Specifically, in the example of FIG. 4, this process describes steps a vendor must follow in order to have “quality assertions” created for a device that is hosted on a trusted platform. An evaluation lab may sign evaluation results and may possess a signing key / certificate path that is rooted by a CA that is distinct from any other CA this invention might reference.
[0066] The Device Conformance evaluation test lab may identify a ‘security profile’ that the IoT Device conforms to, given the OCF Device Software running on a Trusted Platform (as defined by FIG. 5, discussed below).
[0067] In an example, a procedure for device conformance (e.g., as depicted in FIG. 4) may include:
[0068] Operation 410: IoT Device definition files and software are loaded on a trusted platform (such as a TCG-defined Platform).
[0069] Operation 420: The IoT Device and Platform are connected to a conformance testing facility and test suite that evaluates conformance and compliance to a validation suite. Such a facility and test suite is referred to as an evaluation lab.
[0070] Operation 430: The evaluation lab constructs an electronic document (e.g., JSON, XML, HTML, ASN.1, etc.) containing compliance results and identifying attributes for the Device and Platform. (These need not identify the instance but rather the type or model.)
[0071] Operation 440: The evaluation lab digitally signs evaluation results using a digital signature and a signed document such as a certificate (e.g., RFC 5280), attribute certificate (e.g., RFC 3281), signed document (e.g., RFC 8152, Object Security for Constrained RESTful Environments (OSCORE) internet draft, RFC 5652), or manifest.
[0072] Operation 450: The evaluation lab results are published (e.g., posted to a web site, published in a certificate, and/or contributed to a public blockchain).
[0073] Operation 460: The evaluation lab public signing key may be published (as performed in operation 450) or stored in a certificate chain that is terminated by a trust anchor.
[0074] FIG. 5 illustrates a flowchart of an example procedure for platform conformance, for use of a Trusted Platform. Specifically, in the example of FIG. 5, this process describes operations (largely established by TCG) that a platform vendor follows to establish trust in a platform configuration. Such a platform configuration may include a Trusted Platform Module (TPM) or Trusted MCU (TMCU), where the TPM / TMCU has a manufacturer-embedded key also known as the “EK”. The TPM / TMCU vendor may obtain an EK certificate / certificate path that is rooted by a CA that is distinct from any other CA. The Platform Attribute Cert (PAC) may be issued by a trusted platform vendor and may have a certificate path that is rooted by a CA that is distinct from any other CA.
[0075] In an example, a procedure for platform conformance (e.g., as depicted in FIG. 5) may include:
[0076] Operation 510: A trusted platform (such as a TCG-defined Platform) is evaluated by a platform vendor, evaluation lab (e.g., Common Criteria), or other security evaluation organization or entity.
[0077] Operation 520: The platform vendor constructs a Platform Attribute Credential (such as a TCG PAC) containing Platform Configuration references (e.g., URIs to hardware components (e.g., vendor, model, and version), software components (e.g., vendor, model, version)), and Platform Quality Assurance references (e.g., URIs to evaluation labs hosting public documents / signed documents describing what software was evaluated using which trusted platform definition).
[0078] Operation 530: The platform vendor signs the PAC using a key pair where the public key is made public (as discussed in previous operations).
[0079] Operation 540: The evaluation lab public signing key may be published (as performed in operation 530) or contained in a certificate chain that is terminated by a trust anchor.
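As an illustrative aid for the sign-then-publish pattern of operations 430-450 (and, analogously, operations 520-540), the following minimal Python sketch uses the Python 'cryptography' package with Ed25519 signatures. The document fields and values are assumptions for illustration, not a format mandated by OCF or TCG.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Evaluation lab constructs the compliance document (operation 430);
# field names here are illustrative only.
doc = json.dumps({
    "device_model": "example-sensor-v2",
    "platform_model": "example-tmcu-a1",
    "security_profiles": ["oic.sec.sp.blue"],
    "result": "pass",
}, sort_keys=True).encode()

lab_key = Ed25519PrivateKey.generate()     # lab signing key (operation 440)
signature = lab_key.sign(doc)              # detached signature over the doc

# Published artifacts (operations 450/460): doc, signature, lab public key.
lab_public = lab_key.public_key()

# An OBT later verifies the published results before trusting them.
try:
    lab_public.verify(signature, doc)
    print("evaluation results authentic")
except InvalidSignature:
    print("reject: results were tampered with or key mismatch")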
[0080] FIG. 6 illustrates a flowchart of an example trusted boot sequence for devices in a Trusted Platform. Specifically, the flowchart of FIG. 6 shows a “Trusted Boot” sequence (which is an industry term of art) that shows a bootstrap process that includes an IoT Device loader 630, which may load OCF Device software 640 and configuration files. This process may measure the software (to one or more platform configuration registers (PCRs)) and config files before passing execution to the software entry point. Further, this process may measure to the PCRs for each respective phase.
[0081] In an example, a procedure for a trusted boot sequence (e.g., as depicted in FIG. 6) may include an iterative process to launch an IoT bootstrap loader, IoT platform system software loader, and IoT device software loader, and the measurement of the loader and such software code to respective platform configuration registers (PCRs). As shown, the trusted boot sequence may begin with an IoT firmware (bootstrap) loader 610 used to load (1) the IoT platform firmware and system software loader 660. This is followed by an execution flow (2) to the IoT platform system software loader 620 used to load (3) the IoT platform system software and device software loader 650. This is followed by the execution flow (4) to the IoT device software loader 630, which loads (5) the IoT device software 640 and passes execution to the entry point of this software. Execution flow (6) continues with the IoT device software 640.
[0082] As shown in the example of FIG. 6, various components that make up the trusted platform and the IoT device it hosts are “bound” to one another. Binding here is evidenced in the form of trusted loading of the IoT Device Software 640. In some constrained environments there may be only one hybrid image consisting of firmware, system software, and device software. This conflates the trusted boot sequence into fewer operations, but the PCR(s) still contain the measurements needed for trusted and attested onboarding (as discussed with reference to FIG. 7, below). A sketch of the PCR measurement step appears below.
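As an illustrative aid for the PCR measurements referenced above, the following minimal Python sketch shows the measure-then-extend hash chaining that TPM PCRs implement. It is deliberately simplified: a real TPM exposes multiple PCRs and hash banks through its command interface, and the stage names here are assumptions drawn from the FIG. 6 flow.

import hashlib

def pcr_extend(pcr: bytes, component_image: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(component))."""
    measurement = hashlib.sha256(component_image).digest()
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start at a well-known value (all zeros for a SHA-256 bank).
pcr0 = bytes(32)
# Each boot stage measures the next before passing control (FIG. 6 flow).
for stage in [b"platform-firmware", b"system-software", b"iot-device-sw"]:
    pcr0 = pcr_extend(pcr0, stage)

# The final value commits to the whole load sequence in order; a verifier
# recomputing the chain from known-good images detects any substitution.
print(pcr0.hex())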
[0083] FIG. 7 illustrates a flowchart of an example procedure for trusted or attested device onboarding, for devices in a Trusted Platform. Specifically, the procedure depicted in FIG. 7 shows OBT onboarding steps involving a new device that hopes to be onboarded with rights to operate at one or more of the security profiles that were asserted to be valid by a device conformance process (e.g., as depicted in FIG. 4, discussed above).
[0084] In an example, a procedure for trusted or attested device onboarding (e.g., as depicted in FIG. 7) may include:
[0085] Operation 705: A trusted platform performs a secure or trusted boot that measures the platform firmware, system software, and the IoT Device software as it is loaded.
[0086] Operation 710: An IoT Network onboarding tool (OBT) connects to the IoT platform and requests attestation evidence.
[0087] Operation 715: The trusted platform signs the boot / load measurements in PCRs using a platform-embedded manufacturing key (e.g., TPM EK or AIK) and the PAC certificate.
[0088] Operation 720: The OBT verifies the PCRs and PAC, including certificate paths to a trust anchor provisioned by the network owner. This includes verification of the manufacturing (mfg) key (which may follow a separate certificate path).
[0089] Operation 725: The OBT obtains signed documents available using URI links in the PAC; and verifies the doc signature and chain (which may follow a separate certificate path).
[0090] Operation 730: The OBT obtains signed documents available using URI links in the PAC; and verifies the doc signature and chain (which may follow a separate certificate path).
[0091] Operation 735: The OBT obtains the network owner's trust anchors that terminate each certificate chain. (The owner does this by reading from any of the public sources and evaluating the legitimacy of public keys.)
[0092] Operation 740: The OBT verifies the IoT Device security profiles that the IoT Device evaluation lab assigned to the evaluation results.
[0093] Operation 745: The OBT selects a security profile from among the profiles the device supports and sets (configures) the device to operate using the OBT-selected profile.
[0094] Operation 750: The OBT issues a local credential or role authorizing the device to operate according to one or more of the supported security profiles.
[0095] Operation 755: The OBT updates a Device Resource instructing it which security profile to transition to when the Device boots.
[0096] Operation 760: The OBT or device closes the connection.
[0097] FIG. 8 illustrates a flowchart of example device-to-device operations, for devices in a Trusted Platform. Specifically, the procedure depicted in FIG. 8 shows a scenario where an IoT Client requests access to a Resource hosted by an IoT Server, where the Resource is accessible only when the Server is operating at a specific security profile.
[0098] In this scenario, the Client supplies a credential (issued by the OBT or local CA) that authorizes the Client to operate at a specific security profile. The Server attempts to transition to the expected security profile and tries to satisfy the request. The local OBT / CA may issue local certificates, or role certificates where the certificate path may be terminated by a CA that none of the other diagrams use to terminate the path. The OBT and IoT Server(s) use a “trust anchor policy” that the network owner provisions, which is used to terminate the various cert chains used during onboarding and device-to-device operation.
[0099] In an example, the example procedure for device-to-device operations may include a sequence of:
[0100] Operation 810: The IoT Client requests access to an IoT Server device; and supplies a credential that asserts a security profile.
[0101] Operation 820: The IoT Server verifies the IoT Client credential and security profile authorization.
[0102] Operation 825: A determination is made, to identify whether the client is operational at the requested profile. If not already operational at the requested profile, then a transition is made to the requested security profile at 830. The connection is then completed at 840.
[0103] Operation 845: A determination is made as to whether the Resource ACL shows the Resource as available at the current security profile. If it is available, then the client request is processed, at operation 850. (A sketch of this check appears below.)
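As an illustrative aid for operations 810-850, the following minimal Python sketch captures the server-side sequence of credential check, profile transition, and ACL evaluation. The class, field, and profile names here are hypothetical, chosen only to mirror the flow above.

class IoTServer:
    def __init__(self, active_profile, supported_profiles, acl):
        self.active_profile = active_profile
        self.supported = supported_profiles
        self.acl = acl  # e.g., {"/light": {"oic.sec.sp.blue"}}

    def handle_request(self, resource, credential):
        profile = credential["asserted_profile"]       # operation 810
        if profile not in credential["authorized"]:    # operation 820
            return "reject: credential not authorized for profile"
        if self.active_profile != profile:             # operations 825/830
            if profile not in self.supported:
                return "reject: server cannot operate at profile"
            self.active_profile = profile              # transition
        allowed = self.acl.get(resource, set())        # operation 845
        if self.active_profile in allowed:
            return "process request"                   # operation 850
        return "reject: resource unavailable at current profile"

server = IoTServer("oic.sec.sp.unspecified", {"oic.sec.sp.blue"},
                   {"/light": {"oic.sec.sp.blue"}})
cred = {"asserted_profile": "oic.sec.sp.blue", "authorized": {"oic.sec.sp.blue"}}
print(server.handle_request("/light", cred))   # -> process request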
[0104] FIG. 9 illustrates a flowchart 900 of an example method for onboarding a subject device for use with a security profile. As illustrated, the operations of the flowchart 900 are illustrated from the perspective of an onboarding tool device, which operates to onboard respective new (subject) devices onto use of a device platform (e.g., an OCF platform). It will be understood that these operations may be distributed among multiple entities or actors, in some examples, and such operations may be modified using any of the alternative security approaches discussed in the examples herein.
[0105] The flowchart 900 begins with operations at 910 to receive a request (e.g., at the onboarding tool device) to perform an owner transfer method (e.g., as part of onboarding operations) of a subject device associated with a device platform. This is followed at 920 with operations to obtain attestation evidence associated with the subject device, and operations at 930 to verify the attestation evidence. In an example, the attestation evidence is provided by the device platform, and the attestation evidence is signed by a certificate produced using a manufacturer-embedded key. Further, the manufacturer-embedded key may be provided from a trusted hardware component of the device platform, as the device platform operates trusted hardware and includes relevant trust attestation information for operations and hardware.
[0106] The flowchart 900 continues with operations at 940 to provision the subject device, such as with the use of local credentials issued from a local certificate authority. In an example, the local certificate authority is operated by the onboarding tool device. Also in an example, the local credentials indicate a verified use of the security profile tied to manufacturer-embedded keys. The flowchart 900 then continues with operations at 950 to transition the subject device to use of a specified security profile. In an example, this includes updating a resource of the subject device to a value associated with the security profile, such that the subject device is transitioned to use of the security profile upon completion of the device provisioning. In an example, this may also include the onboarding tool device maintaining a list of owned and trusted devices of the device platform, such as updating the list to include the subject device.
[0107] The flowchart 900 concludes with operations at 960 to complete device provisioning for the subject device, as the subject device is configured to operate with use of the security profile. Finally, subsequent operations with the subject device (and in the network platform, and any defined network domain) may include the verification of the security profile.
[0108] In further examples, the onboarding tool device, the device platform, and/or the subject device are configured and/or operable according to a specification of an Open Connectivity Foundation (OCF) standards family. Further, the trusted hardware component, the onboarding tool device, and other aspects of the device platform and/or subject device are configured and/or operable according to a specification of a Trusted Computing Group (TCG) standards family.
[0109] In a further example, the subject device conducts a trusted boot sequence of device software for operation on the subject device, and the attestation evidence includes the verification of the trusted boot sequence by the device platform. Also in further examples, the manufacturer-embedded key is associated with a trust anchor, such that the trust anchor is managed through use of a trust anchor management protocol. Also in further examples, the manufacturer-embedded key is linked to a certificate chain, and the certificate chain is terminated by a trust anchor, such that the attestation evidence includes the trust anchor. Also in further examples, the manufacturer-embedded key is associated with a platform attribute credential of the device platform, and the platform attribute credential includes platform information that is publicly verifiable at a third party data source. In still further examples, verification may include querying a blockchain to confirm a trust anchor linked to the manufacturer-embedded key, or querying the blockchain to search for a trust anchor revocation for the trust anchor linked to the manufacturer-embedded key. The identification of a trust anchor revocation may result in causing the subject device to use another security profile.
[0110] In specific examples, the techniques discussed herein may be implemented in an OCF deployment as part of a security profile assignment (e.g., during device onboarding). OCF Devices may have been evaluated according to an OCF Security Profile. Evaluation results could be accessed from a manufacturer's certificate, OCF web server, or other public repository. The DOTS reviews evaluation results to determine which OCF Security Profiles the OCF Device is authorized to possess and configures the Device with the subset of evaluated security profiles best suited for the network owner's intended network segmentation strategy.
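As an illustrative aid for the DOTS selection just described, the following minimal Python sketch intersects the profiles attested in signed evaluation results with a hypothetical owner policy. Profile identifiers other than the "oic.sec.sp.unspecified" and "oic.sec.sp.blue" values named in this document are illustrative assumptions.

def select_profiles(evaluated: set, owner_policy: list) -> list:
    """Return the owner-preferred subset of profiles the Device is
    authorized to possess, in the owner's order of preference."""
    return [p for p in owner_policy if p in evaluated]

evaluated = {"oic.sec.sp.baseline", "oic.sec.sp.blue"}   # from signed results
owner_policy = ["oic.sec.sp.blue", "oic.sec.sp.baseline"]  # segmentation choice
chosen = select_profiles(evaluated, owner_policy)
active = chosen[0] if chosen else "oic.sec.sp.unspecified"
print(chosen, active)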
The following paragraphs provide additional details regarding a possible implementation with reference to OCF resources and resource properties.
[0111] In an example, the techniques discussed herein may be incorporated in a security profile referred to as the “Security Profile Blue”. The Security Profile Blue is used when manufacturers issue platform certificates for platforms containing manufacturer-embedded keys. Compatibility with interoperable trusted platforms is anticipated using certificate extensions defined by the Trusted Computing Group (TCG). Network owners evaluate manufacturer-supplied certificates and attributed data to determine an appropriate OCF Security Profile that is configured for OCF Devices at onboarding. OCF Devices may satisfy multiple OCF Security Profiles. The network owner may configure network deployments using the Security Profile as network partitioning criteria.
[0112] The OCF “Security Profile Blue” anticipates an ecosystem where platform vendors may differ from the OCF Device vendor and where platform vendors may implement trusted platforms that may conform to industry standards defining trusted platforms. The OCF Security Profile Blue specifies mechanisms for linking platforms with OCF Device(s) and for referencing quality assurance criteria produced by OCF conformance operations. The network owner evaluates these data when an OCF Device is onboarded into the network. Based on this evaluation the network owner determines which Security Profile shall be applied during OCF Device operation. All OCF Device types may be considered for evaluation using the OCF Security Profile Blue.
[0113] In an example, OCF Security Profile Blue defines the following quality assurances: The OCF Conformance criteria shall require vendor attestation that the conformant OCF Device was hosted on one or more platforms that satisfied OCF Security Profile Blue security assurances and security and privacy functionality requirements. In an example, OCF Security Profile Blue defines the quality assurance functionality as: the results of OCF Conformance testing and Security Profile compliance are published to an OCF web site; and the results of OCF Conformance testing and Security Profile compliance are digitally signed by an OCF-owned signing key.
[0114] In an example, OCF Security Profile Blue defines the following security assurances: Platforms implementing cryptographic service provider functionality and secure storage functionality shall be evaluated with a minimum FIPS 140-2 Level 1 or Common Criteria EAL Level 1. Platforms implementing trusted platform functionality should be evaluated with a minimum Common Criteria EAL Level 1.
[0115] In an example, OCF Security Profile Blue defines the following security and privacy functionality: OCF Device(s) shall use cryptographic algorithms using a cryptographic service provider (CSP). CSP functionality shall include cryptographic algorithms, random number generation, and secure time. OCF Device(s) shall use a secure storage provider for cryptographic key storage. OCF Device(s) shall use AES 128 equivalent minimum protection for transmitted data and shall use a platform-hosted CSP for cryptographic algorithm functionality. OCF Device(s) shall use AES 128 equivalent minimum protection for stored data and shall use platform-hosted secure key storage functionality. OCF Device(s) shall protect the /oic/sec/cred resource using the platform secure storage provider.
OCF Device(s) shall protect trust anchors (aka the policy defining trusted CAs and pinned certificates) using platform secure storage functionality. OCF onboarding (aka DOTS) shall terminate certificate path validation of manufacturer certificates using the network owner authorized trust anchors. OCF onboarding (aka DOTS) shall check certificate revocation status for all certificates in a manufacturer certificate path. OCF Device(s) should check certificate revocation status for locally issued certificates.
[0116] In an example, OCF Security Profile Blue defines security and privacy functionality for a platform. Platforms hosting OCF Device(s) should implement a platform identifier following IEEE 802.1AR or TCG Trusted Platform Module (TPM) specifications. Platforms hosting OCF Device(s) may implement the TCG-defined trusted platform security assertion extension:
tBBSecurityAssertions ATTRIBUTE ::= {
    WITH SYNTAX TBBSecurityAssertions
    ID tcg-at-tbbSecurityAssertions }
[0117] Platforms hosting OCF Device(s) may implement the TCG-defined platform configuration assertion extension:
platformConfiguration ATTRIBUTE ::= {
    WITH SYNTAX PlatformConfiguration
    ID tcg-at-platformConfiguration-v1 }
[0118] In an example, the OCF Device vendor sets a manufacturer default value for the supported_profiles and the active_profile Properties of a /oic/sec/sp Resource to “oic.sec.sp.unspecified”. The default value is re-asserted when the Device transitions to the RESET Device State. The OCF Device allows the /oic/sec/sp_update Resource to be updated exclusively when the Device is in one of the following Device States: RFOTM, RFPRO, SRESET.
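As an illustrative aid for the Resource behavior in this example, the following minimal Python sketch models a /oic/sec/sp representation and the Device State restriction on /oic/sec/sp_update updates. The payload shape is an assumption for illustration; the normative encoding and Property semantics are defined by the OCF Security Specification.

FACTORY_DEFAULT = {
    "supported_profiles": ["oic.sec.sp.unspecified"],
    "active_profile": "oic.sec.sp.unspecified",
}

UPDATABLE_STATES = {"RFOTM", "RFPRO", "SRESET"}   # per the example above

def update_sp(resource: dict, device_state: str, update: dict) -> dict:
    """Apply a DOTS-issued update to /oic/sec/sp_update, enforcing the
    Device State restriction described in the preceding paragraph."""
    if device_state not in UPDATABLE_STATES:
        raise PermissionError("sp_update not writable in state " + device_state)
    resource.update(update)
    return resource

sp = dict(FACTORY_DEFAULT)
update_sp(sp, "RFOTM", {
    "supported_profiles": ["oic.sec.sp.blue"],   # subset from conformance results
    "active_profile": "oic.sec.sp.blue",         # owner's segmentation choice
})
print(sp)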
More specifically, a Security Profile Resource Definition may be provided through the values of the oic.sec.sp Properties of the /oic/sec/sp Resource (in both R and RW access modes), which indicate an array of supported security profiles and the security profile currently active.

[0122] Variations to the preceding platforms, security and privacy functionality, certification requirements, security profiles, and implementations in an OCF specification or other IoT network deployment may also occur.

[0123] In various examples, the operations and functionality described above with reference to FIGS. 3 to 9 may be embodied by an IoT device machine in the example form of an electronic processing system, within which a set or sequence of instructions may be executed to cause the electronic processing system to perform any one of the methodologies discussed herein, according to an example embodiment. The machine may be an IoT device or an IoT gateway, including a machine embodied by aspects of a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile telephone or smartphone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.

[0124] Further, while only a single machine may be depicted and referenced in the examples above, such a machine shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Further, these and like examples of a processor-based system shall be taken to include any set of one or more machines that are controlled by or operated by a processor, a set of processors, or processing circuitry (e.g., a machine in the form of a computer, IoT processing device, etc.) to individually or jointly execute instructions to perform any one or more of the methodologies discussed herein. Accordingly, in various examples, applicable means for processing (e.g., processing, controlling, generating, evaluating, etc.) may be embodied by such processing circuitry.

[0125] FIG. 10 illustrates a drawing of a cloud computing network, or cloud 1000, in communication with a number of Internet of Things (IoT) devices. The cloud 1000 may represent the Internet, or may be a local area network (LAN), or a wide area network (WAN), such as a proprietary network for a company. The IoT devices may include any number of different types of devices, grouped in various combinations. For example, a traffic control group 1006 may include IoT devices along streets in a city. These IoT devices may include stoplights, traffic flow monitors, cameras, weather sensors, and the like. The traffic control group 1006, or other subgroups, may be in communication with the cloud 1000 through wired or wireless links 1008, such as LPWA links, optical links, and the like. Further, a wired or wireless sub-network 1012 may allow the IoT devices to communicate with each other, such as through a local area network, a wireless local area network, and the like. The IoT devices may use another device, such as a gateway 1010 or 1028, to communicate with remote locations such as the cloud 1000; the IoT devices may also use one or more servers 1030 to facilitate communication with the cloud 1000 or with the gateway 1010. For example, the one or more servers 1030 may operate as an intermediate network node to support a local edge cloud or fog implementation among a local area network.
Further, the gateway 1028 that is depicted may operate in a cloud-to-gateway-to-many-edge-devices configuration, such as with the various IoT devices 1014, 1020, 1024 being constrained or dynamic to an assignment and use of resources in the cloud 1000.

[0126] Other example groups of IoT devices may include remote weather stations 1014, local information terminals 1016, alarm systems 1018, automated teller machines 1020, alarm panels 1022, or moving vehicles, such as emergency vehicles 1024 or other vehicles 1026, among many others. Each of these IoT devices may be in communication with other IoT devices, with servers 1004, with another IoT fog device or system (not shown, but depicted in FIG. 2), or a combination thereof. The groups of IoT devices may be deployed in various residential, commercial, and industrial settings (including in both private and public environments).

[0127] As can be seen from FIG. 10, a large number of IoT devices may be communicating through the cloud 1000. This may allow different IoT devices to request or provide information to other devices autonomously. For example, a group of IoT devices (e.g., the traffic control group 1006) may request a current weather forecast from a group of remote weather stations 1014, which may provide the forecast without human intervention. Further, an emergency vehicle 1024 may be alerted by an automated teller machine 1020 that a burglary is in progress. As the emergency vehicle 1024 proceeds towards the automated teller machine 1020, it may access the traffic control group 1006 to request clearance to the location, for example, by lights turning red to block cross traffic at an intersection in sufficient time for the emergency vehicle 1024 to have unimpeded access to the intersection.

[0128] Clusters of IoT devices, such as the remote weather stations 1014 or the traffic control group 1006, may be equipped to communicate with other IoT devices as well as with the cloud 1000. This may allow the IoT devices to form an ad-hoc network between the devices, allowing them to function as a single device, which may be termed a fog device or system (e.g., as described above with reference to FIG. 2).

[0129] FIG. 11 is a block diagram of an example of components that may be present in an IoT device 1150 for implementing the techniques described herein. The IoT device 1150 may include any combination of the components shown in the example or referenced in the disclosure above. The components may be implemented as ICs, portions thereof, discrete electronic devices, or other modules, logic, hardware, software, firmware, or a combination thereof adapted in the IoT device 1150, or as components otherwise incorporated within a chassis of a larger system. Additionally, the block diagram of FIG. 11 is intended to depict a high-level view of components of the IoT device 1150. However, some of the components shown may be omitted, additional components may be present, and a different arrangement of the components shown may occur in other implementations.

[0130] The IoT device 1150 may include processing circuitry in the form of a processor 1152, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, or other known processing elements. The processor 1152 may be a part of a system on a chip (SoC) in which the processor 1152 and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel.
As an example, the processor 1152 may include an Intel® Architecture Core™ based processor, such as a Quark™, an Atom™, an i3, an i5, an i7, or an MCU-class processor, or another such processor available from Intel® Corporation, Santa Clara, California. However, any number of other processors may be used, such as a processor available from Advanced Micro Devices, Inc. (AMD) of Sunnyvale, California, a MIPS-based design from MIPS Technologies, Inc. of Sunnyvale, California, an ARM-based design licensed from ARM Holdings, Ltd. or a customer thereof, or their licensees or adopters. The processors may include units such as an A5-A10 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc.

[0131] The processor 1152 may communicate with a system memory 1154 over an interconnect 1156 (e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4). In various implementations, the individual memory devices may be of any number of different package types, such as single die package (SDP), dual die package (DDP), or quad die package (QDP). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs.

[0132] To provide for persistent storage of information such as data, applications, operating systems, and so forth, a storage 1158 may also couple to the processor 1152 via the interconnect 1156. In an example, the storage 1158 may be implemented via a solid state disk drive (SSDD). Other devices that may be used for the storage 1158 include flash memory cards, such as SD cards, microSD cards, xD picture cards, and the like, and USB flash drives. In low power implementations, the storage 1158 may be on-die memory or registers associated with the processor 1152. However, in some examples, the storage 1158 may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage 1158 in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others.

[0133] The components may communicate over the interconnect 1156. The interconnect 1156 may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect 1156 may be a proprietary bus, for example, used in a SoC based system. Other bus systems may be included, such as an I2C interface, an SPI interface, point-to-point interfaces, and a power bus, among others.

[0134] The interconnect 1156 may couple the processor 1152 to a mesh transceiver 1162, for communications with other mesh devices 1164.
The mesh transceiver 1162 may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others. Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the mesh devices 1164. For example, a WLAN unit may be used to implement Wi-Fi™ communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a WWAN unit.

[0135] The mesh transceiver 1162 may communicate using multiple standards or radios for communications at different ranges. For example, the IoT device 1150 may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on BLE, or another low power radio, to save power. More distant mesh devices 1164, e.g., within about 50 meters, may be reached over ZigBee or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels, or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee.

[0136] A wireless network transceiver 1166 may be included to communicate with devices or services in the cloud 1100 via local or wide area network protocols. The wireless network transceiver 1166 may be an LPWA transceiver that follows the IEEE 802.15.4 or IEEE 802.15.4e standards, among others. The IoT device 1150 may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies, but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, as described in the IEEE 802.15.4e specification, may be used.

[0137] Any number of other radio communications and protocols may be used in addition to the systems mentioned for the mesh transceiver 1162 and wireless network transceiver 1166, as described herein. For example, the radio transceivers 1162 and 1166 may include an LTE or other cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications.

[0138] The radio transceivers 1162 and 1166 may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, notably Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), and Long Term Evolution-Advanced Pro (LTE-A Pro). It can be noted that radios compatible with any number of other fixed, mobile, or satellite communication technologies and standards may be selected. These may include, for example, any Cellular Wide Area radio communication technology, which may include, e.g.,
a 5th Generation (5G) communication system, a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, or a UMTS (Universal Mobile Telecommunications System) communication technology, among others. In addition to the standards listed above, any number of satellite uplink technologies may be used for the wireless network transceiver 1166, including, for example, radios compliant with standards issued by the ITU (International Telecommunication Union), or the ETSI (European Telecommunications Standards Institute), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated.

[0139] A network interface controller (NIC) 1168 may be included to provide a wired communication to the cloud 1100 or to other devices, such as the mesh devices 1164. The wired communication may provide an Ethernet connection, or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. An additional NIC 1168 may be included to allow connection to a second network, for example, a NIC 1168 providing communications to the cloud over Ethernet, and a second NIC 1168 providing communications to other devices over another type of network.

[0140] Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components 1162, 1166, 1168, or 1170. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, etc.) may be embodied by such communications circuitry.

[0141] The interconnect 1156 may couple the processor 1152 to an external interface 1170 that is used to connect external devices or subsystems. The external devices may include sensors 1172, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global positioning system (GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The external interface 1170 further may be used to connect the IoT device 1150 to actuators 1174, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.

[0142] In some optional examples, various input/output (I/O) devices may be present within, or connected to, the IoT device 1150. For example, a display or other output device 1184 may be included to show information, such as sensor readings or actuator position. An input device 1186, such as a touch screen or keypad, may be included to accept input. An output device 1184 may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., LEDs) and multi-character visual outputs, or more complex outputs such as display screens (e.g., LCD screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the IoT device 1150.

[0143] A battery 1176 may power the IoT device 1150, although in examples in which the IoT device 1150 is mounted in a fixed location, it may have a power supply coupled to an electrical grid.
The battery 1176 may be a lithium ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like.

[0144] A battery monitor / charger 1178 may be included in the IoT device 1150 to track the state of charge (SoCh) of the battery 1176. The battery monitor / charger 1178 may be used to monitor other parameters of the battery 1176 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 1176. The battery monitor / charger 1178 may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, TX. The battery monitor / charger 1178 may communicate the information on the battery 1176 to the processor 1152 over the interconnect 1156. The battery monitor / charger 1178 may also include an analog-to-digital converter (ADC) that allows the processor 1152 to directly monitor the voltage of the battery 1176 or the current flow from the battery 1176. The battery parameters may be used to determine actions that the IoT device 1150 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like.

[0145] A power block 1180, or other power supply coupled to a grid, may be coupled with the battery monitor / charger 1178 to charge the battery 1176. In some examples, the power block 1180 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the IoT device 1150. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, California, among others, may be included in the battery monitor / charger 1178. The specific charging circuits chosen depend on the size of the battery 1176, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others.

[0146] The storage 1158 may include instructions 1182 in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions 1182 are shown as code blocks included in the memory 1154 and the storage 1158, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an application specific integrated circuit (ASIC).

[0147] In an example, the instructions 1182 provided via the memory 1154, the storage 1158, or the processor 1152 may be embodied as a non-transitory, machine readable medium 1160 including code to direct the processor 1152 to perform electronic operations in the IoT device 1150. The processor 1152 may access the non-transitory, machine readable medium 1160 over the interconnect 1156. For instance, the non-transitory, machine readable medium 1160 may be embodied by devices described for the storage 1158 of FIG. 11 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices.
The non-transitory, machine readable medium 1160 may include instructions to direct the processor 1152 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality depicted above.

[0148] In still a specific example, the instructions 1182 on the processor 1152 (separately, or in combination with the instructions 1182 of the machine readable medium 1160) may configure execution or operation of a trusted execution environment (TEE) 1190. In an example, the TEE 1190 operates as a protected area accessible to the processor 1152 for secure execution of instructions and secure access to data. Various implementations of the TEE 1190, and an accompanying secure area in the processor 1152 or the memory 1154, may be provided, for instance, through use of Intel® Software Guard Extensions (SGX) or ARM® TrustZone® hardware security extensions, Intel® Management Engine (ME), or Intel® Converged Security Manageability Engine (CSME). Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the device 1150 through the TEE 1190 and the processor 1152.

[0149] In further examples, a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions. A “machine-readable medium” thus may include, but is not limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., HTTP).

[0150] It should be understood that the functional units or capabilities described in this specification may have been referred to or labeled as components or modules, in order to more particularly emphasize their implementation independence. Such components may be embodied by any number of software or hardware forms. For example, a component or module may be implemented as a hardware circuit comprising custom very-large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A component or module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. Components or modules may also be implemented in software for execution by various types of processors. An identified component or module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function.
Nevertheless, the executables of an identified component or module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the component or module and achieve the stated purpose for the component or module.

[0151] Indeed, a component or module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices or processing systems. In particular, some aspects of the described process (such as code rewriting and code analysis) may take place on a different processing system (e.g., in a computer in a data center) than that in which the code is deployed (e.g., in a computer embedded in a sensor or robot). Similarly, operational data may be identified and illustrated herein within components or modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. The components or modules may be passive or active, including agents operable to perform desired functions.

[0152] Additional examples of the presently described method, system, and device embodiments include the following, non-limiting configurations. Each of the following non-limiting examples may stand on its own, or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.

[0153] Example 1 is a device, comprising: communications circuitry; processing circuitry; and a memory device including instructions embodied thereon, wherein the instructions, which when executed by the processing circuitry, configure the processing circuitry to perform operations comprising: receiving a request to perform an owner transfer method of a subject device, the subject device being associated with a device platform; verifying attestation evidence associated with the subject device, the attestation evidence provided by the device platform, wherein the attestation evidence is signed by a certificate produced using a manufacturer-embedded key, wherein the manufacturer-embedded key is provided from a trusted hardware component of the device platform; and performing device provisioning of the subject device, based on the attestation evidence, wherein the device provisioning causes the subject device to use a security profile tied to manufacturer-embedded keys.

[0154] In Example 2, the subject matter of Example 1 includes: maintaining a list of owned and trusted devices of the device platform, the list including the subject device.

[0155] In Example 3, the subject matter of Examples 1-2 includes, wherein performing device provisioning includes further operations comprising: provisioning the subject device with local credentials from a local certificate authority, the local certificate authority operated by the device, wherein the local credentials indicate a verified use of the security profile tied to manufacturer-embedded keys.

[0156] In Example 4, the subject matter of Examples 1-3 includes, wherein performing device provisioning includes further operations comprising: updating a resource of the subject device to a value associated with
the security profile, wherein the subject device is transitioned to use of the security profile upon completion of the device provisioning.

[0157] In Example 5, the subject matter of Examples 1-4 includes, wherein the manufacturer-embedded key is associated with a trust anchor, wherein the trust anchor is managed through use of a trust anchor management protocol.

[0158] In Example 6, the subject matter of Examples 1-5 includes, wherein the manufacturer-embedded key is linked to a certificate chain, wherein the certificate chain is terminated by a trust anchor, and wherein the attestation evidence includes the trust anchor.

[0159] In Example 7, the subject matter of Examples 1-6 includes, wherein the manufacturer-embedded key is associated with a platform attribute credential of the device platform, and wherein the platform attribute credential includes platform information that is publicly verifiable at a third party data source.

[0160] In Example 8, the subject matter of Examples 1-7 includes: querying a blockchain to confirm a trust anchor linked to the manufacturer-embedded key.

[0161] In Example 9, the subject matter of Example 8 includes: querying the blockchain to search for a trust anchor revocation for the trust anchor linked to the manufacturer-embedded key; and causing the subject device to use another security profile for the subject device based on identifying the trust anchor revocation.

[0162] In Example 10, the subject matter of Examples 1-9 includes, wherein the subject device conducts a trusted boot sequence of device software for operation on the subject device, and wherein the attestation evidence includes verification of the trusted boot sequence by the device platform.

[0163] In Example 11, the subject matter of Examples 1-10 includes, wherein the device is an onboarding tool, and wherein the device and the device platform are configured according to a specification of an Open Connectivity Foundation (OCF) standards family.

[0164] In Example 12, the subject matter of Examples 1-11 includes, wherein the trusted hardware component and the device are configured according to a specification of a Trusted Computing Group (TCG) standards family.

[0165] Example 13 is a method for onboarding a subject device for use with a security profile, using operations performed by an onboarding tool device, comprising: receiving a request to perform an owner transfer method of the subject device, the subject device being associated with a device platform; verifying attestation evidence associated with the subject device, the attestation evidence provided by the device platform, wherein the attestation evidence is signed by a certificate produced using a manufacturer-embedded key, and wherein the manufacturer-embedded key is provided from a trusted hardware component of the device platform; and performing device provisioning of the subject device, based on the attestation evidence, wherein the device provisioning causes the subject device to use a security profile tied to manufacturer-embedded keys.

[0166] In Example 14, the subject matter of Example 13 includes: maintaining a list of owned and trusted devices of the device platform, the list including the subject device.

[0167] In Example 15, the subject matter of Examples 13-14 includes, wherein performing device provisioning includes further operations comprising: provisioning the subject device with local
credentials from a local certificate authority, the local certificate authority operated by the device, wherein the local credentials indicate a verified use of the security profile tied to manufacturer-embedded keys.

[0168] In Example 16, the subject matter of Examples 13-15 includes, wherein performing device provisioning includes further operations comprising: updating a resource of the subject device to a value associated with the security profile, wherein the subject device is transitioned to use of the security profile upon completion of the device provisioning.

[0169] In Example 17, the subject matter of Examples 13-16 includes, wherein the manufacturer-embedded key is associated with a trust anchor, wherein the trust anchor is managed through use of a trust anchor management protocol.

[0170] In Example 18, the subject matter of Examples 13-17 includes, wherein the manufacturer-embedded key is linked to a certificate chain, wherein the certificate chain is terminated by a trust anchor, and wherein the attestation evidence includes the trust anchor.

[0171] In Example 19, the subject matter of Examples 13-18 includes, wherein the manufacturer-embedded key is associated with a platform attribute credential of the device platform, and wherein the platform attribute credential includes platform information that is publicly verifiable at a third party data source.

[0172] In Example 20, the subject matter of Examples 13-19 includes: querying a blockchain to confirm a trust anchor linked to the manufacturer-embedded key.

[0173] In Example 21, the subject matter of Example 20 includes: querying the blockchain to search for a trust anchor revocation for the trust anchor linked to the manufacturer-embedded key; and causing the subject device to use another security profile for the subject device based on identifying the trust anchor revocation.

[0174] In Example 22, the subject matter of Examples 15-21 includes, wherein the subject device conducts a trusted boot sequence of device software for operation on the subject device, and wherein the attestation evidence includes verification of the trusted boot sequence by the device platform.

[0175] In Example 23, the subject matter of Examples 15-22 includes, wherein the onboarding tool device and the device platform operate according to a specification of an Open Connectivity Foundation (OCF) standards family.

[0176] In Example 24, the subject matter of Example 23 includes, wherein the trusted hardware component and the device platform are configured according to a specification of a Trusted Computing Group (TCG) standards family.

[0177] Example 25 is a machine-readable storage medium including instructions, wherein the instructions, when executed by processing circuitry of a computing device, cause the processing circuitry to perform operations of any of Examples 13 to 24.

[0178] Example 26 is an apparatus, comprising: means for receiving a request to perform an owner transfer method of a subject device, the subject device being associated with a device platform; means for verifying attestation evidence associated with the subject device, the attestation evidence provided by the device platform, wherein the attestation evidence is signed by a certificate produced using a manufacturer-embedded key, and wherein the manufacturer-embedded key is provided from a trusted hardware component of the device platform; and means for
performing device provisioning of the subject device, based on the attestation evidence, wherein the device provisioning causes the subject device to use a security profile tied to manufacturer-embedded keys.

[0179] In Example 27, the subject matter of Example 26 includes: means for maintaining a list of owned and trusted devices of the device platform, the list including the subject device.

[0180] In Example 28, the subject matter of Examples 26-27 includes: means for provisioning the subject device with local credentials from a local certificate authority, the local certificate authority operated by the device, wherein the local credentials indicate a verified use of the security profile tied to manufacturer-embedded keys.

[0181] In Example 29, the subject matter of Examples 26-28 includes: means for updating a resource of the subject device to a value associated with the security profile, wherein the subject device is transitioned to use of the security profile upon completion of the device provisioning.

[0182] In Example 30, the subject matter of Examples 26-29 includes, wherein the manufacturer-embedded key is associated with a trust anchor, wherein the trust anchor is managed through use of a trust anchor management protocol.

[0183] In Example 31, the subject matter of Examples 26-30 includes, wherein the manufacturer-embedded key is linked to a certificate chain, wherein the certificate chain is terminated by a trust anchor, and wherein the attestation evidence includes the trust anchor.

[0184] In Example 32, the subject matter of Examples 26-31 includes, wherein the manufacturer-embedded key is associated with a platform attribute credential of the device platform, and wherein the platform attribute credential includes platform information that is publicly verifiable at a third party data source.

[0185] In Example 33, the subject matter of Examples 26-32 includes: means for querying a blockchain to confirm a trust anchor linked to the manufacturer-embedded key.
[0186] In Example 34, the subject matter of Example 33 includes: means for querying the blockchain to search for a trust anchor revocation for the trust anchor linked to the manufacturer-embedded key; and means for causing the subject device to use another security profile for the subject device based on identifying the trust anchor revocation.

[0187] In Example 35, the subject matter of Examples 26-34 includes, wherein the subject device conducts a trusted boot sequence of device software for operation on the subject device, and wherein the attestation evidence includes verification of the trusted boot sequence by the device platform.

[0188] In Example 36, the subject matter of Examples 26-35 includes, wherein the apparatus and the device platform operate according to a specification of an Open Connectivity Foundation (OCF) standards family.

[0189] In Example 37, the subject matter of Examples 26-36 includes, wherein the trusted hardware component and the device platform are configured according to a specification of a Trusted Computing Group (TCG) standards family.

[0190] Example 38 is an IoT services platform adapted to perform the operations of any of Examples 1 to 37.

[0191] Example 39 is an Open Connectivity Foundation (OCF) device, configured as a server, client, or intermediary according to an OCF specification, comprising means to implement the operations of any of Examples 1 to 37.

[0192] Example 40 is a device owner transfer service management service adapted to perform the operations invoked by any of Examples 1 to 37.

[0193] Example 41 is an Internet of Things (IoT) network topology, the IoT network topology comprising respective communication links adapted to perform communications for the operations of any of Examples 1 to 37.

[0194] Example 42 is a network comprising respective devices and device communication mediums for performing any of the operations of Examples 1 to 37.

[0195] Example 43 is an apparatus comprising means for performing any of the operations of Examples 1 to 37.

[0196] Example 44 is a system to perform the operations of any of Examples 1 to 37.

[0197] The operations and functionality described above in these examples, and in the specific embodiments described with reference to FIGS. 3 to 9, may apply in a variety of network settings such as IoT networking, edge networking, fog networking, cloud networking, and all hybrids thereof. The operations and functionality of these examples and configurations may occur in a distributed fashion, including in distributed networked settings where one aspect of the functionality is performed by a first IoT edge device or edge network, another aspect of the functionality is performed by a fog network or platform, and yet another aspect of the functionality is performed by a cloud device or system. Further combinations which follow these shared, distributed, or grouping principles, as suggested in the examples and configurations above, can be employed. Accordingly, it will be evident that the functionality described herein may be operable to work within many permutations of the examples and configurations above, and like variations.

[0198] In the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein, as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example.
Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. |
Methods, apparatus, and computer program products for generating a derivative key for an execution environment (EE) are described. An example of a method includes obtaining a device key by a key derivation circuit, obtaining a context string by the key derivation circuit from a one-time writable bit register (OWBR), and generating the derivative key for a current EE by the key derivation circuit based on the device key and on the context string from the OWBR. |
CLAIMS

1. A method of generating a derivative key for an execution environment (EE) comprising: obtaining a first input value by a key derivation circuit; obtaining a second input value by the key derivation circuit, the second input value being from a one-time writable bit register (OWBR); and generating an output value by the key derivation circuit based on the first input value and on the second input value from the OWBR, the output value corresponding to the derivative key for a current EE.

2. The method of claim 1 wherein obtaining the first input value comprises obtaining a single device key from a non-volatile memory device.

3. The method of claim 1 wherein generating the output value comprises generating an EE-specific derivative key.

4. The method of claim 3 wherein generating the EE-specific derivative key comprises generating the EE-specific derivative key independently from derivative keys for EEs prior to and subsequent to the current EE in a boot chain sequence.

5. The method of claim 1 wherein obtaining the second input value comprises obtaining a single context string corresponding to the current EE.

6. The method of claim 1 wherein obtaining the second input value comprises obtaining an initial context string of a pair of context strings corresponding to the current EE.

7. The method of claim 6 wherein the output value is a first output value and further comprising: obtaining a third input value by the key derivation circuit from the OWBR, the third input value comprising a final context string of the pair of context strings corresponding to the current EE; and generating a second output value by the key derivation circuit, the second output value being a perturbed output value unequal to and unobtainable from the derivative key.

8. The method of claim 1 further comprising: determining, by a processor coupled to the key derivation circuit, an existing value of the OWBR; determining, by the processor, that the second input value is less than the existing value in the OWBR; and generating, by the processor, an indication of a violation of a context string allocation protocol.

9. An apparatus comprising: a memory comprising: a non-volatile memory device; an output register; and a one-time writable bit register (OWBR); and a processor coupled to the memory and comprising: a key derivation circuit coupled to the non-volatile memory device, the output register, and the OWBR and configured to receive a first input value from the non-volatile memory device and a second input value from the OWBR and further configured to provide an output value to the output register.

10. The apparatus of claim 9 wherein the first input value from the non-volatile memory device is a single device key.

11. The apparatus of claim 9 wherein the second input value is a context string corresponding to a current execution environment (EE) and the output value is an EE-specific derivative key for the current EE.

12. The apparatus of claim 9 wherein the output value is a first output value and the key derivation circuit is further configured to receive a third input value from the OWBR and to provide a second output value to the output register, wherein the second output value is a perturbed output value unequal to and unobtainable from the first output value.

13. The apparatus of claim 9 comprising one or more OWBRs, a number of OWBRs corresponding to a number of parallel boot flows supported by an electronic system that includes the apparatus.
14. The apparatus of claim 9 wherein the processor is configured to: determine an existing value of the OWBR; determine that the second input value is less than the existing value in the OWBR; and generate an indication of a violation of a context string allocation protocol.

15. An apparatus comprising: means for obtaining a first input value by a key derivation circuit; means for obtaining a second input value by the key derivation circuit, the second input value being from a one-time writable bit register (OWBR); and means for generating an output value by the key derivation circuit based on the first input value and on the second input value from the OWBR, the output value corresponding to a derivative key for a current execution environment (EE).

16. The apparatus of claim 15 wherein the means for obtaining the first input value comprises means for obtaining a single device key from a non-volatile memory device.

17. The apparatus of claim 15 wherein the means for generating the output value comprises means for generating an EE-specific derivative key.

18. The apparatus of claim 17 wherein the means for generating the EE-specific derivative key comprises means for generating the EE-specific derivative key independently from derivative keys for EEs prior to and subsequent to the current EE in a boot chain sequence.

19. The apparatus of claim 15 wherein the means for obtaining the second input value comprises means for obtaining a single context string corresponding to the current EE.

20. The apparatus of claim 15 wherein the means for obtaining the second input value comprises means for obtaining an initial context string of a pair of context strings corresponding to the current EE.

21. The apparatus of claim 20 wherein the output value is a first output value and further comprising: means for obtaining a third input value by the key derivation circuit from the OWBR, the third input value comprising a final context string of the pair of context strings corresponding to the current EE; and means for generating a second output value by the key derivation circuit, the second output value being a perturbed output value unequal to and unobtainable from the derivative key.

22. The apparatus of claim 15 further comprising: means for determining an existing value of the OWBR; means for determining that the second input value is less than the existing value in the OWBR; and means for generating an indication of a violation of a context string allocation protocol.

23. A non-transitory processor-readable storage medium comprising processor-readable instructions comprising: instructions for obtaining a first input value by a key derivation circuit; instructions for obtaining a second input value by the key derivation circuit, the second input value being from a one-time writable bit register (OWBR); and instructions for generating an output value by the key derivation circuit based on the first input value and on the second input value from the OWBR, the output value corresponding to a derivative key for a current execution environment (EE).

24. The storage medium of claim 23 wherein the instructions for obtaining the first input value comprise instructions for obtaining a single device key from a non-volatile memory device.

25. The storage medium of claim 23 wherein the instructions for generating the output value comprise instructions for generating an EE-specific derivative key.
26. The storage medium of claim 25 wherein the instructions for generating the EE-specific derivative key comprise instructions for generating the EE-specific derivative key independently from derivative keys for EEs prior to and subsequent to the current EE in a boot chain sequence.

27. The storage medium of claim 23 wherein the instructions for obtaining the second input value comprise instructions for obtaining a single context string corresponding to the current EE.

28. The storage medium of claim 23 wherein the instructions for obtaining the second input value comprise instructions for obtaining an initial context string of a pair of context strings corresponding to the current EE.

29. The storage medium of claim 28 wherein the output value is a first output value, the instructions further comprising: instructions for obtaining a third input value by the key derivation circuit from the OWBR, the third input value comprising a final context string of the pair of context strings corresponding to the current EE; and instructions for generating a second output value by the key derivation circuit, the second output value being a perturbed output value unequal to and unobtainable from the derivative key.

30. The storage medium of claim 23 further comprising: instructions for determining an existing value of the OWBR; instructions for determining that the second input value is less than the existing value in the OWBR; and instructions for generating an indication of a violation of a context string allocation protocol. |
DERIVED KEYS FOR EXECUTION ENVIRONMENTS IN A BOOT CHAIN

BACKGROUND

[0001] An aspect of this invention generally relates to data processing devices and, more particularly, to securely generating derivative keys for execution environments in a boot chain.

[0002] Most secure electronic systems follow a well-established boot chain sequence for ensuring secure boot with a sequence of execution environments (EEs). The EEs may include a primary boot loader (PBL), a secondary boot loader (SBL), a high level operating system (HLOS) kernel and/or other trusted kernel code, a HLOS, and applications. Initially, the small and highly secure PBL, typically residing in read-only memory (ROM), is used for primordial boot after a power-on-reset. The PBL typically loads and verifies the SBL that resides in an external memory, for example, a flash memory. The SBL may load the HLOS kernel, or other highly trusted code on the device (e.g., ARM® TrustZone® kernel code). Subsequently, the HLOS may be loaded and verified. Finally, the applications may be loaded and executed. Each one of these EEs may require a secure key to encrypt and/or decrypt files, memory, and/or other sensitive data assets. Furthermore, for security reasons, the key used by one execution environment (EE) should not be available to another EE. Moreover, for security purposes, the key used by more secure EEs (i.e., those that boot first or earlier in the boot chain sequence) should not be available to any less secure EE (i.e., those that boot later or last in the boot chain sequence).

SUMMARY

[0003] An example of a method of generating a derivative key for an execution environment (EE) according to the disclosure includes obtaining a device key by a key derivation circuit, obtaining a context string by the key derivation circuit from a one-time writable bit register (OWBR), and generating the derivative key for a current EE by the key derivation circuit based on the device key and on the context string from the OWBR.

[0004] Implementations of such a method may include one or more of the following features. Obtaining the device key may include obtaining a single device key from a non-volatile memory device. Generating the derivative key may include generating an EE-specific derivative key. Generating the EE-specific derivative key may include generating the EE-specific derivative key independently from derivative keys for EEs prior to and subsequent to the current EE in a boot chain sequence. Obtaining the context string may include obtaining a single context string corresponding to the current EE. Obtaining the context string may include obtaining a pair of context strings corresponding to the current EE, and the method may further include generating the derivative key for the current EE based on the device key and on an initial context string of the pair of context strings, and generating a perturbed output based at least in part on a final context string of the pair of context strings, the perturbed output being unequal to the derivative key.
The method may further include determining, by a processor coupled to the key derivation circuit, an existing value in the OWBR, determining, by the processor, that a value of the context string is less than the existing value in the OWBR, and generating, by the processor, an indication of a violation of a context string allocation protocol.

[0005] An example of an apparatus according to the disclosure includes a processor, a memory coupled to the processor, the memory including a non-volatile memory device, an output register, and a one-time writable bit register (OWBR), and a key derivation circuit coupled to the non-volatile memory device, the output register, and the OWBR and configured to receive a device key from the non-volatile memory device and a context string from the OWBR and further configured to provide an output to the output register.

[0006] Implementations of such an apparatus may include one or more of the following features. The device key may be a single device key. The context string may correspond to a current execution environment (EE) and the output may be an EE-specific derivative key for the current EE. The key derivation circuit may be configured to receive a pair of context strings from the OWBR and the output may include a first output and a second output, the first output being a derivative key based on an initial context string of the pair of context strings and the second output being a perturbed output, unequal to the derivative key, based on a final context string of the pair of context strings. The apparatus may include one or more OWBRs, a number of OWBRs corresponding to a number of parallel boot flows supported by an electronic system that includes the apparatus. The processor may be configured to determine an existing value in the OWBR, determine that a value of the context string is less than the existing value in the OWBR, and generate an indication of a violation of a context string allocation protocol.

[0007] An example of an apparatus according to the disclosure includes means for obtaining a device key by a key derivation circuit, means for obtaining a context string by the key derivation circuit from a one-time writable bit register (OWBR), and means for generating a derivative key for a current execution environment (EE) by the key derivation circuit based on the device key and on the context string from the OWBR.

[0008] Implementations of such an apparatus may include one or more of the following features. The means for obtaining the device key may include means for obtaining a single device key from a non-volatile memory device. The means for generating the derivative key may include means for generating an EE-specific derivative key. The means for generating the EE-specific derivative key may include means for generating the EE-specific derivative key independently from derivative keys for EEs prior to and subsequent to the current EE in a boot chain sequence. The means for obtaining the context string may include means for obtaining a single context string corresponding to the current EE.
The means for obtaining the context string may include means for obtaining a pair of context strings corresponding to the current EE, and the apparatus may further include means for generating the derivative key for the current EE based on the device key and on an initial context string of the pair of context strings, and means for generating a perturbed output based at least in part on a final context string of the pair of context strings, the perturbed output being unequal to the derivative key. The apparatus may further include means for determining an existing value in the OWBR, means for determining that a value of the context string is less than the existing value in the OWBR, and means for generating an indication of a violation of a context string allocation protocol.

[0009] An example of a non-transitory processor-readable storage medium according to the disclosure may include processor-readable instructions including instructions for obtaining a device key by a key derivation circuit, instructions for obtaining a context string by the key derivation circuit from a one-time writable bit register (OWBR), and instructions for generating a derivative key for a current execution environment (EE) by the key derivation circuit based on the device key and on the context string from the OWBR.

[0010] Implementations of such a storage medium may include one or more of the following features. The instructions for obtaining the device key may include instructions for obtaining a single device key from a non-volatile memory device. The instructions for generating the derivative key may include instructions for generating an EE-specific derivative key. The instructions for generating the EE-specific derivative key may include instructions for generating the EE-specific derivative key independently from derivative keys for EEs prior to and subsequent to the current EE in a boot chain sequence. The instructions for obtaining the context string may include instructions for obtaining a single context string corresponding to the current EE. The instructions for obtaining the context string may include instructions for obtaining a pair of context strings corresponding to the current EE, and the processor-readable instructions may further include instructions for generating the derivative key for the current EE based on the device key and on an initial context string of the pair of context strings, and instructions for generating a perturbed output based at least in part on a final context string of the pair of context strings, the perturbed output being unequal to the derivative key. The processor-readable instructions may further include instructions for determining an existing value in the OWBR, instructions for determining that a value of the context string is less than the existing value in the OWBR, and instructions for generating an indication of a violation of a context string allocation protocol.

[0011] Items and/or techniques described herein may provide one or more of the following capabilities and/or possibly one or more other capabilities not mentioned. Independent cryptographic keys can be derived for execution environments (EEs) in a boot chain sequence. The independent cryptographic keys may eliminate a need to modify the key derivation procedure of a particular EE in response to changes in the boot chain sequence. A single device key used for the primary boot loader may be extended for secure usage in multiple EEs.
The single device key may reduce device design complexity and minimize provisioning overhead (e.g., provisioning a single device key may eliminate a need to store a device key per EE). In addition to the single device key, a one-time writable bit register (OWBR) can provide key input material to a hardware key derivation function (HKDF). The key input material may be based on context strings for the EEs allocated according to a context string allocation protocol. The HKDF may generate a new and EE-specific key for each EE based on the single device key. The EE-specific key for a current EE may be inaccessible to subsequent EEs in the boot chain sequence. As a result of using the single device key, the trust in the executing code established by the device key at the PBL stage may not degrade over the course of the boot chain sequence. The properties of the OWBR may prevent keys for previous EEs from being re-created by subsequent EEs. Further, the properties of the OWBR may reduce or eliminate changes to the primary boot loader key generation procedures at each power-on-reset. It may be possible for an effect noted above to be achieved by means other than those noted and a noted item/technique may not necessarily yield the noted effect.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] FIG. 1 is a block diagram of an example of a two-level key generation tree structure.
[0013] FIG. 2 is a block diagram of hardware components of an example of an integrated circuit system.
[0014] FIG. 3 is a schematic diagram of the functional operations of a key derivation circuit.
[0015] FIG. 4 is a schematic diagram of an example of a register state progression.
[0016] FIG. 5 is a block diagram of a method of obtaining a derivative key by an execution environment.
[0017] FIG. 6 is a block diagram of a method of generating a derivative key for an execution environment.

DETAILED DESCRIPTION

[0018] Techniques disclosed herein are configured to improve security associated with generated keys for execution environments of data processing devices.[0019] A key input for a hardware embedded key derivation function (HKDF) is provided from a one-time writable bit register (OWBR). The key input is a context string associated with a current execution environment (EE). The current EE may be in a boot chain sequence for one or more integrated circuits, a system-on-chip (SoC), and/or an electronic system including the one or more integrated circuits and/or the SoC. A derivative key for the current EE is generated by the HKDF using the context string from the OWBR and a single device key. The derivative key is output to an output register. The derivative key for the current EE is independent from keys derived for other EEs. The current EE may use the derivative key to encrypt and/or decrypt assets. Subsequent to generating the derivative key by the HKDF and/or obtaining the derivative key by the EE, a processor may change a value of at least one bit in the OWBR. This change perturbs the value in the output register. The perturbed value in the output register clears the derivative key from the output register. The current EE may hand over control of the SoC to a subsequent EE in a boot chain sequence.[0020] Referring to FIG. 1, a block diagram of an example of a two-level key generation tree structure is shown. The single root key 120 (e.g., a device key, a device symmetric key) may be stored in an OTP memory and provide a root-of-trust. As shown schematically in FIG.
1, the same device key may be used to generate the EE-specific derivative key for each EE in the boot chain sequence. The single root key 120 may provide at least the advantage of eliminating a need to store an EE-specific root key for each EE. For example, the single root key 120 may eliminate a need to store multiple root keys in respective non-volatile memory locations that are physically accessible to only one EE. As a result, dedicated allocation of expensive memory devices (e.g., fuse devices of a one-time programmable (OTP) memory) for all of the EEs associated with the SoC system 200 may not be required. The key structure in FIG. 1 relies at least in part on the one-time writable property of the OWBR. This property provides a capability to irreversibly change keys and key input materials over the course of the boot chain sequence and to reset the key input materials to a consistent initial value at power-on reset.[0021] As shown in FIG. 1, the key material (i.e., the input to the HKDF) for each EE is a respective context string from the OWBR and the root key 120. The HKDF 130 may generate the EE-specific keys based on the root key 120 and an EE-specific context string from the OWBR (e.g., CONTEXT STRING A 150a, CONTEXT STRING B 150b, CONTEXT STRING C 150c, CONTEXT STRING D 150d, CONTEXT STRING E 150e). The context strings may be public. The subsequently derived EE-specific keys (e.g., PBL key 103, SBL key 104, HLOS kernel key 105, HLOS key 106, and application key 107) are independent leaf nodes in the two-level tree structure. At least because the key generation is occurring in hardware (e.g., the HKDF, the OWBR, the output register, the device key memory location), the EEs are not generating the key in software, which may provide additional security from compromise by observation of EE execution. Neither the derived key for one EE nor a hash or other mathematically or logically manipulated version of the derived key for the one EE forms or provides key input material for any other EE. Thus, the EE-specific keys according to FIG. 1 are independent of one another because the key for one EE is not derived from the key for any other EE. The EE-specific key for the current EE in the boot chain sequence is different from the EE-specific keys for both prior and subsequent EEs in the boot chain sequence.[0022] The context strings may be public. Additionally, the upstream EE may handle its key in plain text operations. In typical EE operations, the downstream EE has the capability to observe the execution and plain text operations of the upstream EE. Through these observed operations, the downstream EE may discover the prior key. For example, the downstream EE may sniff unencrypted external memory. However, a fundamental hardware property of the OWBR is that the bits in the OWBR cannot be rewritten during a power-cycle of the SoC. Therefore, if the subsequent EE in the boot chain sequence determines and/or obtains the key input context string for a prior EE, this subsequent EE cannot change previously written bit values of the OWBR in order to set the bit values of the OWBR back to the previous key input context string. Thus the subsequent EE cannot re-generate a prior key input in the OWBR and therefore cannot re-generate a prior derivative key output from the HKDF. Merely erasing the derivative key output from the output register may not prevent the key output from being regenerated by the subsequent EE. Obtaining the key input context string from a memory device that does not have the properties of the OWBR may not prevent the key output from being re-generated by the subsequent EE. Merely erasing the derivative key output from the output register may not prevent key input material from being re-generated in a memory device that is not the OWBR.
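By way of illustration, the following is a minimal software sketch of the two-level key tree of FIG. 1. It is not the hardware HKDF itself; HMAC-SHA256 stands in for the key derivation function, and the device key and context string values are hypothetical. Each EE-specific key is a leaf derived directly from the single root key, so no EE key contributes key material to any other.

```python
import hmac
import hashlib

# Hypothetical 128-bit device (root) key; in hardware this would live in OTP
# memory and be readable only by the key derivation circuit.
DEVICE_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def derive_ee_key(device_key: bytes, context_string: bytes) -> bytes:
    """Stand-in HKDF: derive an EE-specific key from the single device key
    and a public, EE-specific context string."""
    return hmac.new(device_key, context_string, hashlib.sha256).digest()

# Independent leaf keys of the two-level tree (context values are illustrative).
pbl_key = derive_ee_key(DEVICE_KEY, b"\x01")          # e.g., CONTEXT STRING A
sbl_key = derive_ee_key(DEVICE_KEY, b"\x03")          # e.g., CONTEXT STRING B
hlos_kernel_key = derive_ee_key(DEVICE_KEY, b"\x07")  # e.g., CONTEXT STRING C

# No key is derived from another EE's key, so the keys are mutually independent.
assert len({pbl_key, sbl_key, hlos_kernel_key}) == 3
```

Because the stand-in function is one-way, knowledge of any leaf key (or of the public context strings) does not reveal the device key, mirroring the property described for the hardware HKDF.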
[0023] As a benefit of the independent keys, if a hack attempt is successful in regenerating an independent EE-specific key, this security compromise may affect only one EE and may not be perpetuated along the boot chain sequence. Additionally, the independent EE-specific keys may simplify the addition or deletion of one or more EEs in the boot chain sequence. The lack of co-dependence of the keys may reduce or eliminate changes to prior or subsequent EE routines to handle the key derivation changes necessitated by the addition or deletion.[0024] In contrast, dependent EE keys arise when the key for one EE is derived from the key for another EE. Typically, for dependent EE keys, a key for an upstream EE in the boot chain sequence may be used (i.e., may contribute to key material) to derive a key for a downstream EE. For the dependent keys, once one EE in the boot chain sequence has access to the key for a preceding EE, it may then take actions to compromise the dependent keys for the remainder of the EEs in the boot chain sequence.[0025] The key structure in FIG. 1 may provide additional benefits with regard to service infrastructure that may not be realized by dependent keys. EEs in the boot chain sequence are generally serviced by multiple different infrastructures. For example, a SoC manufacturer may service the PBL and SBL, an electronic device manufacturer may service the HLOS kernel (e.g., a trusted execution environment (TEE)), a particular operating system provider may service the HLOS, and particular retail and/or banking institutions may service the applications. As an advantage of the independently derived keys, in the event that key security is compromised for one EE, the security risk may be limited to the one service infrastructure for the compromised EE key. In contrast, a compromised key that is derived from a key for another EE (i.e., the dependent key) may present a security risk to multiple service infrastructures. For instance, a compromise of a key for a Google® HLOS may compromise a key used by Bank of America® for a banking application downstream in the boot chain from the Google® HLOS. For the dependent keys, some or all of the service infrastructures downstream in the boot chain sequence from the current EE may be compromised. As a further advantage of independently derived keys, the key database deployment of one service infrastructure may be generated and deployed independently of any other service infrastructure. Each instance of service infrastructure may maintain its own key database of a global population of the derived keys for a population of devices in the database. The dependent (e.g., chained or interdependent) key generation procedure may require that the key database be passed from one service infrastructure to another along the boot chain sequence. Thus, the database deployment of one service infrastructure may be dependent upon the database deployment of another service infrastructure. As an example, Google®, as service provider for the HLOS, may receive the database from the upstream electronic device manufacturer and generate keys therefrom.
In turn, Google® may provide the database to Bank of America® as service provider of a downstream application.[0026] Referring to FIG. 2, a block diagram of hardware components of an example of an integrated circuit system is shown. A quantity of each component in FIG. 2 is an example only and other quantities of each, or any, component could be used. The integrated circuit system may include one or more integrated circuits (ICs) and/or discrete electronic devices and may be, for example, a SoC system 200. The integrated circuit system can be part of an electronic system, or end device (not shown), for example, but not limited to, mainframes, mini-computers, servers, workstations, set-top boxes, personal computers, laptop computers, mobile devices, hand-held devices, wireless devices, tablets, modems, electronic readers, personal digital assistants, electronic games, automobiles and automobile components, aircraft, machinery, entertainment devices, medical devices, Internet of Things (IoT)/Internet of Everything (IoE) enabled devices, manufacturing devices, and embedded systems. The integrated circuit system can operate in a variety of wired/wireless communication systems and networks. Any or all of the hardware components may be implemented as a number of separate integrated circuits and/or separate devices interconnected with each other. Components of the SoC system 200 are communicatively coupled through a system bus 95.[0027] The off-chip memory 13 is a non-volatile read-write memory. The off-chip memory 13 may include a boot table 16. The boot table 16 may include descriptors (e.g., firmware descriptors, software descriptors) configured to indicate a boot chain sequence according to which the processor 60 may access, read, authenticate, store, and/or execute firmware and/or software in the boot chain sequence. The code images 14 may include the one or more EEs of the boot chain sequence (e.g., a secondary boot loader (SBL), a high level operating system (HLOS) kernel, a HLOS, one or more applications, etc.). In an implementation, on-chip flash memory 34 may include the code images 14. [0028] The communications interface 80 is configured to enable the system 200 to send and receive wireless signals, for example via a wireless antenna (not shown) over one or more communications networks. The communications interface 80 can include wired/wireless interfaces enabling both wired and wireless communications (including such things as a receiver, transmitter, and/or transceiver). These enable communications across and within a variety of communication networks. Examples of such communications networks include but are not limited to a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. The terms "network" and "system" may be used interchangeably herein. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and Time Division Synchronous Code Division Multiple Access (TD-SCDMA), to name just a few radio technologies. Here, cdma2000 may include technologies implemented according to IS-95, IS-2000, and IS-856 standards.
A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named "3rd Generation Partnership Project" (3GPP). Cdma2000 is described in documents from a consortium named "3rd Generation Partnership Project 2" (3GPP2). 3GPP and 3GPP2 documents are publicly available. A WLAN may include an IEEE 802.11x network, and a WPAN may include a Bluetooth network or an IEEE 802.15x network, for example. Wireless communication networks may include so-called next generation technologies (e.g., 4G, 5G, and so on), such as, for example, Long Term Evolution (LTE), Advanced LTE, WiMax, Ultra Mobile Broadband (UMB), and/or the like. The communications interface 80 may be further configured to communicate and exchange information, including but not limited to location information, either directly or indirectly with other communications network entities, including but not limited to access points, base stations, navigation servers, location servers, other electronic devices, etc. The communications interface 80 may also be configured to receive signals from satellite vehicles (SVs) belonging to one or more Satellite Positioning Systems (SPSs), such as the GPS system, the GLONASS system, the Galileo system, and/or other SPSs.[0029] The system reset circuit 85 is an electronic circuit configured to generate a reset signal. The system reset circuit 85 may also be referred to as a power-on-reset circuit. In response to the system reset signal, the SoC system 200 may power down and re-start. At power-on reset (i.e., in response to the system reset signal), various components of the SoC system 200 may initialize to known states. These known states may correspond to a first application of voltage from the power supply 90. For example, values of the registers 52 and 54 may initialize to default unprogrammed values. Execution of PBL software and/or other bootloader software may commence at power-on reset. The power-on reset may also be referred to as a cold boot reset.[0030] The processor 60 (e.g., means for determining an existing value in the OWBR, means for determining that the value of the context string is less than the existing value in the OWBR, means for generating an indication of a violation of a context string allocation protocol) is a physical processor (i.e., an integrated circuit configured to execute operations on the SoC system 200 as specified by software and/or firmware) (e.g., means for generating a key input, means for writing the generated key input, means for obtaining the derivative key, means for writing to at least one bit of an OWBR). The processor 60 may be an intelligent hardware device, e.g., a central processing unit (CPU), one or more microprocessors, a controller or microcontroller, an application specific integrated circuit (ASIC), a general-purpose processor, a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device, a state machine, discrete gate or transistor logic, discrete hardware components, or combinations thereof designed to perform the functions described herein and operable to carry out instructions on the SoC system 200.
The processor 60 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, a plurality of central processing units (CPUs), one or more microprocessors in conjunction with a DSP core, or other such configurations. The processor 60 may include co-processors, including a crypto-accelerator co-processor designed to perform computationally intensive encoding/decoding of information. [0031] The memory 30 (i.e., on-chip memory) can be a non-transitory, processor-readable storage medium that stores processor-readable, processor-executable software and/or firmware instructions that are configured to, when executed, cause the processor 60 to perform various functions described herein (although the description may refer only to the processor 60 performing the functions). Alternatively, the software and/or firmware may not be directly executable by the processor 60 but configured to cause the processor 60, e.g., when compiled and executed, to perform the functions. Code stored in the memory 30 includes instructions that may be part of a program, such as a particular application program and/or an operating system. The code typically includes an executing (e.g., running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the code. The memory 30 includes, but is not limited to, the output register 54, one-time writable bit registers 52, on-chip flash memory 34 (i.e., a non-volatile read-write memory), non-volatile read-only memory (ROM) 32, random access memory (RAM) 36, and one-time programmable (OTP) memory 38. As used herein with regard to memory, the terms "storing," "store," and "stored" are equivalent, respectively, to the terms "writing," "write," and "written."[0032] The OTP memory 38 may be, for example, a programmable read-only memory (PROM) or a field PROM (FPROM). The OTP memory 38 includes fuse devices (e.g., fuses and/or anti-fuses), each of which represents a settable binary value. The OTP memory 38 is manufactured with all of the fuses in an unprogrammed state (e.g., a virgin state or a default state), such as all ones or all zeros. To write data into the OTP memory 38, appropriate fuse devices are programmed (e.g., burned) from their default state to a non-default state. The processor 60 may program (i.e., write values to) the fuse devices of the OTP memory 38. Once programmed, the fuse device is no longer useful to write other data to the OTP memory 38 as the fuse device may only be written to (i.e., programmed) one time. The value programmed to a particular fuse device is a permanent value of that fuse device. As such, the value written to the OTP memory 38 does not change in response to a power-on reset of the SoC system 200. The fuse devices may be arranged in arrays with particular fuse devices in the arrays corresponding to particular OTP memory 38 array addresses. [0033] The one-time writable bit register (OWBR) 52 and the output register 54 are readable/writable software registers that include writable non-volatile memory (e.g., EEPROM, flash memory, ferroelectric RAM, etc.). Each register represents a settable binary value. The processor 60 may read and/or write values to/from the OWBR 52 and the output register 54. Further, the OWBR 52 and the output register 54 are coupled via hardware to the key derivation circuit 65 and the key derivation circuit controller 68.
For example, the registers 52 and 54 may be directly hardwired to the key derivation circuit 65 and/or the key derivation circuit controller 68. A binary value can be written to each bit of these registers to transition the bit from an unprogrammed value to a written value (i.e., a programmed value). For example, if the unprogrammed value of the bits in register 52 and/or 54 is "0," a "1" may be written to the bits in the registers 52 and/or 54. Similarly, if the unprogrammed value of the bits in register 52 and/or 54 is a "1," a "0" may be written to the bits in the registers 52 and/or 54. The OWBR 52 and the output register 54 both support an unlimited number of read operations per bit per power cycle of the SoC system 200. The output register 54 supports an unlimited number of write operations per bit per power cycle of the SoC system 200. Further, in response to a system reset signal or power-on event, the bits of the OWBR 52 and the output register 54 may re-set to a default unprogrammed value.[0034] In contrast to the output register 54, the OWBR 52 only supports one write operation per bit per power cycle of the SoC system 200. In other words, each individual bit of the OWBR 52 may be written to only one time per power cycle of the SoC system 200. This property of the OWBR 52 is enforced in hardware. Once a particular bit is written to and transitioned from the unprogrammed value to the written value, that particular bit cannot be re-written to return to the unprogrammed value. For example, if the unprogrammed value of the bit is "0" and a "1" is written to the bit, the value of the bit cannot be changed to "0" within the power cycle. A power cycle starts with a system reset signal or power-on event and ends at a subsequent system reset signal or power-off event. A total number of possible write operations per power cycle for the OWBR 52 is equal to a number of bits in the OWBR 52. As examples, an 8-bit OWBR allows and supports a maximum of 8 write operations per power cycle, a 128-bit OWBR allows and supports a maximum of 128 write operations per power cycle, etc. Once the value is written to the bit in the OWBR 52, that bit value is unchangeable unless there is a system reset signal or power-off event (i.e., unchangeable for the duration of the power cycle of the SoC). In response to the system reset signal, the values of the bits of the OWBR 52 return to the default unprogrammed value. The number of OWBRs 52 provisioned on the SoC system 200 may correspond to a number of parallel boot flows supported by the SoC system 200 (e.g., a number of parallel and/or embedded processors or co-processors and/or other devices on the SoC system 200 implementing a boot chain sequence). Each processing system or subsystem operating on the SoC system 200 may follow a boot flow sequence. One OWBR 52 may be dedicated to each supported boot flow. Thus the SoC system 200 may support one or more boot flows and may include one or more respective OWBRs 52. The one or more OWBRs 52 may be coupled to the key derivation circuit 65, each OWBR of the one or more OWBRs corresponding to a respective system or subsystem.
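The write-once-per-bit behavior of the OWBR 52 can be illustrated with a small software model. This is a hypothetical sketch for exposition only; in the actual apparatus the property is enforced in hardware, not by software bookkeeping.

```python
class OneTimeWritableBitRegister:
    """Software model of an OWBR: each bit accepts at most one write per
    power cycle, and all bits return to a default value at power-on reset."""

    def __init__(self, width: int = 8, default_bit: int = 0):
        self.width = width
        self.default_bit = default_bit
        self.power_on_reset()

    def power_on_reset(self) -> None:
        # All bits return to the default unprogrammed value.
        self.bits = [self.default_bit] * self.width
        self.written = [False] * self.width  # one-write-per-bit bookkeeping

    def write_bit(self, index: int, value: int) -> None:
        if self.written[index]:
            # Models the hardware enforcement: no second write within a power cycle.
            raise PermissionError(f"bit {index} was already written this power cycle")
        self.bits[index] = value
        self.written[index] = True

    def value(self) -> int:
        # Reads are unlimited; return the register contents as an integer.
        return int("".join(str(b) for b in self.bits), 2)

owbr = OneTimeWritableBitRegister(width=8)
owbr.write_bit(5, 1)      # first write to bit 5 succeeds
try:
    owbr.write_bit(5, 0)  # second write to the same bit is rejected
except PermissionError as err:
    print(err)
owbr.power_on_reset()     # only a power-on reset restores the default values
```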
[0035] The device key 40 is stored in a non-volatile memory of the SoC system 200. For example, the device key 40 may be stored in the OTP memory 38 or the ROM 32. The manufacturer of the SoC system 200 may store the device key 40 in the non-volatile memory prior to shipment of the SoC system 200 to an electronic device or end device manufacturer (e.g., an original equipment manufacturer (OEM)). The device key 40 may be inaccessible to any software executing on the SoC system 200. As such, the device key 40 may only be accessible via hardware. Further, the device key 40 may be a single device key for the SoC system 200. The device key 40 may be a 128-bit or a 256-bit symmetric key. The device key 40 is coupled to the key derivation circuit 65. For example, the memory devices in which the device key 40 is stored may be directly wired to the key derivation circuit 65. The ROM 32 may also include the PBL firmware for primordial booting of the SoC system 200.[0036] The device key 40 and the contents of the OWBR 52 are hardware-accessible-only information. As such, these inputs are protected from direct access and direct usage by the EEs executing in the SoC system 200. The EEs cannot read the device key 40 or directly read values from the register 52 as the SoC system 200 does not provide software/firmware paths to the register interfaces or to the device key 40. The EEs may only access this information by proxy. The particular EEs that may access by proxy the contents of the OWBR 52 may be unrestricted. [0037] The key derivation circuit 65 (e.g., means for obtaining a device key, means for obtaining a context string, means for generating a derivative key, means for obtaining a pair of context strings, means for generating a perturbed output) is a hardware embedded key derivation function (HKDF). The key derivation circuit 65 includes digital logic devices configured to implement the key derivation function. The key derivation circuit controller 68 is configured to provide input to and output from the key derivation circuit 65. For example, the key derivation circuit controller 68 may include logic devices and/or electrical circuits configured to connect, in hardware, the OWBR 52, the memory location of the device key 40, the key derivation circuit 65, and the output register 54. Further, the key derivation circuit controller 68 may cause the processor 60 to write to and/or read from the OWBR 52 and/or to read from the output register 54. For example, the key derivation circuit controller 68 may generate electrical signals that cause the processor 60 to perform write and/or read operations to and/or from the registers 52, 54. Alternatively and/or additionally, EE instructions stored in the memory 30 may cause the processor 60 to perform these functions. The generated signals and/or the EE instructions may cause the processor 60 to write particular bit values to the OWBR 52.[0038] Referring to FIG. 3, with further reference to FIG. 2, a schematic diagram of the functional operations of the key derivation circuit is shown. The key derivation circuit 65 is configured to obtain a first input value and a second input value. The first input value is the device key 40 stored in the OTP memory 38 or in other non-volatile memory of the SoC system 200. The second input value is a context string 45 stored in the OWBR 52. The context string 45 may be a public (i.e., not secret) context string. For example, the context string 45 may be a binary plain text string. The context string 45 may include key material suitable for use by the key derivation circuit 65 in deriving a key according to symmetric key cryptography standards and/or procedures. The key derivation circuit 65 is configured to implement the HKDF and generate an output value in the output register 54 based on the first input value and the second input value. The output value may be a derivative key 70.
The output value of the key derivation circuit 65 changes in response to changes in the second input value (i.e., changes to the contents of the OWBR 52). Therefore, changes to the bit values of the OWBR 52 change, or perturb, the value of the output register 54. [0039] The HKDF is a one-way cryptographic function. As such, it is computationally infeasible to determine the first input value and the second input value from the output value from the HKDF. Thus knowledge of the derivative key 70 does not enable derivation of any key material used to generate the derivative key 70. Furthermore, it is computationally infeasible to determine the first input value (i.e., the device key 40) from the output value even with knowledge of the second input value (i.e., the public context string 45).[0040] The context string 45 corresponds to a respective EE according to a context string allocation protocol. The context string allocation protocol is an agreement between EEs to use context string values corresponding to and reserved for each EE according to the protocol. While the context string allocation protocol may not indicate explicit trust between EEs, it may indicate the existence of the agreement between EEs to use the reserved context string values. According to the context string allocation protocol, each respective EE may correspond to one or more context string values reserved for the respective EE. In an example, the usual or existing boot chain sequence for the electronic device may include EE A, EE B, and EE C (e.g., the PBL, the SBL, the HLOS kernel). The usual or existing boot chain sequence may change (e.g., permanently, periodically, per instruction, per implementation of the electronic device, etc.) to include one or more new, additional, and/or replacement EEs (e.g., EE X, EE Y).[0041] In an embodiment, each respective EE in the boot chain sequence may correspond to a single context string. The single context string may be input to the HKDF as key material along with the device key 40. The processor 60 may generate the single context string in the OWBR when an EE initially assumes control of the SoC system 200 and/or prior to key generation by the EE. The processor 60 may generate the single context string in response to instruction from the current EE and/or in response to a signal from the key derivation circuit controller 68. For example, EE A may correspond to the single context string "00000001," EE B may correspond to the single context string "00000011," EE C may correspond to the single context string "00000111," etc. The context string values are examples only and not limiting of the disclosure. Other values are within the scope of the disclosure. New, additional, and/or replacement EEs may also correspond to respective single context strings. The values of these single context strings may be intermediary values and/or values greater than the values corresponding to the EEs in the usual or existing boot chain sequence. For example, EE X may correspond to the single context string "00000101" and may replace EE B in the boot chain sequence. As another example, EE Y may correspond to the single context string "00001111" and may follow EE C in the boot chain sequence.[0042] In other embodiments, each respective EE in the boot chain sequence may correspond to a pair of context strings. The pair of context strings may include an initial context string and a final context string.
The processor 60 may generate the initial context string in the OWBR when a particular EE initially assumes control of the SoC system 200 (i.e., when the particular EE becomes the current EE) and/or prior to key generation by the current EE. The processor 60 may generate the final context string in the OWBR subsequent to one or more key derivation events for the current EE and prior to the current EE relinquishing control of the SoC system 200 and/or handing control of the SoC system 200 to the processor 60 and/or a subsequent EE. Further, the processor 60 may generate the initial and final context strings in the OWBR in response to instructions from the current EE and/or in response to one or more signals from the key derivation circuit controller 68.[0043] In an implementation, the initial and final context strings may be adjacent and sequential values. For example, the initial context string for EE A may be "00000010" and the final context string for EE A may be "00000011." In this implementation, each EE may correspond to a respective adjacent and sequential pair of values. In another implementation, the current EE and/or the processor 60 may select the initial and final context string from a range of context string values corresponding to the current EE. For example, the range of context string values corresponding to EE A may be "00000000"-"00001111." The selected initial context string for EE A may be "00000010" and the selected final context string for EE A may be "00000011." In this implementation, each EE may correspond to a respective range of values. The quantity of context string values in the range may be the same or may be different for various EEs in the boot chain sequence.[0044] New, additional, and/or replacement EEs may also correspond to pairs of context strings and/or ranges of context string values. As examples, in various implementations, the pairs of context strings and/or ranges of context string values for EE X, EE Y may be intermediary values and/or values greater than the pairs of context strings and/or ranges of context string values for EE A, EE B, and EE C. Context string values and ranges unused by the usual or existing EEs in the boot chain sequence may be referred to as buffer values and buffer ranges, respectively. Designation of buffer values and/or buffer ranges in the context string allocation protocol for new, additional, and/or replacement EEs may accommodate changes to the boot chain sequence.[0045] In an embodiment, the key derivation circuit controller 68 may automatically generate the single context string and/or the initial and final context strings according to a counter operation. The counter operation may increment the value of the context string for successive EEs in the boot chain sequence. The increment interval may be a value of one or more and may be the same or different for various EE successions. The automatically generated context strings may correspond to the context string allocation protocol. The key derivation circuit controller 68 may provide a signal to the processor 60 to write the automatically generated context string into the OWBR 52. In an implementation, the context strings may be generated by a combination of the counter operation and reserved context strings according to the context string allocation protocol. A sketch of one such allocation and counter operation appears below.
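The following is a minimal sketch of such an allocation, assuming hypothetical range boundaries and EE names; the protocol itself does not mandate any particular values. Each EE owns a reserved range, the gaps between ranges act as buffer ranges for new or replacement EEs, and a counter operation picks the next value within the current EE's range so that values increase monotonically across the boot chain sequence, consistent with the allocation protocol.

```python
# Hypothetical reserved ranges (inclusive start, exclusive stop) per EE.
# The gaps between named ranges are buffer ranges for new/replacement EEs.
ALLOCATION = {
    "EE_A": range(0x00, 0x10),  # e.g., "00000000"-"00001111"
    "EE_B": range(0x20, 0x30),
    "EE_C": range(0x40, 0x50),
}

def next_context_string(ee_name: str, owbr_value: int) -> int:
    """Counter operation: choose the next context string for ee_name that is
    greater than the value already in the OWBR, within the reserved range."""
    reserved = ALLOCATION[ee_name]
    candidate = max(owbr_value + 1, reserved.start)
    if candidate not in reserved:
        raise ValueError(f"no unused context string left in the range for {ee_name}")
    return candidate

value = 0x00                                # default OWBR value at power-on reset
value = next_context_string("EE_A", value)  # 0x01, within EE A's range
value = next_context_string("EE_B", value)  # 0x20, within EE B's range
```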
[0046] Referring to FIG. 4, with further reference to FIGS. 1-3, an example of a register state progression is shown. The values and quantities of OWBR bits of FIG. 4 are examples only and not limiting. In this example, the OWBR 52 is represented by an 8-bit register.[0047] In an initial state 412 of the OWBR 52 (e.g., OWBR State 1), the bit values in the OWBR 52 are existing bit values. The existing bit values are the bit values existing in the OWBR 52 when the current EE assumes control of the SoC system 200. The existing bit values in the OWBR 52 may correspond to default values of the OWBR 52 at power-on reset. For example, the current EE at power-on reset may be the PBL. The existing bit values in the OWBR 52 may correspond to the default unprogrammed values at power-on reset. The bit values of the OWBR 52 may return to the default unprogrammed values in response to the power-on and power-on-reset events. As a result, the OWBR 52 may provide at least an advantage that the provided bit pattern for the PBL may be equivalent to the context string for the PBL. This may provide consistent PBL functionality at every power-on reset. Furthermore, this capability may be provided despite changes to the value stored in the OWBR 52 subsequent to PBL execution. In contrast, for example, changes to an OTP memory device are persistent over time. The values in the OTP memory device do not reset to default values in response to a power-on or power-on-reset event. As another example, the current EE may be an EE downstream from the PBL (e.g., SBL, HLOS kernel, HLOS, applications, etc.). In this case, the existing bit values in the OWBR may correspond to a single context string or a final context string corresponding to a prior EE.[0048] At least due to the coupling of the OWBR 52, the output register 54, and the key derivation circuit 65, a state of the OWBR 52 corresponds to a state of the output register 54. Further, any state change (e.g., value change) to the OWBR 52 produces a state change of the output register 54. Accordingly, and as shown schematically in FIG. 4, the initial state 412 of the OWBR 52 corresponds to a first state 414 of the output register 54 (e.g., output register state 1). The current EE may read the existing bit values in the OWBR. Based on the existing bit values, the current EE may provide a bit pattern 452 (e.g., a first bit pattern) to a logic operation 454 (e.g., a first logic operation). The logic operation 454 may logically combine the existing bit values in the OWBR 52 with the provided bit pattern 452 to transition the OWBR 52 to a second state 422 (e.g., OWBR State 2). In the second state 422, the contents of the OWBR 52 correspond to the single context string or the initial context string for the current EE. For example, as shown in FIG. 4, the existing bit values may be "00000010." The current EE may provide the bit pattern 452 of, for example, "00000100." An XOR logic operation on the existing value in the OWBR of "00000010" and the bit pattern "00000100" provided by the current EE may generate the context string "00000110" in the OWBR 52 and transition the OWBR 52 to the second state 422 (e.g., OWBR State 2). The generated context string "00000110" may be the single context string or the initial context string corresponding to the current EE. The XOR operation shown in FIG. 4 is an example only of a logic operation and is not limiting of the disclosure.
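The FIG. 4 transition from OWBR State 1 to OWBR State 2 can be reproduced directly; a minimal sketch using the example values follows. Note that with these values the XOR only sets a previously unwritten bit, which is the only kind of transition the OWBR permits within a power cycle.

```python
existing = 0b00000010      # OWBR State 1: e.g., the final context string of the prior EE
bit_pattern = 0b00000100   # first bit pattern 452 provided by the current EE

context_string = existing ^ bit_pattern  # logic operation 454 (XOR)
assert context_string == 0b00000110      # OWBR State 2: context string for the current EE

# The write only flips bit 2 from 0 to 1; no previously written bit returns to 0.
assert context_string & existing == existing
```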
[0049] The second state 422 of the OWBR 52 corresponds to a second state 424 of the output register 54 (e.g., output register state 2). In the second state 424 of the output register 54, the contents of the output register 54 correspond to the derivative key for the current EE. The key derivation circuit 65 implements the hardware embedded key derivation function to generate the derivative key based on the contents of the OWBR 52 and the device key 40. The device key 40 (e.g., the first input value) and the contents of the OWBR 52 in the second state 422 (e.g., the second input value) are the key material provided to the key derivation circuit 65 for the current EE. The processor 60 may read the derivative key from the output register 54 and the current EE may use this operational derivative key to encrypt/decrypt assets.[0050] In a single context string implementation, once the processor 60 reads the derivative key from the output register 54, control of the SoC system 200 may transition to a subsequent EE. Upon transition, the "current" EE becomes the prior EE and the "subsequent" EE becomes the current EE. Further, upon transition, the OWBR 52 is in the OWBR State 1 and the output register 54 is in the output register state 1 (e.g., as indicated by the dotted arrows 490 and 492). In this case of the prior EE corresponding to the single context string, the existing bit values in the OWBR 52 for the current EE may be the single context string from the prior EE. Further, the bit values in the output register 54 may be the derivative key for the prior EE. As such, the derivative key of the prior EE may be visible to the current EE. However, once the current EE transitions the OWBR 52 to the second state 422 in order to generate its own corresponding context string, the derivative key from the prior EE is cleared from the output register 54.[0051] In a pair of context strings implementation, once the processor 60 reads the derivative key from the output register 54, the OWBR 52 may transition to a third state 432 (e.g., OWBR State 3). The current EE may provide a bit pattern 456 (e.g., a second bit pattern) to a logic operation 458 (e.g., a second logic operation). The logic operation 458 may logically combine the initial context string in the OWBR 52 with the provided bit pattern 456 to effect this transition. In the third state 432, the contents of the OWBR 52 correspond to the final context string for the current EE. The value of the OWBR 52 may correspond to a value with at least one bit changed from the second state 422. For example, in the third state 432 the final context string is "00000111." The changed bit of the OWBR 52 changes the value stored in the output register 54. Therefore, the third state 432 of the OWBR 52 corresponds to a third state 434 of the output register 54 (e.g., output register state 3). The output register 54 transitions to the third state 434 concurrently with the state change of the OWBR 52. The value of the output register 54 in the third state 434 is a perturbed value. The perturbed value may be a non-operational derivative key unused for encryption/decryption operations. The perturbed value effectively clears the derivative key from the output register 54. Thus, in the third state 434 of the output register 54, the contents of the output register 54 correspond to a cleared derivative key. In other words, the final context string in the OWBR 52 changes the value stored in the output register 54 so that the value stored in the output register 54 no longer corresponds to the derivative key for the current EE. Furthermore, the value of the output register 54 is not equal to the derivative key for any other EE. Subsequent to the generation of the final context string in the OWBR 52, the control of the SoC system 200 may transition from the current EE to the subsequent EE. Upon transition, the "current" EE becomes the prior EE and the "subsequent" EE becomes the current EE. Further, upon transition, the OWBR 52 is in the OWBR State 1 and the output register 54 is in the output register state 1 (e.g., as indicated by the dotted arrows 496 and 498). In this case, the derivative key of the current EE may be invisible to the subsequent EE. As such, the pair of context string values may provide enhanced security over the single context string value. However, the single context string value may provide a simplified implementation over the pair of context string values. Thus, variations in the context string allocation protocol may provide an ability to adjust the security of the derivative keys based on particular security requirements for the electronic device.
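A minimal software sketch of the pair-of-context-strings lifecycle follows, again using HMAC-SHA256 as a stand-in for the hardware HKDF and hypothetical key and register values. It shows the operational key appearing in the (modeled) output register for the initial context string and being cleared by the perturbation caused by the final context string.

```python
import hmac
import hashlib

def output_register(device_key: bytes, owbr_value: int) -> bytes:
    """Stand-in for the hardware path: the output register always reflects
    the HKDF of the device key and the current OWBR contents."""
    return hmac.new(device_key, owbr_value.to_bytes(1, "big"), hashlib.sha256).digest()

device_key = bytes(16)       # hypothetical all-zero 128-bit device key
initial_string = 0b00000110  # OWBR State 2 (initial context string)
final_string = 0b00000111    # OWBR State 3 (final context string)

derivative_key = output_register(device_key, initial_string)  # operational key
# ... the current EE encrypts/decrypts assets with derivative_key ...

perturbed = output_register(device_key, final_string)  # after the second write
assert perturbed != derivative_key  # the operational key is cleared (state 3)
```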
[0052] Referring to FIG. 5, with further reference to FIGS. 1-4, a method 500 of obtaining a derivative key by an execution environment includes the stages shown. The method 500 is, however, an example only and not limiting. The method 500 can be altered, e.g., by having stages added, removed, rearranged, combined, performed concurrently, and/or having stages split into multiple stages. The processor 60 may execute instructions of the current EE in order to perform the method 500.[0053] At stage 510, the method 500 includes reading existing bit values in a one-time writable bit register (OWBR). For example, the processor 60 may read the existing bit values in the OWBR 52. The existing bit values may correspond to one of an unprogrammed value of the OWBR 52 at power-on reset, the value of the single context string corresponding to a prior EE, or the value of the final context string corresponding to the prior EE. [0054] The stage 510 may include verifying, by the processor 60, that the existing bit values conform to the context string allocation protocol. For example, if a prior EE, or other EE, is actively malicious and writes an unauthorized value (e.g., a value in violation of the context string allocation protocol) to the OWBR 52, the existing bit values of the OWBR 52 may provide forensic evidence of such a security breach. Any or all of the default unprogrammed value of the OWBR 52, the single context string value for the prior EE, and the final context string value for the prior EE may be less than a value of one or more context strings corresponding to the current EE according to the context string allocation protocol. The forensic evidence may be a value stored in the OWBR 52 that exceeds the value of one or more context strings corresponding to the current EE. Violation of the context string protocol may indicate malicious intent, for example, due to a hack attack, by the prior or other EE. In response to a determination that the context string allocation protocol has been violated, the processor 60 may halt execution of the current EE, generate an error message, and/or provide another indication of the protocol violation.[0055] At stage 520, the method 500 includes generating a context string in the OWBR based on existing bit values in the OWBR, the context string corresponding to a current execution environment (EE). For example, the processor 60 may generate the context string corresponding to the current EE.
The stage 520 may include providing a bit pattern (e.g., a first bit pattern) to the processor 60 based on the existing bit values in the OWBR 52. Further, the stage 520 may include logically combining the provided bit pattern with the existing bit values in order to generate the context string in the OWBR 52. For example, the processor 60 may implement a logic operation on the provided bit pattern and the existing bit values in the OWBR 52. The stage 520 may include writing the generated context string to the OWBR 52. For example, the processor 60 may write the generated context string to the OWBR 52. Writing the generated context string may include writing to one or more previously unwritten bits of the OWBR 52. Further, writing the generated context string may include writing the single context string corresponding to the current EE or writing the initial context string corresponding to the current EE.[0056] At stage 530, the method 500 includes obtaining an EE-specific key from a key derivation circuit coupled to the OWBR. For example, the processor 60 may obtain a current EE-specific derivative key output by the key derivation circuit 65 and stored in the output register 54. The current derivative key may be based on the generated key input written to the OWBR 52 (i.e., the context string corresponding to the current EE) and on the device key 40. The EE-specific key may be derived according to the HKDF as implemented by the key derivation circuit 65. However, the current derivative key is not based on a derivative key from any other EE. Neither another derivative key, nor a hash or other mathematically altered version of another derivative key, nor the result of a logical combination of the derivative key with another key forms or contributes to an input to the HKDF. For at least this reason, the derivative key is the EE-specific key and is derived independently from keys for EEs executing prior to and subsequent to the current EE. The stage 530 may include using the EE-specific key for one or more of encrypting assets, decrypting assets, or a combination thereof. For example, the processor 60 may execute instructions from the current EE to encrypt and/or decrypt assets using the current derivative key.[0057] The stage 530 may further include providing a second bit pattern to the processor 60 and generating a final context string corresponding to the current EE in the OWBR 52. For example, the processor 60 may generate the final context string corresponding to the current EE based on the logical combination of the initial context string and the provided second bit pattern. The processor 60 may write the final context string into the OWBR 52. Writing the final context string into the OWBR 52 may include writing to one or more previously unwritten bits of the OWBR 52. Once the at least one bit is written to, that bit cannot be reset to a previous value in the absence of a power-on-reset. Writing the final context string to the OWBR 52 changes the value of the OWBR 52. The changed value of the OWBR 52 changes, or perturbs, the output value from the HKDF stored in the output register 54. This effectively clears the current derivative key from the output register 54. Additionally, writing to the at least one bit of the OWBR 52 irreversibly changes the value stored in the OWBR 52 for the duration of the SoC power cycle. Therefore, the generated initial context string of the stage 520 (i.e., the key input material) is irreproducible in the OWBR 52 by a subsequent EE. In other words, during execution of subsequent EEs, the processor 60 cannot re-write bits in the OWBR 52 to reproduce the generated key input from any prior EE. In this manner, the OWBR 52 may block re-creation of the derived key for the prior EE by the subsequent EE. Due to the hardware properties of the OWBR 52, even with knowledge of the key input for the prior EE, the processor 60 cannot recreate the value of the prior key input in the OWBR 52.
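The following sketch models one pass of the method 500 from the perspective of the current EE, under the assumptions used earlier (HMAC-SHA256 stands in for the hardware HKDF; register values and the device key are hypothetical). The assertions express the OWBR constraint that a write may only set previously unwritten bits.

```python
import hmac
import hashlib

def read_output_register(device_key: bytes, owbr_value: int) -> bytes:
    """Stand-in for reading the output register fed by the hardware HKDF."""
    return hmac.new(device_key, owbr_value.to_bytes(1, "big"), hashlib.sha256).digest()

def method_500(owbr: int, initial_string: int, final_string: int, device_key: bytes) -> int:
    # Stage 510: read the existing bit values left by the prior EE (or the reset default).
    existing = owbr
    # Stage 520: generate the initial context string; only 0 -> 1 transitions are legal.
    assert initial_string & existing == existing, "would re-write an already written bit"
    owbr = initial_string
    # Stage 530: obtain the EE-specific derivative key and use it for crypto operations.
    ee_key = read_output_register(device_key, owbr)
    # ... encrypt/decrypt assets with ee_key ...
    # Write the final context string, perturbing (clearing) the output register.
    assert final_string & owbr == owbr
    owbr = final_string
    assert read_output_register(device_key, owbr) != ee_key  # key no longer readable
    return owbr  # OWBR value handed to the subsequent EE

owbr_after = method_500(0b00000010, 0b00000110, 0b00000111, bytes(16))
```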
[0058] The stage 530 may additionally include handing over control of the SoC system 200 by the current EE to a subsequent EE in the boot chain sequence. For example, the processor 60 may execute instructions for the current EE to hand over control directly to the subsequent EE. Alternatively, the processor 60 may execute instructions for the current EE to hand control to the processor 60 and execute instructions for the processor 60 to hand control to the subsequent EE. The method 500 may return to the stage 510 in order to generate a respective EE-specific key for the subsequent EE.[0059] Referring to FIG. 6, with further reference to FIGS. 1-5, a method 600 of generating a derivative key for an EE includes the stages shown. The method 600 is, however, an example only and not limiting. The method 600 can be altered, e.g., by having stages added, removed, rearranged, combined, performed concurrently and/or having stages split into multiple stages. The key derivation circuit 65, the key derivation circuit controller 68, and/or the processor 60 may perform the various stages and/or portions thereof of the method 600. Alternatively or additionally, the processor 60 may execute instructions stored in the memory 30 that cause the key derivation circuit 65, the key derivation circuit controller 68, and/or the processor 60 to perform the various stages and/or portions thereof of the method 600.[0060] At stage 610, the method 600 includes obtaining a device key by a key derivation circuit. For example, the key derivation circuit 65 may obtain the device key (e.g., a first input value) from a non-volatile memory device (e.g., the OTP memory 38). The device key may be the single device key 40. The key derivation circuit controller 68 may cause the key derivation circuit 65 to obtain the device key.[0061] At stage 620, the method 600 includes obtaining a context string by the key derivation circuit from a one-time writable bit register (OWBR). For example, the key derivation circuit 65 may obtain the context string (e.g., a second input value) from the OWBR 52. The context string may be the context string corresponding to the particular EE currently executing in the SoC system 200. The key derivation circuit controller 68 may cause the key derivation circuit 65 to obtain the context string from the OWBR 52. In an implementation, obtaining the context string may include obtaining a single context string corresponding to the current EE. In another implementation, obtaining the context string may include obtaining an initial context string and a final context string (e.g., a pair of context strings). The pair of context strings (e.g., the initial context string and the final context string) may correspond to the current EE. Obtaining the pair of context strings may include obtaining the pair of context strings sequentially. For example, the key derivation circuit controller 68 may cause the key derivation circuit 65 to obtain the initial context string prior to generating the derivative key for the current EE.
After generating the derivative key (e.g., subsequent to generating the derivative key), the key derivation circuit controller 68 may cause the key derivation circuit 65 to obtain the final context string.[0062] The stage 620 may include writing the context string to the OWBR 52. For example, the key derivation circuit controller 68 may provide a signal to the processor 60 indicative of a write operation to the OWBR 52 (i.e., a first write operation). The signal may include the context string corresponding to the current EE.[0063] The stage 620 may include determining, by the processor 60 and/or the key derivation circuit controller 68, the existing value of the OWBR 52. For example, the key derivation circuit controller 68 may provide a signal to the processor 60 indicative of a read operation from the OWBR 52. In response to the read operation, the processor 60 may provide a bit pattern and the existing bit values to a logic operation to generate the context string corresponding to the current EE. In further response to the read operation, the stage 620 may further include verifying, by the processor 60 and/or the key derivation circuit controller 68, that the existing value of the OWBR 52 follows the context string allocation protocol and does not violate the context string allocation protocol. For example, the processor 60 and/or the key derivation circuit controller 68 may determine that the existing value of the OWBR 52 follows or violates the context string allocation protocol. If the context string corresponding to the current EE is greater than the existing value in the OWBR 52, then the method 600 may include determining that the context string allocation protocol has not been violated. In this case, the boot chain sequence may proceed and the processor 60 may not generate an indication of a violation of the protocol. If the context string corresponding to the current EE is less than the existing value in the OWBR 52, then the method 600 may include determining that the context string allocation protocol has been violated. The existing value of the OWBR may correspond to the single context string or the final context string corresponding to the prior EE. For the PBL, the corresponding context string may be greater than an existing unprogrammed value of the OWBR 52. Violation of the context string protocol may indicate malicious intent, for example, due to a hack attack, by the prior or other EE. The stage 620 may include, in response to determining that the context string allocation protocol has been violated, halting execution of the current EE, generating an error message, and/or generating another indication of a violation of the context string allocation protocol. The processor 60 may halt execution, generate the error message, and/or provide the protocol violation indication. In an implementation, the key derivation circuit controller 68 may cause the processor 60 to perform one or more of these actions.
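A minimal sketch of the stage 620 protocol check follows; the comparison mirrors the rule above (a context string numerically below the existing OWBR value indicates a violation), with the error-handling choice left as an assumption.

```python
def check_allocation_protocol(context_string: int, existing_owbr_value: int) -> None:
    """Stage 620 check (software model): context string values must not
    decrease over the course of the boot chain sequence."""
    if context_string < existing_owbr_value:
        # e.g., halt the current EE, log an error message, or raise an indication
        raise RuntimeError("violation of the context string allocation protocol")

check_allocation_protocol(context_string=0b00000110, existing_owbr_value=0b00000010)  # proceeds
try:
    check_allocation_protocol(context_string=0b00000001, existing_owbr_value=0b00000010)
except RuntimeError as err:
    print(err)  # indication of a protocol violation
```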
[0064] At stage 630, the method 600 includes generating the derivative key for the current EE by the key derivation circuit based on the device key and on the context string from the OWBR. For example, the key derivation circuit 65 may generate the derivative key based on the device key 40 and the contents of the OWBR 52. The key derivation circuit controller 68 may cause the key derivation circuit 65 to provide the derivative key to the output register 54. In an embodiment, the key derivation circuit controller 68 may provide a signal to the processor 60 indicative of a read operation from the output register 54. In response to such a signal, the processor 60 may read the derivative key from the output register 54 and provide the derivative key to the current EE for use in encryption and/or decryption of assets.[0065] In an embodiment, the context string is the single context string corresponding to the current EE. In a further embodiment, the context string is a pair of context strings corresponding to the current EE. In such an embodiment, generating the derivative key includes generating the derivative key for the current EE based on the device key and on an initial context string of the pair of context strings. The derivative key based at least in part on the initial context string may be an operational derivative key for use by the EE in encryption and/or decryption operations. The derivative key may be a first output value of the key derivation circuit and the method 600 may include generating a second output value by the key derivation circuit. The second output value may be a perturbed output based at least in part on a final context string of the pair of context strings corresponding to the current EE. The perturbed output may be unequal to the derivative key. The perturbed output may be a non-operational derivative key unused by the current EE for encryption and/or decryption operations. This non-operational derivative key may replace the derivative key in the output register 54 and, thereby, clear the value of the derivative key from the output register 54. As discussed above, the key derivation circuit 65 may obtain the final context string and generate the perturbed output subsequent to generating the derivative key. For example, the processor 60 may write to at least one bit of the OWBR 52 (e.g., a second write operation). Writing the final context string (e.g., a third input value to the key derivation circuit 65) to the OWBR 52 may change the value of at least one previously unprogrammed bit in the OWBR 52. In response to the change of the at least one bit in the OWBR 52, the output value of the key derivation circuit 65, as stored in the output register 54, may change from the derivative key for the current EE to the perturbed output (i.e., the non-operational derivative key). The perturbed output value may be unequal to and unobtainable from the derivative key (i.e., the non-operational derivative key may not be derivable from the operational derivative key). The contents of the output register 54 only correspond to the derivative key as long as the values of the bits of the OWBR 52 remain unchanged from the context string used to derive the derivative key (i.e., the single context string or the initial context string). The perturbed output value does not correspond to the derivative key for the current EE or to any derivative key for usage by any EE. As such, the perturbed output value merely reflects the write operation to the OWBR 52 of the final context string and serves to effectively clear, or erase, the derivative key from the output register 54. The method 600 may return to the stage 610 in response to processor operations for a subsequent EE.

[0066] OTHER CONSIDERATIONS

[0067] Other embodiments are within the scope of the invention. For example, due to the nature of software, functions described above can be implemented using software, hardware, firmware, hardwiring, or combinations thereof.
Features implementing functions may also be physically located at various locations, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" or "one or more of" indicates a disjunctive list such that, for example, a list of "at least one of A, B, or C" means A or B or C or AB or AC or BC or ABC (i.e., A and B and C), or combinations with more than one feature (e.g., AA, AAB, ABBC, etc.). As used herein, including in the claims, unless otherwise stated, a statement that a function or operation is "based on" an item or condition means that the function or operation is based on the stated item or condition and may be based on one or more items and/or conditions in addition to the stated item or condition.

[0068] Substantial variations may be made in accordance with specific requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed.

[0069] The terms "machine-readable medium" and "processor-readable storage medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. Using a computer system, various processor-readable media (e.g., a computer program product) might be involved in providing instructions/code to processor(s) for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a processor-readable storage medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media include, for example, optical and/or magnetic disks. Volatile media include, without limitation, dynamic memory.

[0070] Common forms of physical and/or tangible processor-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer or processor can read instructions and/or code (i.e., processor-readable).

[0071] Various forms of processor-readable media may be involved in carrying one or more sequences of one or more instructions to one or more processors for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by a computer system.

[0072] Information and signals may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, and symbols that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0073] The methods, systems, and devices discussed above are examples. Various alternative configurations may omit, substitute, or add various procedures or components as appropriate. Configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional stages not included in the figure.

[0074] Specific details are given in the description to provide a thorough understanding of example configurations (including implementations). However, configurations may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configurations of the claims. Rather, the preceding description of the configurations will provide those skilled in the art with an enabling description for implementing described techniques. Various changes may be made in the function and arrangement of elements without departing from the scope of the disclosure.

[0075] Also, configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional stages or functions not included in the figure. Furthermore, examples of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the tasks may be stored in a non-transitory processor-readable medium such as a storage medium. Processors may perform the described tasks.

[0076] Components, functional or otherwise, shown in the figures and/or discussed herein as being connected or communicating with each other are communicatively coupled. That is, they may be directly or indirectly connected to enable communication between them.

[0077] Having described several example configurations, various modifications, alternative constructions, and equivalents may be used without departing from the disclosure. For example, the above elements may be components of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of operations may be undertaken before, during, or after the above elements are considered. Also, technology evolves and, thus, many of the elements are examples and do not bound the scope of the disclosure or claims. Accordingly, the above description does not bound the scope of the claims. Further, more than one invention may be disclosed.

[0082] WHAT IS CLAIMED IS: |
A compiler-controlled technique for scheduling threads to execute different regions of a program. A compiler analyzes program code to determine a control flow graph for the program code. The control flow graph contains regions and directed edges between regions. The regions have associated execution priorities. The directed edges indicate the direction of program control flow. Each region has a thread frontier which contains one or more regions. The compiler inserts one or more update predicate mask variable instructions at the end of a region. The compiler also inserts one or more conditional branch instructions at the end of the region. The conditional branch instructions are arranged in order of execution priority of the regions in the thread frontier of the region, to enforce execution priority of the regions at runtime. |
1. A method for scheduling threads to execute different regions of a program, the method comprising:
analyzing a control flow graph that is based on program code and that includes a plurality of regions, wherein each region represents a different portion of the program code, is assigned an execution priority, and has a thread frontier comprising one or more thread frontier regions, each thread frontier region being one of the plurality of regions in the control flow graph;
inserting, based on the control flow graph and the program code, one or more update predicate mask variable instructions at an end of a first region included in the plurality of regions; and
inserting one or more conditional branch instructions at the end of the first region, the conditional branch instructions being arranged to reflect the execution priorities of the one or more thread frontier regions in the thread frontier of the first region.
2. The method of claim 1, further comprising determining that a branch instruction is included at the end of the first region and replacing the branch instruction with an instruction configured to compute a branch condition bitmask variable for the first region.
3. The method of claim 2, wherein each region in the control flow graph has one or more successor regions and one or more predecessor regions, each successor region and each predecessor region being a region in the control flow graph, wherein the branch instruction at the end of the first region has a branch-taken target region and a branch-not-taken target region, each being one of the plurality of regions in the control flow graph, and wherein inserting the one or more update predicate mask variable instructions further comprises:
determining that the branch-taken target region has a plurality of predecessor regions, and inserting an instruction configured to merge threads that take the branch in the first region with threads that are waiting to execute the branch-taken target region; or
determining that the branch-taken target region does not have a plurality of predecessor regions, and inserting an instruction configured to assign threads that take the branch in the first region to the branch-taken target region.
4. The method of claim 3, further comprising:
determining that the branch-not-taken target region has a plurality of predecessor regions, and inserting an instruction configured to merge threads that do not take the branch in the first region with threads that are waiting to execute the branch-not-taken target region; or
determining that the branch-not-taken target region does not have a plurality of predecessor regions, and inserting an instruction configured to assign threads that do not take the branch in the first region to the branch-not-taken target region.
5. The method of claim 1, wherein each region in the control flow graph has one or more successor regions and one or more predecessor regions, each successor region and each predecessor region being a region in the control flow graph, and wherein inserting the one or more update predicate mask variable instructions further comprises:
determining that no branch instruction is included at the end of the first region; and
either determining that the successor region of the first region has a plurality of predecessor regions, and inserting an instruction configured to merge threads that execute the first region with threads that are waiting to execute the successor region, or determining that the successor region of the first region does not have a plurality of predecessor regions, and inserting an instruction configured to assign threads that execute the first region to the successor region of the first region.
6. The method of claim 1, wherein inserting the one or more conditional branch instructions further comprises:
inserting a plurality of conditional branch instructions at the end of the first region in order of the execution priorities of the thread frontier regions in the thread frontier of the first region, each conditional branch instruction having a respective target thread frontier region, and each conditional branch instruction being configured to:
determine whether any thread is waiting to execute the respective target thread frontier region in the thread frontier of the first region, and branch to the respective target thread frontier region if any thread is waiting to execute that region.
7. The method of claim 1, further comprising inserting one or more instructions at the beginning of the first region to set a predicate mask for the first region.
8. The method of claim 1, further comprising optimizing the one or more update predicate mask variable instructions and the one or more conditional branch instructions by performing one or more of dead code elimination and loop-invariant code motion.
9. The method of claim 1, further comprising:
inserting one or more update predicate mask variable instructions at an end of a second region of the plurality of regions; and
inserting one or more conditional branch instructions at the end of the second region, the conditional branch instructions being arranged in order of the execution priorities of the thread frontier regions in the thread frontier of the second region.
10. A computing device for scheduling threads to execute different regions of a program, the computing device comprising:
a processor; and
a memory coupled to the processor, wherein the memory includes a compiler having instructions that, when executed by the processor, cause the processor to:
analyze a control flow graph that is based on program code and that includes a plurality of regions, wherein each region represents a different portion of the program code, is assigned an execution priority, and has a thread frontier comprising one or more thread frontier regions, each thread frontier region being one of the plurality of regions in the control flow graph;
insert, based on the control flow graph and the program code, one or more update predicate mask variable instructions at an end of a first region included in the plurality of regions; and
insert one or more conditional branch instructions at the end of the first region, the conditional branch instructions being arranged to reflect the execution priorities of the one or more thread frontier regions in the thread frontier of the first region. |
Compiler-Controlled Region Scheduling for SIMD Execution of Threads

Technical Field

The present invention relates generally to parallel computing and, more particularly, to compiler-controlled region scheduling for SIMD (single instruction, multiple data) execution of threads.

Background

A single instruction, multiple data (SIMD) processor is a processor that executes an instruction set in which each instruction operates simultaneously on multiple different data values. An application written for a SIMD processor can be logically divided into "warps," where each warp is a group of "threads" that cooperate and execute simultaneously on the SIMD processor. Typically, each thread in a warp executes instructions on different data values but executes the same instructions as the other threads in the warp.

The execution of threads in a warp can diverge. Threads in a warp diverge if program instructions direct one or more threads in the warp to take a first path and one or more other threads in the warp to take a second path. Thread divergence can occur for a variety of reasons. For example, a conditional branch may occur in the warp, where each thread may or may not take the branch depending on the result of the branch condition; because evaluation of the branch condition may be based on data values that differ for each thread, threads evaluating the branch condition may obtain different results and may diverge. Such divergent execution is referred to herein as "divergent control flow."

Because all threads in a warp generally execute the same instruction, execution of a program with divergent control flow involves execution down every control flow path to which any thread flows. Executing down every control flow path in this manner may involve executing down multiple paths serially, where some threads are "active" (currently executing) while others are "inactive" (waiting to execute). Such serialized multi-path execution may cause (and generally does cause) the execution time of the entire warp to be greater than the execution time of any single thread. Techniques exist for determining which divergent threads should execute at which time. However, some existing techniques may not lend themselves to compiler optimization, may not schedule threads efficiently, or may not ensure early thread reconvergence.

As the foregoing illustrates, what is needed in the art is a more efficient technique for managing the execution of threads in a warp through different regions of a program.

Summary of the Invention

One embodiment of the present invention sets forth a method for modifying program code to schedule threads to execute different regions of a program.
The method comprises the steps of analyzing a control flow graph that is based on program code and that includes a plurality of regions, wherein each region represents a portion of the program code, each region is assigned an execution priority, and each region has a thread frontier comprising one or more thread frontier regions; inserting one or more update predicate mask variable instructions at the end of a first region included in the plurality of regions; and inserting one or more conditional branch instructions at the end of the first region, the conditional branch instructions being arranged to reflect the execution priorities of the one or more thread frontier regions in the thread frontier of the first region.

One advantage of the disclosed techniques is that they can be used to determine which threads should execute which regions of the program at what time. Another advantage is that the disclosed techniques enable threads to reconverge at an early point.

Brief Description of the Drawings

So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be understood, however, that the appended drawings illustrate only typical embodiments and are therefore not to be considered limiting of the invention's scope.

FIG. 1 is a block diagram showing a computer system configured to implement one or more aspects of the present invention;
FIG. 2 is a block diagram of a parallel processing subsystem for the computer system of FIG. 1, in accordance with one embodiment of the present invention;
FIG. 3 is a block diagram of a portion of a streaming multiprocessor within the general processing cluster of FIG. 2, in accordance with one embodiment of the present invention;
FIG. 4 is a flow diagram of method steps for providing instructions to schedule threads for executing different regions of a program, in accordance with one embodiment of the present invention;
FIG. 5A is a control flow graph illustrating prioritized execution of different regions of a program, in accordance with one embodiment of the present invention;
FIG. 5B is a conceptual diagram of exemplary predicate mask variables, in accordance with one embodiment of the present invention;
FIG. 5C is a conceptual diagram of an exemplary layout of a check block including mask update instructions and control flow transfer instructions, in accordance with one embodiment of the present invention;
FIG. 6 is a flow diagram of method steps for inserting instructions to update predicate mask variables, in accordance with one embodiment of the present invention; and
FIG. 7 is a flow diagram of method steps for determining check instructions and conditional branch instructions to be inserted into program code, in accordance with one embodiment of the present invention.

Detailed Description

In the following description, numerous specific details are set forth. However, it will be apparent to those skilled in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.

System Overview

FIG. 1 is a block diagram showing a computer system 100 configured to implement one or more aspects of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 that communicate via an interconnection path that can include a memory bridge 105. Memory bridge 105 can be, for example, a Northbridge chip connected to an I/O (input/output) bridge 107 via a bus or other communication path 106 (e.g., a HyperTransport link).
I/O bridge 107, which may be, for example, a Southbridge chip, receives user input from one or more user input devices 108 (e.g., a keyboard, a mouse) and forwards the input to CPU 102 via communication path 106 and memory bridge 105. Parallel processing subsystem 112 is coupled to memory bridge 105 via a bus or second communication path 113 (e.g., Peripheral Component Interconnect (PCI) Express, Accelerated Graphics Port, or HyperTransport link); in one embodiment, parallel processing subsystem 112 is a graphics subsystem that delivers pixels to display device 110, which may be any conventional cathode ray tube, liquid crystal display, light emitting diode display, or the like. System disk 114 is also connected to I/O bridge 107 and can be configured to store content, applications, and data for use by CPU 102 and parallel processing subsystem 112. System disk 114 provides non-volatile storage for applications and data, and may include fixed or removable hard disk drives, flash memory devices, and CD-ROM, DVD-ROM (digital versatile disc ROM), Blu-ray, HD-DVD (high definition DVD), or other magnetic, optical, or solid state storage devices.

A switch 116 provides connections between I/O bridge 107 and other components, such as a network adapter 118 and various add-in cards 120 and 121. Other components (not explicitly shown), including universal serial bus (USB) or other port connections, compact disc (CD) drives, digital versatile disc (DVD) drives, film recording devices, and the like, may also be connected to I/O bridge 107. The various communication paths shown in FIG. 1, including the specifically named communication paths 106 and 113, may be implemented using any suitable protocols, such as PCI Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol, and, as is known in the art, connections between different devices may use different protocols.

In one embodiment, parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, parallel processing subsystem 112 incorporates circuitry optimized for general-purpose processing while preserving the underlying computational architecture, as described in greater detail herein. In yet another embodiment, parallel processing subsystem 112 may be integrated with one or more other system components in a single subsystem, such as joining memory bridge 105, CPU 102, and I/O bridge 107 to form a system on chip (SoC).

Compiler 101 may be embedded within device driver 103. Compiler 101 compiles program instructions as needed for execution by parallel processing subsystem 112. During such compilation, compiler 101 may apply transformations to the program instructions at various phases of compilation. In another embodiment of the invention, compiler 101 may be a stand-alone application.

It should be understood that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as desired. For example, in some embodiments, system memory 104 is connected to CPU 102 directly rather than through a bridge, and other devices communicate with system memory 104 via memory bridge 105 and CPU 102.
In other alternative topologies, parallel processing subsystem 112 is connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 might be integrated into a single chip instead of existing as one or more discrete devices. Larger embodiments may include two or more CPUs 102 and two or more parallel processing subsystems 112. The particular components shown herein are optional; for example, any number of add-in cards or peripheral devices might be supported. In some embodiments, switch 116 is eliminated, and network adapter 118 and add-in cards 120, 121 connect directly to I/O bridge 107.

FIG. 2 illustrates a parallel processing subsystem 112, in accordance with one embodiment of the present invention. As shown, parallel processing subsystem 112 includes one or more parallel processing units (PPUs) 202, each coupled to a local parallel processing (PP) memory 204. In general, a parallel processing subsystem includes a number U of PPUs, where U ≥ 1. (Herein, multiple instances of like objects are denoted with reference numbers identifying the object and parenthetical numbers identifying the instance where needed.) PPUs 202 and parallel processing memories 204 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion.

Referring again to FIGS. 1 and 2, in some embodiments, some or all of the PPUs 202 in parallel processing subsystem 112 are graphics processors with rendering pipelines that can be configured to perform various operations related to generating pixel data from graphics data supplied by CPU 102 and/or system memory 104 via memory bridge 105 and the second communication path 113, interacting with local parallel processing memory 204 (which can be used as graphics memory, including, for example, a conventional frame buffer) to store and update pixel data, delivering pixel data to display device 110, and the like. In some embodiments, parallel processing subsystem 112 may include one or more PPUs 202 that operate as graphics processors and one or more other PPUs 202 that are used for general-purpose computations. The PPUs 202 may be identical or different, and each PPU 202 may have one or more dedicated parallel processing memory devices or no dedicated parallel processing memory device. One or more PPUs 202 in parallel processing subsystem 112 may output data to display device 110, or each PPU 202 in parallel processing subsystem 112 may output data to one or more display devices 110.

In operation, CPU 102 is the master processor of computer system 100, controlling and coordinating the operation of other system components. In particular, CPU 102 issues commands that control the operation of PPUs 202. In some embodiments, CPU 102 writes a stream of commands for each PPU 202 to a data structure (not explicitly shown in either FIG. 1 or FIG. 2) that may be located in system memory 104, parallel processing memory 204, or another storage location accessible to both CPU 102 and PPU 202. A pointer to each data structure is written to a pushbuffer to initiate processing of the stream of commands in the data structure. The PPU 202 reads command streams from one or more pushbuffers and then executes the commands asynchronously relative to the operation of CPU 102.
Execution priorities may be assigned to each pushbuffer by an application program via device driver 103 to control the scheduling of the different pushbuffers.

Referring back now to FIG. 2 as well as FIG. 1, each PPU 202 includes an I/O (input/output) unit 205 that communicates with the rest of computer system 100 via communication path 113, which connects to memory bridge 105 (or, in an alternative embodiment, directly to CPU 102). The connection of PPU 202 to the rest of computer system 100 may also vary. In some embodiments, parallel processing subsystem 112 is implemented as an add-in card that can be inserted into an expansion slot of computer system 100. In other embodiments, a PPU 202 can be integrated on a single chip with a bus bridge, such as memory bridge 105 or I/O bridge 107. In still other embodiments, some or all components of PPU 202 may be integrated with CPU 102 on a single chip.

In one embodiment, communication path 113 is a PCI Express link, in which dedicated lanes are allocated to each PPU 202, as is known in the art. Other communication paths may also be used. I/O unit 205 generates packets (or other signals) for transmission on communication path 113 and also receives all incoming packets (or other signals) from communication path 113, directing the incoming packets to the appropriate components of PPU 202. For example, commands related to processing tasks may be directed to host interface 206, while commands related to memory operations (e.g., reads from or writes to parallel processing memory 204) may be directed to memory crossbar unit 210. Host interface 206 reads each pushbuffer and outputs the command stream stored in the pushbuffer to a front end 212.

Advantageously, each PPU 202 implements a highly parallel processing architecture. As shown in detail, PPU 202(0) includes a processing cluster array 230 that includes a number C of general processing clusters (GPCs) 208, where C ≥ 1. Each GPC 208 is capable of executing a large number (e.g., hundreds or thousands) of threads concurrently, where each thread is an instance of a program. In various applications, different GPCs 208 may be allocated for processing different types of programs or for performing different types of computations. The allocation of GPCs 208 may vary depending on the workload arising for each type of program or computation.

GPCs 208 receive processing tasks to be executed from a work distribution unit within task/work unit 207. The work distribution unit receives pointers to processing tasks that are encoded as task metadata (TMD) (not shown) and stored in memory. The pointers to TMDs are included in the command stream that is stored as a pushbuffer and received by the front end unit 212 from the host interface 206. Processing tasks that may be encoded as TMDs include indices of the data to be processed, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). Task/work unit 207 receives tasks from the front end 212 and ensures that GPCs 208 are configured to a valid state before the processing specified by each TMD is initiated. A priority of execution may be specified for each TMD that is used to schedule the processing task. Processing tasks can also be received from the processing cluster array 230.
Alternatively, the TMD may include a parameter that controls whether the TMD is added to the head or the tail of a list of processing tasks (or a list of pointers to the processing tasks), thereby providing another level of control in addition to priority.

Memory interface 214 includes a number D of partition units 215 that are each directly coupled to a portion of parallel processing memory 204, where D ≥ 1. As shown, the number of partition units 215 generally equals the number of dynamic random access memories (DRAMs) 220. In other embodiments, the number of partition units 215 may not equal the number of memory devices. Persons of ordinary skill in the art will appreciate that DRAM 220 may be replaced with other suitable storage devices and can be of a generally conventional design; a detailed description is therefore omitted. Render targets, such as frame buffers or texture maps, may be stored across DRAMs 220, allowing partition units 215 to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processing memory 204.

Any one of the GPCs 208 may process data to be written to any of the DRAMs 220 within parallel processing memory 204. Crossbar unit 210 is configured to route the output of each GPC 208 to the input of any partition unit 215 or to another GPC 208 for further processing. GPCs 208 communicate with memory interface 214 through crossbar unit 210 to read from or write to various external memory devices. In one embodiment, crossbar unit 210 has a connection to memory interface 214 to communicate with I/O unit 205, as well as a connection to local parallel processing memory 204, thereby enabling the processing cores within the different GPCs 208 to communicate with system memory 104 or other memory that is not local to PPU 202. In the embodiment shown in FIG. 2, crossbar unit 210 is directly connected with I/O unit 205. Crossbar unit 210 may use virtual channels to separate traffic streams between the GPCs 208 and partition units 215.

Additionally, GPCs 208 can be programmed to execute processing tasks relating to a wide variety of applications, including but not limited to linear and nonlinear data transforms, filtering of video and/or audio data, modeling operations (e.g., applying laws of physics to determine position, velocity, and other attributes of objects), image rendering operations (e.g., tessellation shader, vertex shader, geometry shader, and/or pixel shader programs), and so on. PPUs 202 may transfer data from system memory 104 and/or local parallel processing memories 204 into internal (on-chip) memory, process the data, and write result data back to system memory 104 and/or local parallel processing memories 204, where such data can be accessed by other system components, including CPU 102 or another parallel processing subsystem 112.

A PPU 202 may be provided with any amount of local parallel processing memory 204, including no local memory, and may use local memory and system memory in any combination. For example, a PPU 202 can be a graphics processor in a unified memory architecture (UMA) embodiment. In such embodiments, little or no dedicated graphics (parallel processing) memory would be provided, and the PPU 202 would use system memory exclusively or almost exclusively.
In a UMA embodiment, the PPU 202 may be integrated into a bridge chip or processor chip, or provided as a discrete chip with a high-speed link (e.g., PCI Express) connecting the PPU 202 to system memory via a bridge chip or other communication means.

As indicated above, any number of PPUs 202 can be included in a parallel processing subsystem 112. For example, multiple PPUs 202 can be provided on a single add-in card, or multiple add-in cards can be connected to communication path 113, or one or more PPUs 202 can be integrated into a bridge chip. The PPUs 202 in a multi-PPU system may be identical to or different from one another. For example, different PPUs 202 might have different numbers of processing cores, different amounts of local parallel processing memory, and so on. Where multiple PPUs 202 are present, those PPUs may be operated in parallel to process data at a higher throughput than is possible with a single PPU 202. Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including desktop, laptop, or handheld personal computers, servers, workstations, game consoles, embedded systems, and the like.

Multiple processing tasks can be executed concurrently on the GPCs 208, and a processing task can generate one or more "child" processing tasks during execution. Task/work unit 207 receives the tasks and dynamically schedules the processing tasks and child processing tasks for execution by the GPCs 208.

FIG. 3 is a block diagram of a streaming multiprocessor (SM) 310 within a GPC 208 of FIG. 2, in accordance with one embodiment of the present invention. Each GPC 208 may be configured to execute a large number of threads in parallel, where the term "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the GPCs 208. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons of ordinary skill in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.

Operation of GPC 208 is advantageously controlled via a pipeline manager (not shown) that distributes processing tasks to one or more streaming multiprocessors (SMs) 310, where each SM 310 is configured to process one or more thread groups. Each SM 310 includes an instruction L1 cache 370 that is configured to receive instructions and constants via an L1.5 cache (not shown) within the GPC 208. A warp scheduler and instruction unit 312 receives instructions and constants from the instruction L1 cache 370 and controls the local register file 304 and the SM 310 functional units in accordance with the instructions and constants. The SM 310 functional units include N exec (execution or processing) units 302 and P load-store units (LSUs) 303.
As is known in the art, the SM functional units may be pipelined, allowing a new instruction to be issued before a previous instruction has completed. Any combination of functional execution units may be provided. In one embodiment, the functional units support a wide variety of operations, including integer and floating point arithmetic (e.g., addition and multiplication), comparison operations, Boolean operations (AND, OR, XOR), bit-shifting, and computation of various algebraic functions (e.g., planar interpolation, trigonometric, exponential, and logarithmic functions, etc.); and the same functional unit hardware can be leveraged in a balanced manner to perform different operations.

As previously defined herein, the series of instructions transmitted to a particular GPC 208 constitutes a thread, and the collection of concurrently executing threads across the parallel processing engines (not shown) within an SM 310 is referred to herein as a "warp" or "thread group." As used herein, a "thread group" refers to a group of threads concurrently executing the same program on different input data, with one thread of the group being assigned to a different processing engine within the SM 310. A thread group may include fewer threads than the number of processing engines within the SM 310, in which case some processing engines will be idle during cycles when that thread group is being processed. A thread group may also include more threads than the number of processing engines within the SM 310, in which case processing will take place over consecutive clock cycles. Since each SM 310 can support up to G thread groups concurrently, it follows that in a GPC 208 that includes M streaming multiprocessors 310, up to G*M thread groups can be executing in the GPC 208 at any given time.

Additionally, a plurality of related thread groups may be active (in different phases of execution) at the same time within an SM 310. This collection of thread groups is referred to herein as a "cooperative thread array" ("CTA") or "thread array." The size of a particular CTA is equal to m*k, where k is the number of concurrently executing threads in a thread group and is typically an integer multiple of the number of parallel processing engines within the SM 310, and m is the number of thread groups simultaneously active within the SM 310. The size of a CTA is generally determined by the programmer and by the amount of hardware resources, such as memory or registers, available to the CTA.

In embodiments of the present invention, it may be desirable to use a PPU 202 or other processor(s) of a computing system to perform general-purpose computations using thread arrays. Each thread in a thread array is assigned a unique thread identifier ("thread ID") that is accessible to the thread during the thread's execution. The thread ID, which can be defined as a one-dimensional or multi-dimensional value, controls various aspects of the thread's processing behavior. For example, the thread ID may be used to determine which portion of an input data set a thread is to process and/or to determine which portion of an output data set a thread is to produce or write.

A sequence of per-thread instructions may include at least one instruction that defines cooperative behavior between a representative thread of the thread array and one or more other threads.
For example, the sequence of per-thread instructions might include an instruction to suspend execution of operations for the representative thread at a particular point in the sequence until such time as one or more of the other threads reach that particular point, an instruction for the representative thread to store data in a shared memory to which one or more of the other threads have access, an instruction for the representative thread to atomically read and update data stored in a shared memory to which one or more of the other threads have access based on their thread IDs, or the like. The CTA program can also include an instruction to compute an address in the shared memory from which data is to be read, with the address being a function of thread ID. By defining suitable functions and providing synchronization techniques, data can be written to a given location in shared memory by one thread of a CTA and read from that location by a different thread of the same CTA in a predictable manner. Consequently, any desired pattern of data sharing among threads can be supported, and any thread in a CTA can share data with any other thread in the same CTA. The extent, if any, of data sharing among threads of a CTA is determined by the CTA program; thus, it is to be understood that in a particular application that uses CTAs, the threads of a CTA might or might not actually share data with each other, depending on the CTA program, and the terms "CTA" and "thread array" are used synonymously herein.

SM 310 provides on-chip (internal) data storage with different levels of accessibility. Special registers (not shown) are readable but not writable by LSU 303 and are used to store parameters defining each thread's "position." In one embodiment, the special registers include one register per thread (or per exec unit 302 within SM 310) that stores a thread ID; each thread ID register is accessible only by a respective one of the exec units 302. Special registers may also include additional registers, readable by all threads (or by all LSUs 303) that execute the same processing task represented by a TMD, that store a CTA identifier, the CTA dimensions, the dimensions of a grid to which the CTA belongs (or queue position if the TMD encodes a queue task instead of a grid task), and an identifier of the TMD to which the CTA is assigned.

If the TMD is a grid TMD, execution of the TMD causes a fixed number of CTAs to be launched and executed to process the fixed amount of data stored in the queue. The number of CTAs is specified as the product of the grid width, height, and depth. The fixed amount of data may be stored in the TMD, or the TMD may store a pointer to the data that will be processed by the CTAs. The TMD also stores a starting address of the program that is executed by the CTAs.

If the TMD is a queue TMD, then a queue feature of the TMD is used, meaning that the amount of data to be processed is not necessarily fixed. Queue entries store data for processing by the CTAs assigned to the TMD. The queue entries may also represent a child task that is generated by another TMD during execution of a thread, thereby providing nested parallelism. Typically, execution of the thread, or of the CTA that includes the thread, is suspended until execution of the child task completes. The queue may be stored in the TMD or separately from the TMD, in which case the TMD stores a queue pointer to the queue. Advantageously, data generated by the child task may be written to the queue while the TMD representing the child task is executing.
The queue may be implemented as a circular queue so that the total amount of data is not limited to the size of the queue.

CTAs that belong to a grid have implicit grid width, height, and depth parameters indicating the position of the respective CTA within the grid. The special registers are written during initialization in response to commands received via front end 212 from device driver 103, and the special registers do not change during execution of a processing task. The front end 212 schedules each processing task for execution. Each CTA is associated with a specific TMD for concurrent execution of one or more tasks. Additionally, a single GPC 208 may execute multiple tasks concurrently.

A parameter memory (not shown) stores runtime parameters (constants) that can be read but not written by any thread within the same CTA (or any LSU 303). In one embodiment, device driver 103 provides parameters to the parameter memory before directing SM 310 to begin execution of a task that uses these parameters. Any thread within any CTA (or any exec unit 302 within SM 310) can access global memory through memory interface 214. Portions of global memory may be stored in the L1 cache 320.

A local register file 304 is used by each thread as scratch space; each register is allocated for one thread, and data in any portion of the local register file 304 is accessible only to the thread to which the register is allocated. The local register file 304 can be implemented as a register file that is physically or logically divided into P lanes, each lane having some number of entries (where each entry might store, for example, a 32-bit word). One lane is assigned to each of the N exec units 302 and P load-store units (LSUs) 303, and corresponding entries in different lanes can be populated with data for different threads executing the same program to facilitate SIMD execution. Different portions of the lanes can be allocated to different ones of the G concurrent thread groups, so that a given entry in the local register file 304 is accessible only to a particular thread. In one embodiment, certain entries within the local register file 304 are reserved for storing thread identifiers, implementing one of the special registers. Additionally, a uniform L1 cache 320 stores uniform or constant values for each of the N exec units 302 and P load-store units (LSUs) 303.

Shared memory 306 is accessible to threads within a single CTA; in other words, any location in shared memory 306 is accessible to any thread within the same CTA (or to any processing engine within SM 310). Shared memory 306 can be implemented as a shared register file or shared on-chip cache memory with an interconnect that allows any processing engine to read from or write to any location in the shared memory. In other embodiments, shared state space might map onto a per-CTA region of off-chip memory and be cached in L1 cache 320. The parameter memory can be implemented as a designated section within the same shared register file or shared cache memory that implements shared memory 306, or as a separate shared register file or on-chip cache memory to which the LSUs 303 have read-only access. In one embodiment, the area that implements the parameter memory is also used to store the CTA ID and task ID, as well as CTA and grid dimensions or queue position, implementing portions of the special registers.
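The cooperative pattern that shared memory 306 enables, in which each thread writes its own slot of group-shared storage, the group synchronizes, and each thread then reads a slot written by a different thread, can be sketched with ordinary CPU threads. The following C++20 sketch is an analogy only, not device code: std::vector stands in for shared memory 306, std::barrier for a CTA-wide synchronization primitive, and the four-thread group size is arbitrary.

    #include <barrier>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        constexpr int kThreads = 4;
        std::vector<int> shared(kThreads);   // stands in for shared memory 306
        std::barrier<> sync(kThreads);       // stands in for a CTA-wide barrier

        std::vector<std::jthread> cta;      // the "CTA" of cooperating threads
        for (int tid = 0; tid < kThreads; ++tid) {
            cta.emplace_back([&shared, &sync, tid] {
                shared[tid] = tid * tid;                      // write "my" slot
                sync.arrive_and_wait();                       // all writes now visible
                int neighbor = shared[(tid + 1) % kThreads];  // read another thread's slot
                std::printf("thread %d read %d\n", tid, neighbor);
            });
        }
        return 0;  // std::jthread joins automatically on destruction
    }

The barrier plays the role of the suspend-until-other-threads-arrive instruction described above: without it, a thread could read a neighboring slot before that slot had been written, and the predictable producer/consumer ordering would be lost.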
Each LSU 303 in SM 310 is coupled to a unified address mapping unit 352 that converts an address provided for load and store instructions that are specified in a unified memory space into an address in each distinct memory space. Consequently, an instruction may be used to access any of the local, shared, or global memory spaces by specifying an address in the unified memory space.

The L1 cache 320 in each SM 310 can be used to cache private per-thread local data as well as per-application global data. In some embodiments, per-CTA shared data may be cached in the L1 cache 320. The LSUs 303 are coupled to shared memory 306 and the L1 cache 320 via a memory and cache interconnect 380.

It should be understood that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, such as SMs 310, may be included within a GPC 208. Further, as shown in FIG. 2, a PPU 202 may include any number of GPCs 208 that are advantageously functionally similar to one another, so that execution behavior does not depend on which GPC 208 receives a particular processing task. Further, each GPC 208 advantageously operates independently of other GPCs 208, using separate and distinct processing units and L1 caches, to execute tasks for one or more applications.

Persons skilled in the art will understand that the architecture depicted in FIGS. 1-3 in no way limits the scope of the present invention and that the techniques taught herein may be implemented on any suitably configured processing unit without departing from the scope of the invention, including, without limitation, one or more CPUs, one or more multi-core CPUs, one or more PPUs 202, one or more GPCs 208, one or more graphics or special purpose processing units, or the like.

Compiler-Controlled Region Scheduling for SIMD Execution of Threads

As stated above, a warp is a group of threads that executes on an SM 310. At times, the warp may diverge, meaning that threads in the warp are directed to follow different control flow paths. These different control flow paths generally cannot be executed simultaneously on the SM 310; therefore, the different control flow paths are executed at different times. Techniques are generally used to schedule the threads that execute the different control flow paths. The techniques provided herein are compiler-controlled techniques that modify program code to insert instructions that schedule threads through the use of predication (which selectively disables execution of threads) and branch instructions. The predication and branch instructions act to enforce a predetermined, prioritized flow of control through the program.

FIG. 4 is a flow diagram of a method 400 having steps by which compiler 101 inserts instructions into program code to schedule threads executing on different control flow paths, in accordance with one embodiment of the present invention. The description of the method 400 shown in FIG. 4 may refer to FIGS. 5A-5C, which depict an exemplary control flow graph 500, exemplary predicate mask variables 520, and an exemplary check block 510. Although the method steps are described in conjunction with FIGS. 1-3 and 5A-5C, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention. Throughout this disclosure, compiler 101 is described as performing certain steps for the region being analyzed.
As used throughout this disclosure, the symbol "R" refers to the region being analyzed, also referred to herein as the "analysis region."

As shown, the method 400 begins at step 402, where, as is known, compiler 101 may analyze the program code using control flow analysis to generate a control flow graph, such as the control flow graph 500 shown in FIG. 5A. Control flow graph 500 conceptually illustrates the flow of control for program code that includes a particular sequence of instructions in a program.

Referring to FIG. 5A, control flow graph 500 includes one or more "regions," such as regions 1-8 (502-1 through 502-8), which have execution priorities 504 and are connected by directed edges 506. A control flow graph is a representation, using graph notation, of all paths that might be traversed by a thread during its execution. A basic block is a finite sequence of instructions with one entry point and one exit point. A region is a connected subgraph of a control flow graph, and a region contains at least one basic block. Threads that enter a single region 502 together generally do not diverge, at least until they have finished executing that single region 502. If divergent code (such as a conditional branch instruction) exists at the end of a region 502, the threads that executed the region 502 together may diverge. For example, divergence of threads is possible at the end of region 1 (502-1). As indicated by directed edges 506-1 and 506-2, some threads may proceed to region 2 (502-2), while other threads may proceed to region 3 (502-3). In general, threads travel in the directions indicated by the arrows of directed edges 506-1 through 506-10. While some arrows in control flow graph 500, such as arrows 506-1 and 506-2, indicate diverging paths, other arrows, such as 506-9 and 506-10, indicate converging paths. Threads departing from both region 7 (502-7) and region 4 (502-4) converge in region 8 (502-8). A region can have predecessors and successors. If a directed edge points from a first region to a second region, then the first region is a predecessor of the second region. Correspondingly, if a directed edge points from the second region to the first region, then the first region is a successor of the second region. Predecessors and successors may also be referred to herein as "predecessor regions" and "successor regions."
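To make this graph vocabulary concrete, the following C++ sketch shows one plausible in-memory form for a region; the structure and field names are assumptions for exposition and are not drawn from the patent.

    #include <vector>

    // Illustrative record a compiler such as compiler 101 might keep for one
    // region of the control flow graph of FIG. 5A.
    struct Region {
        int id;                            // e.g., 1..8 for regions 502-1 through 502-8
        int priority;                      // execution priority 504; higher executes first
        std::vector<int> successors;       // regions reached via directed edges 506
        std::vector<int> predecessors;     // regions with directed edges 506 into this one
        std::vector<int> thread_frontier;  // regions where threads may be waiting (step 404)
    };

    // A region with more than one predecessor is a potential reconvergence
    // point; this is the distinction the mask-insertion logic draws between
    // merging threads into a region and simply assigning them to it.
    bool is_reconvergence_point(const Region& r) {
        return r.predecessors.size() > 1;
    }

In this representation, region 8 (502-8) of FIG. 5A would list regions 7 (502-7) and 4 (502-4) as predecessors, making it a reconvergence point.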
5A, if there is a thread waiting to execute the area 2 having the priority 7, and waiting for the thread of the area 3 having the priority 6, the thread waiting for the execution area 2 is waiting for the execution area 3 The thread is executed before. Any conventional algorithm can be used to determine the execution priority of the regions in the control flow graph.The control flow required to enforce the execution priority of the zone may require the use of implicit branches such as branches 508-1 - 508-3. These branches 508 ensure that the regions 502 are executed in order of priority by providing paths between regions 502 that are not necessarily linked together by the original program code. For example, if some threads have passed zone 2502-2, which has just completed execution, but there are some threads waiting for both execution zone 4502-4 and zone 3502-3, then zone 3 502-3 is typically executed first. However, there is no directed edge from zone 2502-2 to zone 3502-3. Thus, implicit branch 508-1 can be used to transfer control from zone 2502-2 to zone 3502-3.Referring back to FIG. 4, at step 404, compiler 101 determines a thread boundary for each zone 502. For each zone, the thread boundary includes all zones 502 in which the thread may be waiting to execute, and considers the directed edge 506 and the execution priority 506 in the figure. Region 502 in the thread boundary herein may be referred to as a "thread boundary region."For example, referring again to FIG. 5A, at the top of control flow graph 500, at the beginning of zone 1502-1, the thread boundary does not include a zone. This is because when a thread is executing zone 1, it is unlikely that any thread has reached any other zone, as zone 1502-1 is the location where control flow graph 500 begins. However, at the end of zone 1502-1, it is possible for the thread to proceed to zone 2502-2 or zone 3502-3. Therefore, the thread boundary at the end of zone 1 includes zone 2502-2 and zone 3502-3.At the end of zone 2502-1, the thread boundary includes zones 4502-4, 5502-5, and 3502-3. This is due to two reasons: the divergent control flow at the end of zone 2502-2 as indicated by arrows 506-3 and 506-4, and the zone 504-2 being higher than zone 3 due to zone 2-502. Priority 504-3 so some threads may wait for execution area 3502-3. Therefore, there are three places where the thread may wait for execution: the area 3502-3 where the thread branching from the area 1502-1 may wait, and the branch targets 502-4 and 502-5.Continuing with the example, at the end of zone 3502-3, the thread boundary includes zones 4502-4, 5502-5, and 6502-6, but does not include zone 2502-2. The area 2502-2 is not included because it has a higher priority (7) than the priority (6) of the area 3 502-3. Thus, at the end of zone 3502-3, all threads that have entered zone 2502-2 have completed execution and there are no threads waiting for execution zone 2 502-2. However, since 3502-3 has a higher priority than the two branch target areas of the area 2502-2 including the areas 4502-4 and 5502-5, some threads can be transferred before the control is transferred to the area 3503-3. From the zone 2502-2 flow zones 4502-4 and 5502-5, and while the thread is executing the zone 3502-3, there may be threads waiting for the execution zone 4502-4 or 502-5. In addition, because region 6502-6 is the branch target of zone 3-502-3, the thread can flow from zone 3502-3 to zone 6502-6 and thus the thread may wait for execution zone 6506-6. 
Skipping ahead, the thread boundary for zone 7 502-7 includes zones 4 502-4 and 8 502-8. Zone 4 502-4 is included in the thread boundary because there may be threads that branched from zone 2 502-2 waiting to execute zone 4 502-4. Because zone 4 502-4 has priority 2, which is lower than priority 3 of zone 7 502-7, any thread waiting to execute zone 4 502-4 has no chance to execute until zone 7 502-7 has completed execution. Zone 8 502-8 is the successor of zone 7 502-7, but its priority is lower than the priority of zone 4 502-4, so threads may also be waiting in zone 8 502-8. The thread boundaries can be determined in any known way.

Referring back to Figure 4, at step 406, compiler 101 inserts instructions to initialize the assertion mask variables to zero. An assertion mask variable exists for each zone and indicates which threads are executing or waiting to execute the zone corresponding to that assertion mask variable. An assertion mask variable with a value of 0 indicates that no thread is currently executing or waiting to execute the corresponding zone. When the program starts, all assertion mask variables are initialized to 0 because no threads have yet entered any zone. A dedicated assertion mask variable "m," also referred to herein as the current zone assertion mask, holds the assertion mask for the zone currently being executed. When control flows to a new zone, the variable m is set equal to the assertion mask variable for the new zone, and instructions are predicated based on the variable m.

Figure 5B is an illustration of a conceptual version of exemplary assertion mask variables 520A, 520B. Assertion mask variables 520A, 520B each comprise a bit mask having a number of bits equal to the number of threads in the warp. In Figure 5B, each assertion mask variable 520A, 520B has four bits, 524A-1 through 524A-4 and 524B-1 through 524B-4, respectively. Each bit in the bit mask represents a different thread in the warp. Thus, assertion mask variables 520A, 520B can be used with a warp of four threads. An indication of which thread each bit 524 corresponds to is shown above each bit in Figure 5B with the notation "(Tx)," where "x" is the number identifying the thread. A value of 1 in the bit mask means that the corresponding thread is currently executing or waiting to execute the corresponding zone, while a value of 0 means that the corresponding thread is neither currently executing nor waiting to execute the corresponding zone.

In Figure 5B, the bits 524A, 524B have values 522A-1 through 522A-4 and 522B-1 through 522B-4. When all of the bits 524A in variable 520A are set to 0, variable 520A represents a zone in which no thread is executing or waiting to execute. In variable 520B, bits 522B-1 and 522B-4 are set to 0, and bits 522B-2 and 522B-3 are set to 1. Thus, threads T2 and T3 are currently executing or waiting to execute the zone represented by assertion mask variable 520B, while threads T1 and T4 are neither executing nor waiting to execute that zone.
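As an illustration of the bit mask layout just described, the following C sketch encodes the two example assertion mask variables 520A and 520B for a four-thread warp, with bit k representing thread T(k+1). The sketch and its names are illustrative assumptions, not part of the disclosure.

#include <stdint.h>

/* One assertion mask variable per zone; bit k represents thread T(k+1).
 * A 1 means the thread is executing or waiting to execute the zone. */
typedef uint8_t assert_mask_t;              /* 4 threads -> 4 bits used */

static const assert_mask_t mask_520A = 0x0; /* 0b0000: no threads       */
static const assert_mask_t mask_520B = 0x6; /* 0b0110: threads T2, T3   */

/* Test whether any thread is executing or waiting on a zone. */
static int zone_has_threads(assert_mask_t m)
{
    return m != 0;
}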
Referring back to Figure 4, at step 407, compiler 101 inserts an instruction at the beginning of each zone to set the bit mask variable "m" equal to the assertion mask variable for the corresponding zone. The bit mask variable "m" contains information about which threads are executing the current zone and has a format similar to the bit mask variables 520A and 520B shown in Figure 5B. Specifically, a "1" in m indicates that the respective thread is executing the current zone, and a "0" in m indicates that the respective thread is not currently executing the current zone.

At step 408, compiler 101 analyzes a zone, referred to herein as the "analysis zone" or by the symbol "R," and inserts into check block 510 of the analysis zone R, shown in Figure 5A, instructions for updating the assertion mask variables 520 shown in Figure 5B for the successors of the analysis zone R. At step 408, such instructions are inserted into the check blocks 510 (shown in Figure 5A) for all zones in the control flow graph being analyzed, such as control flow graph 500 of Figure 5A.

Referring again to Figure 5A, the successors of a zone, such as zone 2 502-2, include the zones that are targets of the branch instruction at the end of zone 2 502-2 and any zones to which a directed edge 506 indicates control should flow, but do not include zones indicated by implicit branches 508. This is because a branch 508 only represents a transfer of control to threads waiting in another zone; since threads may only flow along the directed edges 506 in the graph, it does not represent threads actually moving to that zone. The successors of zone 2 502-2 thus include zones 4 502-4 and 5 502-5, but do not include zone 3 502-3.

Still referring to Figure 5A, a check block 510 is present at the end of each zone 502. Figure 5C shows an exemplary check block 510 that includes mask update instructions 530 (described below with respect to step 408) followed by control flow transfer instructions 532 (described below with respect to steps 410 and 412). The instructions that update the assertion mask variables 520 (also referred to herein as "mask update instructions" 530) allow the assertion mask variables 520 to be updated at runtime, i.e., during program execution.

Referring back to Figure 4, at steps 410 and 412, compiler 101 inserts check instructions and conditional branch instructions, which are conditioned on the check instructions and point to zones waiting to be executed. These check instructions and conditional branch instructions are inserted as the control flow transfer instructions 532 of the check block 510 shown in Figure 5C. Steps 410 and 412 are described in more detail below with respect to Figure 7, which depicts a method for inserting check and branch instructions for an analysis zone. The method depicted in Figure 7 can be repeated for each zone into which check and branch instructions are to be inserted.

At step 414, the instructions inserted in method 400 are optimized for efficiency. Once compiler 101 has replaced the branch instructions with instructions that manipulate the assertion masks, the code can be further optimized. Many optimization techniques can be used, such as common subexpression elimination, constant propagation, strength reduction, loop-invariant code motion, and partial redundancy elimination. These optimization techniques can remove some of the instructions inserted at the ends of the zones.

Figures 6 and 7 illustrate exemplary methods by which compiler 101 analyzes the zones and performs steps 408, 410, and 412 shown in Figure 4.
More specifically, Figure 6 shows an exemplary method for performing step 408 for a zone, and Figure 7 presents an exemplary method for performing steps 410 and 412 for a zone. Because compiler 101, in executing the method steps depicted in Figures 6 and 7, inserts instructions into a single zone, compiler 101 can repeat the methods depicted in Figures 6 and 7 for all zones.

The disclosure provided with respect to Figures 6 and 7 may refer to certain pseudo code expressions. These pseudo code expressions are intended to conceptually represent program instructions, but do not necessarily represent instructions in any particular language or instruction set. The description provided with respect to Figures 6 and 7 may also refer to certain symbols. Such symbols include: "R" — the zone being analyzed, also known as the "analysis zone," to which the methods described in Figures 6 and 7 are applied by the compiler; "s" — the assertion mask variable for the successor or branch-taken target of the analysis zone; "n" — the assertion mask variable for the branch-not-taken target; and "C" — the branch condition bit mask variable for any branch at the end of the analysis zone.

Figure 6 is a flow diagram of method steps for inserting instructions to update the assertion mask variables for a particular zone, in accordance with one embodiment of the present invention. Although the method steps are described in conjunction with Figures 1-3 and 5A-5C, those skilled in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the present invention.

The method 600 depicted in Figure 6 determines the instructions to be inserted into a single zone, which is analyzed by compiler 101 and is also referred to as the analysis zone or zone R. To determine the instructions to be inserted into each zone in the control flow graph, the method of Figure 6 is repeated for all such zones.

Using the steps of method 600 shown in Figure 6, compiler 101 analyzes the analysis zone, i.e., zone R, and determines the instructions to insert into the check block 510 at the end of R. The control flow graph and the thread boundaries have already been generated in steps 402 and 404, respectively. These control flow graphs and thread boundaries can be used by compiler 101 to perform steps 601-622 shown in Figure 6.

As shown, method 600 begins at step 601, where compiler 101 selects the zone to be analyzed — the analysis zone, also referred to as zone R. At step 602, compiler 101 determines whether zone R ends with a branch instruction. The paths followed by the "yes" and "no" arrows extending from step 602 in Figure 6 reflect the fact that if the analysis zone does end with a branch instruction, the analysis zone has two successors, and assertion mask variables must therefore be updated for two different zones — the branch-taken target and the branch-not-taken target; but if the analysis zone does not end with a branch instruction, then the analysis zone has only one successor, and only one assertion mask variable is updated. Steps 604, 606, and 608 represent the flow logic for setting the assertion mask variable for the branch-taken target, while steps 610, 612, and 614 represent the flow logic for setting the assertion mask variable for the branch-not-taken target.
Steps 618 and 620 represent the flow logic for setting the assertion mask variable for a non-branch successor.

If the analysis zone ends with a branch instruction, compiler 101 proceeds to step 603, where compiler 101 removes the branch instruction and inserts instructions to determine the value of the branch condition bit mask variable, referred to by the symbol "C." The branch condition bit mask variable is a bit mask in which each bit corresponds to a different thread, formatted in a manner similar to the assertion mask variables 520A and 520B of Figure 5B. For each bit in the branch condition bit mask variable, a thread that evaluates the branch condition of the removed branch instruction as true is reflected by the value "1," and a thread that evaluates the branch condition as false is reflected by the value "0." Therefore, C contains information about which threads "take" the branch (indicated by 1) and which threads do "not take" the branch (indicated by 0).

At step 604, compiler 101 examines the branch-taken target of the analysis zone to see whether it has multiple predecessors. If the branch-taken target does have multiple predecessors, method 600 proceeds to step 606. If the branch-taken target has multiple predecessors, then threads from different zones may converge there. In other words, threads executing the analysis zone and threads executing any other predecessor of the branch-taken target may converge at the branch-taken target. Thus, at step 606, compiler 101 inserts instructions to merge the threads from the analysis zone that take the branch with the threads that are already waiting to execute the branch-taken target.

Compiler 101 may implement step 606 by inserting instructions corresponding to the pseudo code s|=m&C, where "s" is the assertion mask variable for the branch-taken target (similar to one shown in Figure 5B), "|=" is the bitwise OR compound assignment operator, m is the assertion mask variable for the analysis zone R (similar to one described in Figure 5B), "&" is the bitwise AND operator, and "C" is the branch condition bit mask variable for the branch detected at step 602.

The expression "m&C," where "&" is the bitwise AND operator, produces a result bit mask in which, for each thread represented by a bit in the resulting bit mask, there is a "1" only if the corresponding thread is executing R and takes the branch referenced at step 602. Thus, the expression "m&C" produces a bit mask indicating which threads from R take the branch.

The bitwise OR operation in the expression s|=m&C merges the threads taking the branch removed from the analysis zone with the threads from the other predecessors of the branch-taken target. The bitwise OR preserves all of the threads previously indicated by the assertion mask variable for the branch-taken target (referred to by the symbol "s") and adds the new threads from the analysis zone. Compiler 101 executes the method 600 represented in the flow chart of Figure 6 for all other zones, including the other predecessors of the branch-taken target described above. Therefore, instructions exist in all predecessors of the branch-taken target to update the assertion mask variable for the branch-taken target.
The bitwise OR ensures that all threads indicated by the mask update instructions in all predecessors of the branch-taken target are merged at the branch-taken target.

If the branch-taken target does not have multiple predecessors, method 600 proceeds to step 608 and inserts instructions to assign the threads from the analysis zone that take the removed branch to the branch-taken target.

Compiler 101 can implement step 608 by inserting instructions corresponding to the pseudo code s=m&C. At step 608, there is no convergence of threads from other predecessor zones at the branch-taken target. Thus, compiler 101 does not require a bitwise OR operation and can simply assign the value "m&C" (which indicates which threads from the analysis zone take the branch) to the assertion mask variable for the branch-taken target, referred to by the symbol "s."

Compiler 101 next proceeds to step 610. At step 610, compiler 101 checks whether the branch-not-taken target has multiple predecessors. If the branch-not-taken target does have multiple predecessors, compiler 101 proceeds to step 612 and inserts instructions to merge the threads that do not take the branch with the threads waiting to execute the branch-not-taken target.

The compiler may implement step 612 by inserting instructions corresponding to the pseudo code n|=m&~C, where "n" is the assertion mask variable for the branch-not-taken target and "~" is the bitwise NOT operator. The expression "m&~C" represents the threads from the current zone that do not take the branch. The expression "~C" is the bitwise NOT of the branch condition bit mask variable C and indicates the threads that do not take the branch. Conceptually, since the bitwise NOT operation inverts all bits in the bit mask (i.e., from 0 to 1 and from 1 to 0), and since the bits in bit mask C indicate which threads take the branch, the expression "~C" indicates all threads that do not take the branch. For example, if the second bit of C is 1, indicating that the second thread takes the branch, then the second bit of the bitwise NOT of C, ~C, is equal to 0, indicating that the second thread does not take the branch. Therefore, the expression "m&~C" indicates which threads executing R do not take the branch. The expression n|=m&~C merges the threads waiting at the branch-not-taken target with the threads from the analysis zone that do not take the branch, in a manner similar to that described for steps 606 and 608.

If the branch-not-taken target does not have multiple predecessors, compiler 101 proceeds to step 614 and inserts instructions to assign the threads of the analysis zone that do not take the branch to the branch-not-taken target. Compiler 101 can implement step 614 by inserting instructions corresponding to the pseudo code n=m&~C. As explained above for steps 606, 608, and 612, this expression simply assigns the threads that do not take the branch (m&~C) to the branch-not-taken target.

Referring back to step 602, if compiler 101 does not detect a branch at the end of the analysis zone at step 602, then the analysis zone has only one successor and the method proceeds to step 616. Since there is no branch, all threads from zone R pass to the successor zone. Thus, compiler 101 inserts instructions to merge or assign all threads from the current zone to the successor zone.
In addition, since there is only one successor, compiler 101 inserts instructions to assign a value to only one bit mask variable, namely the bit mask variable s for that successor.

At step 616, compiler 101 checks whether the successor of R has multiple predecessors. Compiler 101 assigns the value of s at step 618 or step 620, depending on whether the successor of R has multiple predecessors. The logic is similar to that of steps 606 and 612. In other words, if the successor of R has multiple predecessors, compiler 101 inserts instructions to merge the threads from R with the threads from the other predecessors. If the successor of R has no other predecessors, compiler 101 simply assigns the threads executing R to the successor.

At step 616, if the successor of the analysis zone does have multiple predecessors, method 600 proceeds to step 618 and inserts instructions to merge the threads executing the analysis zone with the threads waiting to execute the successor zone. At step 618, since the successor has multiple predecessors, compiler 101 inserts instructions that use a bitwise OR operation to combine the threads waiting to execute the successor of the analysis zone, i.e., those indicated by the variable s, with the threads currently executing the analysis zone, i.e., those indicated by the variable m. Compiler 101 can implement step 618 by inserting instructions corresponding to the pseudo code s|=m.

If, at step 616, compiler 101 determines that the successor does not have multiple predecessors, method 600 proceeds to step 620. At step 620, since the successor has only one predecessor, zone R, there can be no other threads waiting to execute the successor, and all threads from R pass to the successor. Compiler 101 can implement step 620 by inserting instructions corresponding to the expression s=m. After compiler 101 has completed steps 612, 614, 618, or 620, method 600 terminates at step 622. The runtime effect of the instructions that method 600 inserts is summarized in the sketch below.
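The following C sketch mirrors the pseudo code expressions s|=m&C, s=m&C, n|=m&~C, n=m&~C, s|=m, and s=m described above. It is a conceptual model only, not the compiler's actual output; the variable names follow the symbols used in this disclosure.

/* Conceptual model of the check-block mask updates emitted by method
 * 600. m is the current-zone mask, C the branch condition mask, s and
 * n the masks of the branch-taken and branch-not-taken targets. */
typedef unsigned mask_t;

static void update_masks_for_branch(mask_t m, mask_t C,
                                    mask_t *s, int s_many_preds,
                                    mask_t *n, int n_many_preds)
{
    if (s_many_preds) *s |= m & C;   /* step 606: merge taken threads  */
    else              *s  = m & C;   /* step 608: assign taken threads */
    if (n_many_preds) *n |= m & ~C;  /* step 612: merge not-taken      */
    else              *n  = m & ~C;  /* step 614: assign not-taken     */
}

static void update_mask_no_branch(mask_t m, mask_t *s, int s_many_preds)
{
    if (s_many_preds) *s |= m;       /* step 618: merge with waiters   */
    else              *s  = m;       /* step 620: assign all threads   */
}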
For exemplary purposes, the method 600 of Figure 6 is now applied to zone 2 502-2 shown in the control flow graph of Figure 5A, with reference to both Figures 5A and 6. Exemplary pseudo code expressions are used to help illustrate the steps that compiler 101 implements.

At step 602, compiler 101 checks zone 2 502-2 to see whether it ends with a branch. As can be seen from arrows 506-3 and 506-4, zone 2 502-2 does end with a branch. Therefore, compiler 101 proceeds to step 604. At step 604, compiler 101 examines the branch-taken target of zone 2 502-2 to see whether it has multiple predecessors. The branch-taken target of zone 2 502-2 is zone 4 502-4 (for brevity, in Figure 5A, all branch targets on the left side of a zone are considered branch-taken targets, and all branch targets on the right side of a zone are considered branch-not-taken targets). As can be seen, the branch-taken target, zone 4 502-4, has only one predecessor, namely zone 2 502-2. Therefore, compiler 101 proceeds to step 608.

At step 608, compiler 101 inserts instructions corresponding to the pseudo code s=m&C into check block 510-2 of zone 2 502-2. This statement sets the assertion mask variable of zone 4 502-4, here denoted "s," equal to the bitwise AND of the assertion mask for zone 2 502-2 and the branch condition at the end of zone 2 502-2. Logically, this means that all threads that take the branch at the end of zone 2 502-2 flow to zone 4 502-4.

At step 610, compiler 101 checks zone 5 502-5 to see whether it has multiple predecessors. Since both zone 3 502-3 and zone 2 502-2 are predecessors of zone 5 502-5, zone 5 502-5 does have multiple predecessors. Compiler 101 proceeds to step 612 and inserts instructions corresponding to the pseudo code n|=m&~C. These instructions set the assertion mask variable of zone 5 502-5, here denoted "n," equal to the bitwise OR of the previous value of "n" with the result of the bitwise AND of the assertion mask for zone 2 502-2, i.e., "m," and the bitwise NOT of the branch condition C. Logically, this corresponds to merging the threads already waiting to execute zone 5 502-5 with the threads passing from zone 2 502-2 to zone 5 502-5.

Compiler 101 then proceeds to step 622 and ends the method illustrated in Figure 6. The instructions inserted by compiler 101 into check block 510-2 of zone 2 502-2 in this example are the following:

s=m&C
n|=m&~C

In accordance with these instructions, check block 510-2 updates the assertion mask variables for zones 4 and 5 when the program represented by control flow graph 500 runs. These instructions are placed in the mask update instructions 530 of check block 510-2 shown in Figures 5A and 5C.

Figure 7 is a flow diagram of method steps for determining the check and conditional branch instructions described with respect to steps 410 and 412 of Figure 4, in accordance with one embodiment of the present invention. Although the method steps are described in conjunction with Figures 1-3 and 5A-5C, those skilled in the art will appreciate that any system that implements the method steps, in any order, is within the scope of the invention.

The method 700 depicted in Figure 7 determines the check and conditional branch instructions for a single zone, which is the zone being analyzed by compiler 101, also referred to as the analysis zone or zone R. To determine the check and conditional branch instructions for each zone in the control flow graph, the method of Figure 7 is repeated for all such zones.

To avoid confusion, a zone other than R analyzed by the compiler in the steps shown in Figure 7 is referred to herein as a "thread boundary zone" or by the symbol "[Rx]," and the assertion mask variable for the thread boundary zone is called "[x]," where x can be replaced by the reference number of the zone in Figure 5A.

As shown, method 700 begins at step 701, where compiler 101 selects an analysis zone from a control flow graph (such as control flow graph 500 depicted in Figure 5A). Compiler 101 already knows the thread boundary for the analysis zone because step 404 of Figure 4 has been performed. At step 702, compiler 101 determines the thread boundary zone with the highest priority among the zones of the analysis zone's thread boundary.

At step 704, if the assertion mask variable for the highest-priority thread boundary zone indicates that threads are waiting to execute in that zone, compiler 101 inserts an instruction to branch to that thread boundary zone. Compiler 101 can use instructions corresponding to the pseudo code if[x] goto[Rx] for step 704. Conceptually, this states: "if any threads are waiting in the thread boundary zone, transfer program control to that thread boundary zone."
The statement "if[x]" can be implemented by taking all bits in the variable [x] by bit OR, if there is any 1 in [x], then this produces 1 (true), and if in [x] If there is no 1, then this produces 0 (false).At step 706, compiler 101 looks to see if there are any regions in the thread boundary that are still not analyzed. If so, the method 700 proceeds to step 708, otherwise the method ends at step 710.For purposes of example, the method illustrated in Figure 7 is now applied to zone 2502-2 shown in control flow diagram 500 of Figure 5A, and with reference to Figures 5A and 7. Exemplary pseudo code expressions are used to help illustrate the steps employed by compiler 101.For step 702, compiler 101 examines the thread boundaries of region 2502-2 in the exemplary control flow graph 500 shown in Figure 5A. The boundary of zone 2502-2 includes zone 3503-3, zone 4502-4, and zone 5502-5. In addition to this, the zone with the highest priority is zone 3502-3, which has priority 6 504-3.The compiler 101 proceeds to step 704 and inserts a check and branch instruction at check box 510 at the end of the area 2502-2. In the exemplary control flow diagram 500 shown in Fig. 5, since the zone 3 has the highest priority, an instruction corresponding to the zone 3 is inserted. The corresponding pseudo code is "if[3]goto [R3]".Next, at step 706, compiler 101 looks to see if any threads exist in the thread boundaries that are still not analyzed. In the exemplary control flow diagram 500, there are still two regions in the unconstrained, boundary 5502-5 thread boundary, two regions being regions 4502-4 and 5502-5. In addition to this, zone 5502-5 has the highest priority. The method proceeds to step 704 where compiler 101 inserts the corresponding if and goto instructions: if[5]goto[R5]. The method proceeds to step 706 and determines that there is still one thread in the thread boundary. Thereafter, the method returns to step 704 and the instruction if[4]goto[R4] is inserted. Finally, because there are no regions left in the thread boundary, the method proceeds to step 710 and ends the analysis of the analysis region.The pseudo code generated from this process for the exemplary control flow graph 500 is as follows:If[3]goto[R3]If[5]goto[R5]If[4]goto[R4]As can be seen, when the program represented by the control flow graph 500 is running, check box 510-2 first checks to see if there is a waiting thread in zone 3, and if so, goes to zone 3; then checks in zone 5 Whether there is a waiting thread, and if so, goes to area 5; and then checks if there is a waiting thread in area 4, and if so, goes to area 4. The execution priority of the zone is enforced by sorting the statements in order of priority.The last "if" statement can be removed. This is because if there are no waiting threads in all other areas examined, then control flows to the last remaining area. 
Furthermore, if all successors of R have a higher priority than all other zones in the thread boundary of R, then control transfers directly to one of the successors of R, and if and goto statements are needed only for the zones in the thread boundary that are not successors of R.

It should be understood that the pseudo code referenced in Figures 6 and 7 conceptually represents computer program instructions, but does not necessarily represent instructions in any particular programming language or instruction set.

In summary, provided herein are compiler-implemented techniques for enforcing a priority ordering of the zones in a program structure. Using these techniques, the compiler modifies the program code by inserting instructions that implement a series of checks and branches in each zone. For each zone, the series of checks and branches determines where control should flow next. The checks and branches are ordered by priority, which helps enforce the priority ordering. The compiler also inserts instructions to update the assertion mask variable for each zone to ensure that the correct threads are executed in each zone. The instructions that update the assertion mask variables likewise help enforce the priority order of execution.

An advantage of the techniques provided herein is that the zones are executed in priority order. An additional advantage of the techniques disclosed herein is that diverging threads are reconverged early. Another advantage is that no special hardware support is required, because the techniques are implemented by the compiler.

One embodiment of the invention can be implemented as a program product for use with a computer system. The programs of the program product define the functions of the embodiments, including the methods described herein, and can be embodied on a variety of computer-readable storage media. Exemplary computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer, such as a compact disc read-only memory (CD-ROM) disk readable by a CD-ROM drive, flash memory, read-only memory (ROM) chips, or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive, or any type of solid-state random-access semiconductor memory) on which alterable information is stored.

The invention has been described above with reference to specific embodiments. However, it will be understood by those skilled in the art that various modifications and changes may be made without departing from the spirit and scope of the invention as set forth in the appended claims. Accordingly, the foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Therefore, the scope of the embodiments of the invention is set forth in the claims that follow. |
To provide an architecture that improves the error handling overhead of a memory device.

SOLUTION: A memory device that performs internal ECC (error checking and correction) treats an N-bit channel as two N/2-bit channels for the application of ECC. Each N/2-bit portion of the ECC can be separately correctable when treated as two N/2-bit portions. The memory device includes additional hardware for the application of ECC to the channel as two subchannels. The memory device includes an additional subarray to store ECC bits for internal ECC, to enable the application of ECC to the two subchannels of the N-bit channel. The memory device includes an additional driver to access the additional subarray when applied.

SELECTED DRAWING: Figure 1 |
1. A memory device comprising: a hardware interface to couple to N data signal lines to exchange data with a host; and error checking and correction (ECC) hardware to apply ECC, within the memory device, to the N data bits as two groups of N/2 bits.

2. The memory device according to claim 1, wherein N is equal to 64.

3. The memory device according to claim 2, wherein the memory device includes hardware to prefetch 128 bits of data and transfer only 64 bits of the data to an input/output (I/O) circuit of the hardware interface.

4. The memory device according to any one of claims 1 to 3, further comprising a memory array, wherein the memory array includes a plurality of subarrays to provide the N bits, and wherein the memory array includes an additional subarray, beyond those for the N bits, to store additional ECC data.

5. The memory device according to claim 4, wherein the memory device includes drivers associated with the subarrays, and wherein the memory device includes an additional driver to control access to the additional subarray.

6. The memory device according to claim 5, wherein the hardware interface is to couple to a x4 or x8 data bus, and wherein the additional subarray and the additional driver are applied only when the hardware interface is coupled to the x4 data bus.

7. The memory device according to any one of claims 1 to 6, wherein the N data bits comprise a channel, wherein the ECC hardware treats the channel as two subchannels each having N/2 data bits, and wherein the host treats the channel as an N-bit channel for system-level ECC.

8. The memory device according to any one of claims 1 to 7, wherein the memory device comprises a synchronous dynamic random access memory (SDRAM) device compatible with a double data rate (DDR) standard.

9. A system having memory, the system comprising: a plurality of memory devices, each memory device having a hardware interface to couple to N data signal lines to exchange data with a host, and error checking and correction (ECC) hardware to apply ECC, within the memory device, to the N data bits as two groups of N/2 bits; and a memory controller coupled to the memory devices, wherein the memory controller provides system-level ECC for data bits received from the memory devices.

10. The system according to claim 9, wherein the memory device includes a memory array, the memory array includes a plurality of subarrays to provide the N bits, and the memory array includes an additional subarray, beyond those for the N bits, to store additional ECC data.

11. The system according to claim 10, wherein the memory device includes drivers associated with the subarrays, the memory device including an additional driver to control access to the additional subarray.

12. The system according to any one of claims 9 to 11, wherein the N data bits comprise a channel, the ECC hardware treats the channel as two subchannels each having N/2 bits, and the host treats the channel as an N-bit channel for system-level ECC.

13. The system according to any one of claims 9 to 12, further comprising one or more of: a host processor device coupled to the memory controller; a display communicatively coupled to the host processor; a network interface communicatively coupled to the host processor; or a battery to power the system.
14. A method for a memory device, comprising: receiving a memory access command from a memory controller, wherein the memory device is coupled to the memory controller by N data signal lines; and, in response to the memory access command, applying error checking and correction (ECC) with hardware internal to the memory device, including applying the ECC to the N data bits as two groups of N/2 bits.

15. The method according to claim 14, wherein applying the ECC to the N data bits as two groups of N/2 bits comprises accessing the N bits from a plurality of subarrays, and driving the N bits with more than N/2 + 1 drivers.

16. The method according to claim 14 or 15, wherein the N data bits comprise a channel, wherein applying the ECC to the N data bits as two groups of N/2 bits comprises applying the ECC to the channel as two subchannels each having N/2 bits, and wherein the memory controller applies ECC to the channel as an N-bit channel for system-level ECC.

17. The method according to any one of claims 14 to 16, wherein N is equal to 64, the method further comprising prefetching 128 bits of data and transferring only 64 bits of the data to the memory controller. |
Memory wordline isolation for improved reliability, availability, and scalability (RAS)

Priority: This application is a non-provisional application and claims the benefit of priority of US Provisional Application No. 62/927,116, filed on October 28, 2019.

The description generally relates to memory devices, and more specifically to an architecture for improving RAS (reliability, availability, and scalability) in error handling.

The overhead required to handle errors in memory channels continues to increase as narrower channels are used. This error handling overhead can be referred to as RAS (reliability, availability, serviceability) overhead, referring to the fact that error handling is used to meet RAS expectations. RAS expectations often include an expectation of full SDDC (single device data correction) capability, which can correct errors due to the failure of a whole device.

Legacy SDDC operation uses 8 ECC (error checking and correction, also often referred to as error correction coding) bits for every 64 data bits, an overhead of 12.5%. Newer memory systems with narrower channels still require 8 ECC bits for SDDC operation, but with 32-bit channels the overhead is 25%.

The following description includes discussion of figures having illustrations given by way of example of implementations. The drawings should be understood as examples and not as limitations. As used herein, references to one or more examples are to be understood as describing a particular feature, structure, or characteristic included in at least one implementation of the invention. Phrases such as "in one example" or "in an alternative example" appearing herein provide examples of implementations of the invention and do not necessarily all refer to the same implementation. However, they are also not necessarily mutually exclusive.

FIG. 1 is a block diagram of an example of a memory architecture having an additional driver for separating data relative to a conventional architecture.
FIG. 2 is a block diagram of an example data architecture for the memory architecture of FIG. 1.
FIG. 3 is a flow chart of an example process for applying ECC for a read command in a system with ECC subchannel isolation.
FIG. 4 is a flow chart of an example process for applying ECC for a write command in a system with ECC subchannel isolation.
FIG. 5 is a block diagram of an example of an on-die error checking and correction (ECC) subsystem for implementing ECC subchannel isolation.
FIG. 6 is a block diagram of an example memory subsystem in which ECC subchannel isolation can be implemented.
FIG. 7 is a block diagram of an example computing system in which ECC subchannel isolation can be implemented.
FIG. 8 is a block diagram of an example of a mobile device in which ECC subchannel isolation can be implemented.

Descriptions of certain details and implementations follow, including non-limiting descriptions of the figures, which may depict some or all examples, as well as other possible implementations.

As described herein, a memory device is divided into multiple separate parts for ECC (error checking and correction) isolation for internal or on-die ECC application. The different parts can still be treated by the memory controller at the system level as one segment for external or system-level ECC. Thus, the internal ECC can correct the two subchannels individually, while the system-level ECC corrects the entire channel. Internal ECC isolation allows memory devices to use less overhead to provide the same level of ECC.
The additional ECC bits made available can be used as additional metadata.

A memory device that performs internal ECC can treat an N-bit channel as two N/2-bit channels for the application of ECC. N-bit channel division refers to treating the data bits for the N signal lines of the channel as two groups of N/2 bits each, or as two portions of N/2 signal lines. The memory device applies ECC with ECC data corresponding to the N bits of the channel. Because ECC for an N/2-bit channel is simpler than ECC for N bits, each N/2-bit portion may be individually correctable when treated as two N/2-bit portions. The memory device can include additional hardware for applying ECC to the channel as two subchannels. For example, the memory device can include an additional subarray to store ECC bits for internal ECC, which allows ECC to be applied to the two subchannels of the N-bit channel. The memory device can include additional drivers to access the additional subarrays when applied.

For example, a x4 memory device can internally be treated as two x2 devices. Error correction for x2 devices requires less RAS (reliability, availability, and serviceability) overhead than for x4 devices. The RAS code can be targeted at common memory error types, such as a single bit, a single subword line (SWL) driver, or one arm of an SWL driver. References to subword lines can refer to an architecture that subdivides the wordline to reduce driver load, sometimes referred to as the local wordline (LWL). Experience has shown that whole-device (e.g., whole-die) failures are very rare relative to the failures listed above. The architecture provided enables the correction of whole-device failures while still enabling the correction of the more common failures at low cost.

The SWL or LWL can be considered part of the master wordline (MWL) or global wordline (GWL), depending on the terminology used. Generally, the MWL/GWL is divided into smaller chunks within the memory device until the drivers can meet the latency requirements for the memory device. Device partitioning can vary by memory type, memory manufacturer, or driver design.

The RAS overhead for full SDDC (single device data correction) continues to increase as memory systems use narrower channels or chipkill solutions. The RAS overhead can alternatively be referred to as ECC (error checking and correction) overhead. The expression RAS overhead refers to the overall goal for error correction, while the expression ECC overhead refers more specifically to the correction mechanism for achieving the desired RAS goal.

The ECC isolation described can reduce ECC overhead by half relative to internal ECC and reduce DIMM (dual inline memory module) power. A memory device that implements ECC isolation can be described as having failure modes that are isolated to a limited number of I/O (input/output) pins. For example, a DRAM device in a x4 implementation can be limited to failure modes that are isolated to two DQs, instead of having failures across all four DQ signal lines. In one example, different implementations can have different isolation granularities from device to device. The isolation granularity can be defined by a standard.

The channel width for DDR5 is half that of DDR4 (e.g., 32 bits versus 64 bits), allowing the same internal cycle time to be maintained in the memory core while transferring externally at higher speeds. Changing the internal core cycle time is significantly more costly than adjusting the I/O cycle time.
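The split of one N-bit channel into two independently correctable N/2-bit subchannels, introduced above, can be sketched as follows. The sketch assumes, purely for illustration, a generic single-error-correcting encoder per 32-bit half; the disclosure does not mandate a particular code, and the function ecc_encode32() is a hypothetical placeholder.

#include <stdint.h>

/* Hypothetical per-half ECC: each 32-bit half of a 64-bit channel gets
 * its own check bits, so a fault confined to one half is corrected
 * independently of the other half. */
extern uint8_t ecc_encode32(uint32_t data);  /* placeholder SEC encoder */

struct split_codeword {
    uint32_t half[2];   /* two N/2 = 32-bit subchannels               */
    uint8_t  ecc[2];    /* independent check bits per subchannel      */
};

static struct split_codeword encode_split(uint64_t channel_data)
{
    struct split_codeword cw;
    cw.half[0] = (uint32_t)(channel_data & 0xFFFFFFFFu);
    cw.half[1] = (uint32_t)(channel_data >> 32);
    cw.ecc[0]  = ecc_encode32(cw.half[0]);
    cw.ecc[1]  = ecc_encode32(cw.half[1]);
    return cw;
}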
DDR5 has a burst length of BL18 and transmits 64 data bits plus ECC bits for each device. The entire interface is a 32-bit wide channel divided between 4 devices for a x8 implementation or 8 devices for a x4 implementation.

In general, to perform SDDC, the system requires twice as many ECC bits as the device interface width. Therefore, a x4 implementation requires 8 ECC bits on the channel and a x8 interface requires 16 ECC bits. SDDC can be impractical for x8 but is more manageable for x4 implementations. As the channel goes from 64b data + 8b ECC to 32b data + 8b ECC, additional ECC devices are needed to meet the same RAS performance as the legacy system.

FIG. 1 is a block diagram of an example of a memory architecture with an additional driver for separating data relative to the conventional architecture. Memory segment 102 represents a portion of a memory array for providing data to the data channel to which the memory device is connected. A DDR5 (double data rate version 5) data channel has a 32b data bus width with 16b for ECC, which results in a RAS/ECC overhead of 25%. A DDR4 (double data rate version 4) data channel, by contrast, has a 64b data bus width with 8 bits for ECC, which results in an overhead of 12.5%.

Segment 102 may include subword line (SWL) drivers, each driving 8 bits of data on each side. As shown, the dashed box indicates 16 bits of data driven by the driver (DRVR) in the center of the bits. The bits on the left side of the driver can be considered the driver's left subarray or left arm. Similarly, the bits to the right of the driver can be considered the driver's right subarray or right arm. It will be understood that right and left are relative terms and refer only to the orientation of the figure; in a practical implementation, the bits considered left or right can be swapped. Therefore, the description of left and right arms refers only to the fact that a driver located between groups of bits or memory cells drives bits physically located on either side of the driver circuitry, which reduces the length of the signal lines that must be driven between the driver circuitry and the bit cells of the memory array. Bits are driven in each direction from the active components of the driver. As shown, failure of one arm (an SWL arm failure) results in an 8b failure, as one arm fails and one subarray becomes inaccessible; a driver failure (an SWL driver failure) results in a 16b failure, as both arms become inaccessible. A small model of these failure spans follows below.

Segment 104 shows a channel similar to the channel for segment 102, but with additional drivers and additional subarrays. Segment 104 can be considered to have a separation between subchannels, being divided or partitioned into subchannels A and B. Each driver in segment 104 includes at least a left or a right subarray, and most drivers have both a left and a right subarray; the drivers at the ends may have only one arm. The central separation can be considered a logical separation and does not have to be a physical separation of the hardware. Thus, for example, isolation can refer to the fact that, for ECC purposes, one ECC circuit provides ECC for one subchannel and a separate ECC circuit provides ECC for the other subchannel. With isolated hardware, the memory can provide ECC protection for the entire channel as two separate subparts, simplifying the ECC overhead. Segment 104 thus provides isolation of the data fetched from the memory array.
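The failure granularity noted above for segment 102 can be made concrete with a small model. This C sketch is illustrative only and not part of the disclosure: a subword line driver drives an 8-bit arm on each side, so an arm fault corrupts 8 bits while a driver fault corrupts the full 16 bits it drives.

/* Illustrative failure-span model for the driver arrangement of
 * segment 102: each driver has a left and a right 8-bit arm. */
enum fault { FAULT_LEFT_ARM, FAULT_RIGHT_ARM, FAULT_DRIVER };

static int failed_bits(enum fault f)
{
    switch (f) {
    case FAULT_LEFT_ARM:  return 8;   /* one subarray inaccessible */
    case FAULT_RIGHT_ARM: return 8;
    case FAULT_DRIVER:    return 16;  /* both arms inaccessible    */
    }
    return 0;
}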
In one example, the additional drivers allow a single channel to be treated as two separate parts. Adding a driver can allow the data to be accessed in two separate parts for ECC operations. The additional drivers allow the channel to be subdivided for the purpose of applying internal ECC to smaller parts of the data, thus limiting errors to sub-parts of the overall memory interface. Detail 106 shows further details of a driver having a left arm on one side and a right arm on the other side: the right arm drives the right subarray, and the left arm drives the left subarray.

In one example, memory segment 104 can be part of a memory device with a common-die implementation. A common-die implementation refers to a memory device designed to be configurable as either a x4 or a x8 device. Such a device has internal logic to route the bits to the selected I/O pins. The internal logic can include control logic, along with hardware circuitry, to route bits to segments of the memory. In a common-die implementation, in one example, no additional driver is needed when ECC isolation is not used, for example in a x8 configuration of the device. The memory array can be designed to allow the selective use of the additional drivers for ECC isolation without significant waste of addressable memory space. Therefore, the additional subarrays can be used in other ways when the additional drivers are not activated. Alternatively, selected drivers can be designed as dual drivers for different implementations, driving only one arm instead of two.

FIG. 2 is a block diagram of an example data architecture for the memory architecture of FIG. 1. Figure 202 represents a core memory architecture, such as a DDR5 implementation. As shown, the memory core performs an internal prefetch of 128b and includes 8b of ECC for internal ECC. It will be understood that the x8 implementation uses all 128 bits of data, while the x4 implementation prefetches 128 bits but can use only 64 bits in any given memory access (e.g., read or write) operation. A x4 implementation refers to an implementation in which a memory device having the array of figure 202 includes an interface to four data signal lines or four bits of the data bus, which can be referred to as an implementation with M = 4, where M represents the number of signal lines. A x8 implementation refers to an implementation in which a memory device having the array of figure 202 includes an interface to eight data signal lines or eight bits of the data bus, which can be referred to as an implementation with M = 8.

In the x8 implementation, 128 bits can be exchanged with the host or associated memory controller for each memory access transaction, based on eight signal lines with a burst length of 16 (BL16), since 8 x 16 = 128. Thus, as shown, the upper half of the array and the lower half of the array each provide 64b of data. Which blocks form the upper half depends on the memory architecture and addressing structure and is not specifically indicated in figure 202: where the lightly shaded blocks of figure 202 represent the upper half, the unshaded blocks represent the lower half, and vice versa. The darkly shaded bits represent ECC bits. For the x4 implementation, 64 bits may be exchanged with the host or associated memory controller for each memory access transaction, based on four signal lines over BL16 (4 x 16 = 64). As shown, the 64b of data can be fetched from the lower half, from the upper half, or split between the upper and lower halves. Splitting between the halves implies utilizing only 4b of each 8b from a subarray.
In such an implementation, the ECC would not cover SWL driver or MWL failures.

Figure 204 exhibits an architecture similar to figure 202, but includes separation. Figure 204 specifically shows the application to a memory device 210 having four DQ (data) signal lines, with an I/O circuit 250 for interfacing with DQ[3:0]. The memory device 210 includes the memory array 220, shown with the prefetch 230 separated into a plurality of different subarrays.

Figure 204 can correspond to the memory segment 104 of FIG. 1. As shown, figure 204 includes a memory array that provides data for prefetching. The prefetch includes 128 bits of data, as in figure 202, and includes either 4 bits or 8 bits of ECC data for each separated portion. As shown, 8 bits of ECC data are provided for each subportion or subarray. In one example, only 4 bits of ECC are provided for each subportion. In one example, the additional ECC bits can be used for other purposes, such as directory information, two-level memory (2LM) metadata, data integrity features, or some other purpose. In another example, the ECC bits can be used for on-die single-bit error correction and are not forwarded to the host. In such an example, the data can be transferred to the host via BL16.

It will be understood that when not all of the prefetched data is used, all of the prefetched data is placed in the sense amplifiers and then only the selected data is used for the memory access. Thus, the prefetch 230 represents the data in the sense amplifiers, and the selection 240 represents the data from the sense amplifiers that is transmitted to the I/O for a read operation.

For example, the data is placed in the sense amplifiers, and the addressing then acts to select the particular portion of the data that is written for a write operation or read for a read operation. The addressing allows the data to be selected in any manner that makes sense for the architecture. In one example, the additional hardware for isolation in each bank (e.g., edge subword line drivers and other logic) may add only approximately 1-2 percent of die size.

In one example, part of the prefetched data is selected for the access operation. As shown, 4 of the 8 bits are selected from each subarray. Per-subarray selection allows the internal ECC operation to correct errors that occur in a driver as well as in a driver arm. A failure of an SWL arm or SWL driver will affect only two of the DQ bits (either the upper or the lower 2 DQ). Such a failure does not result in a loss of bank resources for the x4 implementation.

For write operations, the arrows in figure 204 can be reversed, where the selected data is received from the I/O circuit 250 and, at 230, is provided from the I/O circuit to the selected locations of the sense amplifier circuitry. Instead of being prefetched into the sense amplifiers as for a read, for write operations the sense amplifier array can drive the corresponding selected subarrays of memory array 220. As with the read operation, the addressing of the sense amplifier elements can determine what is written to the memory array.

In one example, the routing in the spine of the memory device 210 for a dedicated x4 device is 72b versus 136b for a common x4/x8 device, resulting in a 2-3% die savings. Implementation as a dedicated part can therefore offset the die area cost of the separation.

As shown in figure 204, the selected data and ECC bits can be routed to the I/O circuit 250 of memory device 210.
The arrows shown point downward, illustrating a read; for a write, it will be understood that the data comprises the selected bits written back through memory device 210 into memory array 220. In one example, 36 bits per separated plane are transferred via BL18 over each pair of two DQ signal lines. The figure of BL18 is only an example; in another example, the system transfers bits via BL16. The total for the device is 72 bits over the four DQ signal lines, treated internally as two x2 interfaces. Although not specifically shown, the I/O circuit can include an ECC circuit, or the ECC circuit can be located on the input/output path between the sense amplifiers and the I/O circuit 250. The result of the architecture of figure 204 is 8b of ECC per 64b of data available with BL18, a RAS overhead of 12.5%, similar to the legacy system.

It will be appreciated that the embodiments describe specific numbers of data and ECC bits for specific numbers of I/O signal lines. These specific examples are given by way of illustration and not limitation. In general, an N-bit channel can be subdivided into different parts for internal ECC, such as two N/2-bit subparts or subchannels. The N bits can be the total amount of data transferred over the burst length (e.g., 64 bits, counting pure data bits and excluding ECC bits). The subdivision can instead be considered in terms of interface width (e.g., a x4 channel treated as two separate x2 subchannels). Therefore, an N-bit channel generally refers to N data bits transmitted over M signal lines over a burst length. In one example, as an alternative reference, the channel interface refers to the M-bit interface on which the data bits are transmitted or received. Normally, the ECC bits apply to the entire payload of data bits received over all signal lines for the burst length, and references to applying ECC bits to a channel will generally be understood to refer to ECC bits covering all of the bits.

The subdivision can treat the bits, or the interface to the data bus, as multiple separated parts, which can refer to separation for the purpose of performing ECC operations. System-level ECC, implemented by an associated memory controller or host, can treat all of the bits or signal lines as a single channel instead of treating them as two separate channels as shown in figure 204. Thus, for example, a memory device can process N bits as two N/2-bit parts with separate ECC, while at the same time the host processes the N bits as N bits for system-level ECC. Such an approach allows corrections within individual memory devices of the channel and allows the data to be reconstructed with less ECC overhead. The separation allows the memory device to isolate errors more specifically for ECC correction purposes. As another example, the memory device can process the N signal lines of the data bus interface as two parts of N/2 signal lines with separate ECC, while at the same time the host treats the N signal lines as an N-bit channel for system-level ECC. Implementations may vary with the sizes of the different interfaces and internal arrays, but the result is the ability to perform internal ECC with higher performance while reducing ECC overhead at the system level. System-level ECC refers to ECC provided by the host or associated memory controller, which performs ECC operations on data from multiple memory devices in parallel.
FIG. 3 is a flow chart of an example process for applying ECC for a read command in a system with ECC subchannel isolation. Process 300 provides an example of performing a read operation with a memory device having ECC that includes subchannel isolation. Process 300 can be implemented, for example, by the memory device 210 of figure 204 of FIG. 2.

The memory device receives a read command from the host at block 302. In one example, the memory device prefetches data at block 304, with an amount of data equal to or greater than the N bits to be read. In one example, at block 306, the memory device selects a portion of the prefetched data for the read operation, where the amount of data selected is N bits.

In one example, the memory device can be configured either to apply subchannel isolation for ECC or not. At block 308, the system can determine the configuration of the memory device. If subchannel isolation is not applied, at the NO branch of block 310, the memory device can perform ECC on the N bits as one N-bit channel at block 312. If subchannel isolation is applied, at the YES branch of block 310, the memory device can perform ECC on the N bits at block 314 as two N/2-bit channels. As described above, the application of ECC to the N/2 bits can apply to the bits themselves in total or to portions of the bits of the data bus.

When ECC has been applied as either one channel or two subchannels, the memory device provides the data to the I/O circuit at block 316 to send the data to the host. In one example, at block 318, the host applies system ECC to the N bits of data as a single N-bit channel.

FIG. 4 is a flow chart of an example process for applying ECC for a write command in a system with ECC subchannel isolation. Process 400 provides an example of performing a write operation with a memory device having ECC that includes subchannel isolation. Process 400 can be implemented, for example, by the memory device 210 of figure 204 of FIG. 2.

In one example, the host or associated memory controller applies ECC to the N bits of data addressed to the memory device at block 402. At block 404, the host sends a write command, which is received by the memory device. At block 406, the memory device receives the N bits of data from the host associated with the write command, either with the write command or with some time delay after the command.

In one example, the memory device can be configured either to apply subchannel isolation for ECC or not. The system can determine the configuration of the memory device at block 408. If subchannel isolation is not applied, at the NO branch of block 410, the memory device can compute ECC for the N bits of data as one N-bit channel at block 412. If subchannel isolation is applied, at the YES branch of block 410, the memory device can compute ECC for the N bits of data at block 414 as two N/2-bit channels. As described above, the application of ECC to the N/2 bits can apply to the bits themselves in total or to portions of the bits of the data bus. When ECC has been computed as either one channel or two subchannels, the memory device stores the data and associated ECC bits in the memory array of the memory device at block 418.
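Processes 300 and 400 share the same configuration decision, which the following C sketch summarizes. The function names and the configuration flag are hypothetical; ecc_check_correct() stands in for whatever internal ECC engine the device implements, and N = 64 is assumed for illustration.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical internal ECC engine applied to one codeword. */
extern void ecc_check_correct(uint32_t *data, int nbits);

/* Mirror of blocks 308-314 / 408-414: apply internal ECC either to the
 * N bits as one channel or as two N/2-bit subchannels. */
static void apply_internal_ecc(uint32_t data[2], bool subchannel_isolation)
{
    if (subchannel_isolation) {
        ecc_check_correct(&data[0], 32);  /* subchannel A        */
        ecc_check_correct(&data[1], 32);  /* subchannel B        */
    } else {
        ecc_check_correct(data, 64);      /* whole N-bit channel */
    }
}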
Host 510 includes a memory controller or equivalent or alternative circuit or component that manages access to memory 520. Host 510 performs external ECC on data read from memory 520. Memory 520 implements on-die ECC to check and correct data prior to sending the data to host 510.

System 500 illustrates write path 532 in memory 520, which represents the path for data written from host 510 into memory 520. Host 510 provides data 542 to memory 520 for writing to the memory array. In one example, memory 520 generates check bits 544 with check bit generator 522, to store in memory with the data. The check bits 544, which can be referred to as ECC bits, can enable memory 520 to correct an error that might occur in writing to and reading from the memory array. Data 542 and check bits 544 can be included as codeword input 546, which is written to the memory resources.

Read path 534 represents the path for data read from memory 520 to host 510. In one example, at least certain hardware components of write path 532 and read path 534 are the same hardware. In one example, memory 520 fetches codeword output 552 in response to a read command from host 510. The codeword can include data 554 and check bits 556. The data 554 and check bits 556 can correspond, respectively, to the data 542 and check bits 544 written in write path 532. Thus, a read can access both the data and the ECC bits. It will be understood that error correction in read path 534 can include applying an XOR (exclusive OR) tree based on the corresponding H matrix to detect an error and selectively correct it (in the case of a single-bit error). As understood in the art, an H matrix refers to a Hamming code parity check matrix, which shows how linear combinations of digits of the codeword equal zero. Thus, the rows of the H matrix identify the coefficients of the parity check equations that must be satisfied for the components or digits that are part of the codeword. In one example, memory 520 includes syndrome decode 524, which enables the memory to apply check bits 556 to data 554 to detect errors in the read data. Syndrome decode 524 can generate syndrome 558 for use in generating appropriate error information for the read data. Data 554 can also be forwarded to error correction 528 for correction of a detected error.

In one example, syndrome decode 524 passes syndrome 558 to syndrome generator 526 to generate an error vector. In one example, check bit generator 522 and syndrome generator 526 are fully specified by the corresponding H matrix for the memory device. In one example, when there is no error in the read data (e.g., zero syndrome 558), syndrome generator 526 generates a no-error signal 562. In one example, when there are multiple errors in the read data (e.g., non-zero syndrome 558 that does not match any of the columns of the corresponding H matrix), syndrome generator 526 generates a DUE (detected uncorrected error) signal 564, which indicates a detected, uncorrected error. The DUE signal 564 can indicate a multi-bit error that memory 520 was not able to correct by internal ECC.

In one example, when there is a single-bit error (e.g., non-zero syndrome 558 that matches one of the columns of the corresponding H matrix), syndrome generator 526 generates a CE (corrected error) signal with error location 560, which is a corrected error indication for error correction logic 528.
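As a toy illustration of the syndrome decode just described (using a small Hsiao-style SECDED code with odd-weight H-matrix columns, not the actual H matrix of the device), the three outcomes can be sketched as follows: a zero syndrome indicates no error, a syndrome matching a column of the H matrix locates a single-bit error for correction (CE), and any other non-zero syndrome is flagged as a detected uncorrected error (DUE).

    # Toy SECDED sketch: 8-bit codewords, one odd-weight 4-bit H column per bit.
    # A two-bit error XORs two odd-weight columns into an even-weight syndrome
    # that matches no column, so it is detected but not corrected (DUE).
    H_COLUMNS = [1, 2, 4, 8, 7, 11, 13, 14]

    def syndrome(codeword):
        s = 0
        for i, bit in enumerate(codeword):   # XOR tree over the columns of H
            if bit:
                s ^= H_COLUMNS[i]
        return s

    def decode(codeword):
        s = syndrome(codeword)
        if s == 0:
            return "no error", codeword
        if s in H_COLUMNS:                    # single-bit error: CE with location
            fixed = list(codeword)
            fixed[H_COLUMNS.index(s)] ^= 1    # correct the identified position
            return "CE", fixed
        return "DUE", codeword                # multi-bit error: uncorrectable

    word = [0] * 8                            # the all-zero codeword is valid
    word[5] ^= 1                              # inject a single-bit error
    print(decode(word))                       # ('CE', [0, 0, 0, 0, 0, 0, 0, 0])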
Error correction 528 can apply the correction to the identified location in data 554 to generate corrected data 566 for output to host 510. In one example, error correction 528 also generates check bits 568, which include the check bits for the read data. The check bits 568 can be considered an error vector that indicates an error state of the read data being sent to host 510. It will be appreciated that a zero-syndrome (no error 562) condition, and a corrected SBE resulting in corrected data 566, will have the same check bits 568, indicating no error to host 510. Thus, check bits 568 convey only multi-bit errors, and not information about SBEs corrected within memory 520. In one example, system 500 writes corrected data back to the memory array.

In one example, system 500 includes an internal ECC write path 532 and an internal ECC read path 534 for each portion of the array. In accordance with a system compatible with diagram 202, memory device 520 can include one set of paths for half of its I/O pins, and a second set of paths for the other half of its I/O pins. Thus, memory 520 can perform ECC separation with hardware resources to separate the application of ECC to the individual subportions of the total data provided by the memory device.

FIG. 6 is a block diagram of an example of a memory subsystem in which ECC subchannel separation can be implemented. System 600 includes a processor and elements of a memory subsystem in a computing device. System 600 provides an example of a system into which a system compatible with diagram 202 can be incorporated.

In one example, memory device 640 includes ECC separation 680 in memory array 660. ECC separation 680 represents hardware and logic to implement ECC separation with channel subdivision within the memory device, in accordance with any example herein. The ECC separation includes additional hardware resources to provide more driver circuitry to manage the multiple portions of the memory array as separate subchannels for internal ECC operations. ECC separation 680 can control the application of ECC by an on-die ECC circuit.

Processor 610 represents a processing unit of a computing platform that can execute an operating system (OS) and applications, which can collectively be referred to as the host or the user of the memory. The OS and applications execute operations that result in memory accesses. Processor 610 can include one or more separate processors. Each separate processor can include a single processing unit, a multicore processing unit, or a combination. The processing unit can be a primary processor such as a CPU (central processing unit), a peripheral processor such as a GPU (graphics processing unit), or a combination. Memory accesses can also be initiated by devices such as a network controller or hard disk controller. Such devices can be integrated with the processor in some systems, or attached to the processor via a bus (e.g., PCI Express), or a combination. System 600 can be implemented as an SOC (system on a chip), or with standalone components.

Reference to memory devices can apply to different memory types. Memory devices often refer to volatile memory technologies. Volatile memory is memory whose state (and therefore the data stored in it) is indeterminate if power is interrupted to the device. Non-volatile memory refers to memory whose state is determinate even if power is interrupted to the device.
Dynamic volatile memory requires refreshing the data stored in the device to maintain state. One example of dynamic volatile memory includes DRAM (dynamic random access memory), or some variant such as synchronous DRAM (SDRAM). A memory subsystem as described herein can be compatible with a number of memory technologies, such as DDR4 (double data rate (DDR) version 4, JESD79-4, originally published in September 2012 by JEDEC), LPDDR4 (low power DDR version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (high bandwidth memory DRAM, JESD235A, originally published by JEDEC in November 2015), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (LPDDR version 5, JESD209-5, originally published by JEDEC in February 2019), HBM2 (HBM version 2, currently in discussion by JEDEC), or others, or combinations of memory technologies, as well as technologies based on derivatives or extensions of such specifications.

In one example, in addition to, or alternatively to, volatile memory, reference to memory devices can refer to a non-volatile memory device whose state is determinate even if power is interrupted to the device. In one example, the non-volatile memory device is a block addressable memory device, such as NAND or NOR technologies. Thus, a memory device can also include future generation non-volatile devices, such as three-dimensional crosspoint memory devices, or other byte addressable non-volatile memory devices. A memory device can include a non-volatile, byte addressable medium that stores data based on a resistive state of the memory cell, or a phase of the memory cell. In one example, the memory device can use chalcogenide phase change material (e.g., chalcogenide glass). In one example, the memory device can be or include multi-threshold level NAND flash memory, NOR flash memory, single- or multi-level phase change memory (PCM) or phase change memory with a switch (PCMS), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM) memory incorporating memristor technology, or spin transfer torque (STT)-MRAM, or a combination of any of the above, or other memory.

Memory controller 620 represents one or more memory controller circuits or devices for system 600. Memory controller 620 represents control logic that generates memory access commands in response to the execution of operations by processor 610. Memory controller 620 accesses one or more memory devices 640. Memory devices 640 can be DRAM devices in accordance with any referred to above. In one example, memory devices 640 are organized and managed as different channels, where each channel couples to buses and signal lines that couple to multiple memory devices in parallel. Each channel is independently operable. Thus, each channel is independently accessed and controlled, and the timing, data transfer, command and address exchanges, and other operations are separate for each channel. Coupling can refer to electrical coupling, communicative coupling, physical coupling, or a combination of these. Physical coupling can include direct contact. Electrical coupling includes an interface or interconnection that allows electrical flow between components, or allows signaling between components, or both.
Communicative coupling includes connections, including wired or wireless, that enable components to exchange data.

In one example, settings for each channel are controlled by separate mode registers or other register settings. In one example, each memory controller 620 manages a separate memory channel, although system 600 can be configured to have multiple channels managed by a single controller, or to have multiple controllers on a single channel. In one example, memory controller 620 is part of host processor 610, such as logic implemented on the same die or in the same package space as the processor.

Memory controller 620 includes I/O interface logic 622 to couple to a memory bus, such as a memory channel as referred to above. I/O interface logic 622 (as well as I/O interface logic 642 of memory device 640) can include pins, pads, connectors, signal lines, traces, or wires, or other hardware to connect the devices, or a combination of these. I/O interface logic 622 can include a hardware interface. As illustrated, I/O interface logic 622 includes at least drivers/transceivers for signal lines. Commonly, wires within an integrated circuit interface couple with a pad, pin, or connector to interface signal lines or traces or other wires between devices. I/O interface logic 622 can include drivers, receivers, transceivers, or termination, or other circuitry or combinations of circuitry to exchange signals on the signal lines between the devices. The exchange of signals includes at least one of transmit or receive. While shown as coupling I/O 622 from memory controller 620 to I/O 642 of memory device 640, it will be understood that in an implementation of system 600 where groups of memory devices 640 are accessed in parallel, multiple memory devices can include I/O interfaces to the same interface of memory controller 620. In an implementation of system 600 including one or more memory modules 670, I/O 642 can include interface hardware of the memory module in addition to interface hardware on the memory devices themselves. Other memory controllers 620 will include separate interfaces to other memory devices 640.

The bus between memory controller 620 and memory devices 640 can be implemented as multiple signal lines coupling memory controller 620 to memory devices 640. The bus will typically include at least clock (CLK) 632, command/address (CMD) 634, and write data (DQ) and read data (DQ) 636, and zero or more other signal lines 638. In one example, a bus or connection between memory controller 620 and memory can be referred to as a memory bus. In one example, the memory bus is a multi-drop bus. The signal lines for CMD can be referred to as a "C/A bus" (or ADD/CMD bus, or some other designation indicating the transfer of commands (C or CMD) and address (A or ADD) information), and the signal lines for write and read DQ can be referred to as a "data bus". In one example, independent channels have different clock signals, C/A buses, data buses, and other signal lines. Thus, system 600 can be considered to have multiple "buses," in the sense that an independent interface path can be considered a separate bus. It will be understood that in addition to the lines explicitly shown, a bus can include at least one of strobe signaling lines, alert lines, auxiliary lines, or other signal lines, or a combination.
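As a descriptive sketch only (the field names and default widths below are hypothetical, chosen for illustration), the signal groups of such a bus can be modeled as a simple structure:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MemoryChannelBus:
        clk_lines: int = 1                    # CLK 632: clock signal lines
        cmd_lines: int = 14                   # CMD 634: the "C/A bus" (width illustrative)
        dq_width: int = 8                     # DQ 636: data bus width, e.g., x4, x8, x16
        other: List[str] = field(             # other signal lines 638
            default_factory=lambda: ["DQS", "ALERT"])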
It will also be understood that serial bus technologies can be used for the connection between memory controller 620 and memory devices 640. An example of a serial bus technology is 8B10B encoding and transmission of high-speed data with an embedded clock over a single differential pair of signals in each direction. In one example, CMD 634 represents signal lines shared in parallel with multiple memory devices. In one example, multiple memory devices share encoding command signal lines of CMD 634, and each has a separate chip select (CS_n) signal line to select individual memory devices.

It will be understood that in the example of system 600, the bus between memory controller 620 and memory devices 640 includes a subsidiary command bus CMD 634 and a subsidiary data bus DQ 636 to carry the write and read data. In one example, the data bus can include bidirectional lines for read data and for write/command data. In another example, the subsidiary bus DQ 636 can include unidirectional write signal lines for writes of data from the host to memory, and can include unidirectional lines for reads of data from memory to the host. In accordance with the chosen memory technology and system design, other signals 638 can accompany a bus or sub-bus, such as strobe lines DQS. Based on the design of system 600, or its implementation if the design supports multiple implementations, the data bus can have more or less bandwidth per memory device 640. For example, the data bus can support memory devices that have either an x4 interface, an x8 interface, an x16 interface, or another interface. In the convention "xW", W is an integer that refers to the interface size or width of the interface of memory device 640, representing the number of signal lines for exchanging data with memory controller 620. The interface size of the memory devices is a controlling factor on how many memory devices can be used concurrently per channel in system 600, or coupled in parallel to the same signal lines. In one example, high bandwidth memory devices, wide interface devices, or stacked memory configurations, or combinations, can enable wider interfaces, such as an x128 interface, an x256 interface, an x512 interface, an x1024 interface, or another data bus interface width.

In one example, memory devices 640 and memory controller 620 exchange data over the data bus in a burst, or a sequence of consecutive data transfers. The burst corresponds to a number of transfer cycles, related to the bus frequency. In one example, the transfer cycle can be a whole clock cycle for transfers occurring on the same clock or strobe signal edge (e.g., the rising edge). In one example, every clock cycle, referring to a cycle of the system clock, is separated into multiple unit intervals (UIs), where each UI is a transfer cycle. For example, double data rate transfers trigger on both edges of the clock signal (e.g., rising and falling). A burst can last for a configured number of UIs, which can be a configuration stored in a register, or triggered on the fly. For example, a sequence of eight consecutive transfer periods can be considered a burst length of eight (BL8), and each memory device 640 can transfer data on each UI. Thus, an x8 memory device operating at BL8 can transfer 64 bits of data (8 data signal lines times 8 data bits transferred per line over the burst). It will be understood that this simple example is merely an illustration and is not limiting.
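The burst arithmetic above generalizes directly: an xW interface transfers W bits per UI, so a burst of n UIs moves W times n bits. A few worked values, consistent with the examples in this description:

    def bits_per_burst(interface_width: int, burst_length: int) -> int:
        # An xW device transfers W bits per unit interval over the burst.
        return interface_width * burst_length

    assert bits_per_burst(8, 8) == 64    # x8 device at BL8: 64 bits
    assert bits_per_burst(4, 16) == 64   # x4 device at BL16: the same 64-bit payload
    assert bits_per_burst(2, 18) == 36   # one x2 subchannel at BL18, as in diagram 204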
Memory devices 640 represent memory resources for system 600. In one example, each memory device 640 is a separate memory die. In one example, each memory device 640 can interface with multiple (e.g., two) channels per device or per die. Each memory device 640 includes I/O interface logic 642, which has a bandwidth determined by the implementation of the device (e.g., x16 or x8 or some other interface bandwidth). I/O interface logic 642 enables the memory device to interface with memory controller 620. I/O interface logic 642 can include a hardware interface, and can be in accordance with I/O 622 of the memory controller, but at the memory device end. In one example, multiple memory devices 640 are connected in parallel to the same command and data buses. In another example, multiple memory devices 640 are connected in parallel to the same command bus, and are connected to different data buses. For example, system 600 can be configured with multiple memory devices 640 coupled in parallel, with each memory device responding to a command and accessing memory resources 660 internal to each. For a write operation, an individual memory device 640 can write a portion of the overall data word, and for a read operation, an individual memory device 640 can fetch a portion of the overall data word. The remaining bits of the word are provided or received by other memory devices in parallel.
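A hypothetical sketch (names invented for illustration) of this word slicing: with the devices coupled in parallel, device k owns a fixed slice of every data word, and the slices together reconstruct the word.

    def device_slices(word_bits: int, num_devices: int) -> dict:
        # Map each device to the bit positions of the word it provides.
        assert word_bits % num_devices == 0
        width = word_bits // num_devices
        return {dev: range(dev * width, (dev + 1) * width)
                for dev in range(num_devices)}

    # e.g., a 64-bit transfer across eight x8 devices:
    # device 0 provides bits 0..7, device 1 provides bits 8..15, and so on.
    print(device_slices(64, 8))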
In one example, memory devices 640 are disposed directly on a motherboard or host system platform of a computing device (e.g., a PCB (printed circuit board) on which processor 610 is disposed). In one example, memory devices 640 can be organized into memory modules 670. In one example, memory modules 670 represent dual inline memory modules (DIMMs). In one example, memory modules 670 represent another organization of multiple memory devices that share at least a portion of access or control circuitry, which can be a separate circuit, a separate device, or a separate board from the host system platform. Memory modules 670 can include multiple memory devices 640, and the memory modules can include support for multiple separate channels to the included memory devices disposed on them. In another example, memory devices 640 can be incorporated into the same package as memory controller 620, such as by techniques such as multi-chip modules (MCMs), package-on-package, through-silicon vias (TSVs), or other techniques or combinations. Similarly, in one example, multiple memory devices 640 can be incorporated into memory modules 670, which themselves can be incorporated into the same package as memory controller 620. It will be appreciated that for these and other implementations, memory controller 620 can be part of host processor 610.

Memory devices 640 each include one or more memory arrays 660. Memory array 660 represents addressable memory locations or storage locations for data. Typically, memory array 660 is managed as rows of data, accessed via wordline (row) and bitline (individual bits within a row) control. Memory array 660 can be organized as separate channels, ranks, and banks of memory. Channels can refer to independent control paths to storage locations within memory devices 640. Ranks can refer to common locations across multiple parallel memory devices (e.g., the same row addresses within different devices). Banks can refer to subarrays of memory locations within a memory device 640. In one example, banks of memory are divided into sub-banks with at least a portion of shared circuitry (e.g., drivers, signal lines, control logic) for the sub-banks, allowing separate addressing and access. It will be understood that channels, ranks, banks, sub-banks, bank groups, or other organizations of memory locations, and combinations of those organizations, can overlap in their application to physical resources. For example, the same physical memory locations can be accessed over a specific channel as a specific bank, which can also belong to a rank. Thus, the organization of memory resources will be understood in an inclusive, rather than exclusive, manner.

In one example, memory devices 640 include one or more registers 644. Register 644 represents one or more storage devices or storage locations that provide configuration or settings for the operation of the memory device. In one example, register 644 can provide a storage location for memory device 640 to store data for access by memory controller 620 as part of a control or management operation. In one example, register 644 includes one or more mode registers. In one example, register 644 includes one or more multipurpose registers. The configuration of locations within register 644 can configure memory device 640 to operate in different "modes," where command information can trigger different operations within memory device 640 based on the mode. Additionally or alternatively, different modes can also trigger different operations from address information or other signal lines, depending on the mode. Settings of register 644 can indicate configuration for I/O settings (e.g., timing, termination or ODT (on-die termination) 646, driver configuration, or other I/O settings).

In one example, memory device 640 includes ODT 646 as part of the interface hardware associated with I/O 642. ODT 646 can be configured as mentioned above, and provide settings for impedance to be applied to the interface for specified signal lines. In one example, ODT 646 is applied to DQ signal lines. In one example, ODT 646 is applied to command signal lines. In one example, ODT 646 is applied to address signal lines. In one example, ODT 646 can be applied to any combination of the preceding. The ODT settings can be changed based on whether a memory device is a selected target of an access operation or a non-target device. ODT 646 settings can affect the timing and reflections of signaling on the terminated lines. Careful control over ODT 646 can enable higher-speed operation with improved matching of applied impedance and loading. ODT 646 can be applied to specific signal lines of I/O interfaces 642, 622 (e.g., ODT for DQ lines or ODT for CA lines), and is not necessarily applied to all signal lines.

Memory device 640 includes controller 650, which represents control logic within the memory device to control internal operations within the memory device. For example, controller 650 decodes commands sent by memory controller 620 and generates internal operations to execute or satisfy the commands. Controller 650 can be referred to as an internal controller, and is separate from memory controller 620 of the host. Controller 650 can determine what mode is selected based on register 644, and can configure the internal execution of operations for access to memory resources 660 or other operations based on the selected mode.
Controller 650 generates control signals to control the routing of bits within memory device 640 to provide a proper interface for the selected mode and direct a command to the proper memory locations or addresses. Controller 650 includes command logic 652, which can decode command encoding received on command and address signal lines. Thus, command logic 652 can be or include a command decoder. With command logic 652, the memory device can identify commands and generate internal operations to execute requested commands.

Referring again to memory controller 620, memory controller 620 includes command (CMD) logic 624, which represents logic or circuitry to generate commands to send to memory devices 640. The generation of the commands can refer to the command prior to scheduling, or the preparation of queued commands ready to be sent. Generally, signaling in memory subsystems includes address information within or accompanying the command, to indicate or select one or more memory locations where the memory devices should execute the command. In response to scheduling of transactions for memory devices 640, memory controller 620 can issue commands via I/O 622 to cause memory devices 640 to execute the commands. In one example, controller 650 of memory device 640 receives and decodes command and address information received via I/O 642 from memory controller 620. Based on the received command and address information, controller 650 can control the timing of operations of the logic and circuitry within memory device 640 to execute the commands. Controller 650 is responsible for compliance with standards or specifications within memory device 640, such as timing and signaling requirements. Memory controller 620 can implement compliance with standards or specifications by access scheduling and control.

Memory controller 620 includes scheduler 630, which represents logic or circuitry to generate and order transactions to send to memory devices 640. From one perspective, the primary function of memory controller 620 could be said to be scheduling memory access and other transactions to memory devices 640. Such scheduling can include generating the transactions themselves to implement the requests for data by processor 610 and to maintain integrity of the data (e.g., with commands related to refresh). Transactions can include one or more commands, and result in the transfer of commands or data or both over one or multiple timing cycles, such as clock cycles or unit intervals. Transactions can be for access, such as read or write or related commands, or a combination, and other transactions can include memory management commands for configuration, settings, data integrity, or other commands, or a combination.

Memory controller 620 typically includes logic such as scheduler 630 to allow selection and ordering of transactions to improve the performance of system 600. Thus, memory controller 620 can select which of the outstanding transactions should be sent to memory devices 640 in which order, which is typically achieved with logic much more complex than a simple first-in first-out algorithm. Memory controller 620 manages the transmission of the transactions to memory devices 640, and manages the timing associated with the transactions. In one example, transactions have deterministic timing, which can be managed by memory controller 620 and used in determining how to schedule the transactions with scheduler 630.
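To make "more complex than first-in first-out" concrete, the following is a simplified, hypothetical scheduling policy (a textbook first-ready style pick, not necessarily what scheduler 630 implements): prefer the oldest transaction whose target bank is ready, and fall back to strict FIFO otherwise.

    from collections import deque, namedtuple

    Transaction = namedtuple("Transaction", ["cmd", "bank"])  # hypothetical shape

    def pick_next(queue: deque, bank_ready) -> Transaction:
        # queue is ordered oldest-first; bank_ready(bank) -> bool is assumed.
        for txn in queue:
            if bank_ready(txn.bank):          # oldest transaction to a ready bank
                queue.remove(txn)
                return txn
        return queue.popleft()                # nothing ready: plain FIFO fallback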
In one example, memory controller 620 includes refresh (REF) logic 626. Refresh logic 626 can be used for memory resources that are volatile and need to be refreshed to retain a deterministic state. In one example, refresh logic 626 indicates a location for refresh, and a type of refresh to perform. Refresh logic 626 can trigger self-refresh within memory devices 640, or execute external refreshes (which can be referred to as auto-refresh commands) by sending refresh commands, or a combination. In one example, controller 650 within memory device 640 includes refresh logic 654 to apply refresh within memory device 640. In one example, refresh logic 654 generates internal operations to perform refresh in accordance with an external refresh received from memory controller 620. Refresh logic 654 can determine if a refresh is directed to memory device 640, and what memory resources 660 to refresh in response to the command.

FIG. 7 is a block diagram of an example of a computing system in which ECC subchannel separation can be implemented. System 700 represents a computing device in accordance with any example herein, and can be a laptop computer, a desktop computer, a tablet computer, a server, a gaming or entertainment control system, an embedded computing device, or another electronic device. System 700 provides an example of a system into which a system compatible with diagram 202 can be incorporated.

In one example, memory subsystem 720 includes ECC separation 790 in memory 730. ECC separation 790 represents hardware and logic to implement ECC separation with channel subdivision within the memory device, in accordance with any example herein. The ECC separation includes additional hardware resources to provide more driver circuitry to manage the multiple portions of the memory array as separate subchannels for internal ECC operations. ECC separation 790 can control the application of ECC by an on-die ECC circuit.

System 700 includes processor 710, which provides processing or execution of instructions for system 700, and can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware, or a combination. Processor 710 controls the overall operation of system 700, and can be or include one or more programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application specific integrated circuits (ASICs), programmable logic devices (PLDs), or the like, or a combination of such devices.

In one example, system 700 includes interface 712 coupled to processor 710, which can represent a higher speed interface or a high throughput interface for system components that need higher bandwidth connections, such as memory subsystem 720 or graphics interface components 740. Interface 712 represents an interface circuit, which can be a standalone component or integrated onto a processor die. Interface 712 can be integrated as a circuit onto the processor die or integrated as a component on a system on a chip. Where present, graphics interface 740 interfaces to graphics components to provide a visual display to a user of system 700.
Graphics interface 740 can be a standalone component or integrated onto the processor die or system on a chip. In one example, graphics interface 740 can drive a high definition (HD) display or an ultra high definition (UHD) display that provides an output to a user. In one example, the display can include a touchscreen display. In one example, graphics interface 740 generates a display based on data stored in memory 730, or based on operations executed by processor 710, or both.

Memory subsystem 720 represents the main memory of system 700, and provides storage for code to be executed by processor 710, or data values to be used in executing a routine. Memory subsystem 720 can include one or more memory devices 730 such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, 3DXP (three-dimensional crosspoint), or other memory devices, or a combination of such devices. Memory 730 stores and hosts, among other things, operating system (OS) 732 to provide a software platform for execution of instructions in system 700. Additionally, applications 734 can execute on the software platform of OS 732 from memory 730. Applications 734 represent programs that have their own operational logic to perform execution of one or more functions. Processes 736 represent agents or routines that provide auxiliary functions to OS 732 or one or more applications 734, or a combination. OS 732, applications 734, and processes 736 provide software logic to provide functions for system 700. In one example, memory subsystem 720 includes memory controller 722, which is a memory controller to generate and issue commands to memory 730. It will be understood that memory controller 722 could be a physical part of processor 710 or a physical part of interface 712. For example, memory controller 722 can be an integrated memory controller, integrated onto a circuit with processor 710, such as integrated onto the processor die or a system on a chip.

While not specifically illustrated, it will be understood that system 700 can include one or more buses or bus systems between devices, such as a memory bus, a graphics bus, or interface buses. Buses or other signal lines can communicatively or electrically couple components together, or both communicatively and electrically couple the components. Buses can include physical communication lines, point-to-point connections, bridges, adapters, or controllers, or a combination of such circuitry. Buses can include, for example, one or more of a system bus, a peripheral component interconnect (PCI) bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), or other buses, or a combination.

In one example, system 700 includes interface 714, which can be coupled to interface 712. Interface 714 can be a lower speed interface than interface 712. In one example, interface 714 represents an interface circuit, which can include standalone components and integrated circuitry. In one example, multiple user interface components or peripheral components, or both, couple to interface 714. Network interface 750 provides system 700 the ability to communicate with remote devices (e.g., servers or other computing devices) over one or more networks.
Network interface 750 can include an Ethernet® adapter, wireless interconnection components, cellular network interconnection components, USB (universal serial bus), or other wired or wireless standards-based or proprietary interfaces. Network interface 750 can exchange data with a remote device, which can include sending data stored in memory or receiving data to be stored in memory.

In one example, system 700 includes one or more input/output (I/O) interfaces 760. I/O interface 760 can include one or more interface components through which a user interacts with system 700 (e.g., audio, alphanumeric, tactile/touch, or other interfacing). Peripheral interface 770 can include any hardware interface not specifically mentioned above. Peripherals refer generally to devices that connect dependently to system 700. A dependent connection is one where system 700 provides the software platform or hardware platform, or both, on which an operation executes and with which a user interacts.

In one example, system 700 includes storage subsystem 780 to store data in a nonvolatile manner. In one example, in certain system implementations, at least certain components of storage 780 can overlap with components of memory subsystem 720. Storage subsystem 780 includes storage device(s) 784, which can be or include any conventional medium for storing large amounts of data in a nonvolatile manner, such as one or more magnetic, solid state, 3DXP, or optical based disks, or a combination. Storage 784 holds code or instructions and data 786 in a persistent state (i.e., the value is retained despite interruption of power to system 700). Storage 784 can be generically considered to be a "memory," although memory 730 is typically the executing or operating memory to provide instructions to processor 710. Whereas storage 784 is nonvolatile, memory 730 can include volatile memory (i.e., the value or state of the data is indeterminate if power is interrupted to system 700). In one example, storage subsystem 780 includes controller 782 to interface with storage 784. In one example, controller 782 is a physical part of interface 714 or processor 710, or can include circuits or logic in both processor 710 and interface 714.

Power source 702 provides power to the components of system 700. More specifically, power source 702 typically interfaces to one or multiple power supplies 704 in system 700 to provide power to the components of system 700. In one example, power supply 704 includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., photovoltaic) power source 702. In one example, power source 702 includes a DC power source, such as an external AC to DC converter. In one example, power source 702 or power supply 704 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 702 can include an internal battery or fuel cell source.

FIG. 8 is a block diagram of an example of a mobile device in which ECC subchannel separation can be implemented. System 800 represents a mobile computing device, such as a computing tablet, a mobile phone or smartphone, a wearable computing device, or another mobile device, or an embedded computing device. It will be understood that certain of the components are shown generally, and not all components of such a device are shown in system 800.
System 800 provides an example of a system into which a system compatible with diagram 202 can be incorporated.

In one example, memory subsystem 860 includes ECC separation 890 in memory 862. ECC separation 890 represents hardware and logic to implement ECC separation with channel subdivision within the memory device, in accordance with any example herein. The ECC separation includes additional hardware resources to provide more driver circuitry to manage the multiple portions of the memory array as separate subchannels for internal ECC operations. ECC separation 890 can control the application of ECC by an on-die ECC circuit.

System 800 includes processor 810, which performs the primary processing operations of system 800. Processor 810 can include one or more physical devices, such as microprocessors, application processors, microcontrollers, programmable logic devices, or other processing means. The processing operations performed by processor 810 include the execution of an operating platform or operating system on which applications and device functions are executed. The processing operations include operations related to I/O (input/output) with a human user or with other devices, operations related to power management, operations related to connecting system 800 to another device, or a combination. The processing operations can also include operations related to audio I/O, display I/O, or other interfacing, or a combination. Processor 810 can execute data stored in memory. Processor 810 can write or edit data stored in memory.

In one example, system 800 includes one or more sensors 812. Sensors 812 represent embedded sensors or interfaces to external sensors, or a combination. Sensors 812 enable system 800 to monitor or detect one or more conditions of an environment or a device in which system 800 is implemented. Sensors 812 can include environmental sensors (such as temperature sensors, motion detectors, light detectors, cameras, chemical sensors (e.g., carbon monoxide sensors, carbon dioxide sensors, or other chemical sensors)), pressure sensors, accelerometers, gyroscopes, medical or physiological sensors (e.g., biosensors for detecting physiological attributes, heart rate monitors, or other sensors), or other sensors, or a combination. Sensors 812 can also include sensors for biometric systems such as fingerprint recognition systems, face detection or recognition systems, or other systems that detect or recognize user features. Sensors 812 should be understood broadly, and are not limiting on the many different types of sensors that could be implemented with system 800. In one example, one or more sensors 812 couple to processor 810 via a frontend circuit integrated with processor 810. In one example, one or more sensors 812 couple to processor 810 via another component of system 800.

In one example, system 800 includes audio subsystem 820, which represents hardware (e.g., audio hardware and audio circuits) and software (e.g., drivers, codecs) components associated with providing audio functions to the computing device. Audio functions can include speaker or headphone output, as well as microphone input. Devices for such functions can be integrated into system 800, or connected to system 800.
In one example, a user interacts with system 800 by providing audio commands that are received and processed by processor 810.

Display subsystem 830 represents hardware (e.g., display devices) and software components (e.g., drivers) that provide a visual display for presentation to a user. In one example, the display includes tactile components or touchscreen elements for a user to interact with the computing device. Display subsystem 830 includes display interface 832, which includes the particular screen or hardware device used to provide a display to a user. In one example, display interface 832 includes logic separate from processor 810 (such as a graphics processor) to perform at least some processing related to the display. In one example, display subsystem 830 includes a touchscreen device that provides both output and input to a user. In one example, display subsystem 830 includes a high definition (HD) or ultra high definition (UHD) display that provides an output to a user. In one example, the display subsystem includes or drives a touchscreen display. In one example, display subsystem 830 generates display information based on data stored in memory, or based on operations executed by processor 810, or both.

I/O controller 840 represents hardware devices and software components related to interaction with a user. I/O controller 840 can operate to manage hardware that is part of audio subsystem 820, or display subsystem 830, or both. Additionally, I/O controller 840 illustrates a connection point for additional devices that connect to system 800, through which a user might interact with the system. For example, devices that can be attached to system 800 can include microphone devices, speaker or stereo systems, video systems or other display devices, keyboard or keypad devices, buttons/switches, or other I/O devices for use with specific applications, such as card readers or other devices.

As mentioned above, I/O controller 840 can interact with audio subsystem 820, or display subsystem 830, or both. For example, input through a microphone or other audio device can provide input or commands for one or more applications or functions of system 800. Additionally, audio output can be provided instead of or in addition to display output. In another example, if the display subsystem includes a touchscreen, the display device also acts as an input device, which can be at least partially managed by I/O controller 840. There can also be additional buttons or switches on system 800 to provide I/O functions managed by I/O controller 840.

In one example, I/O controller 840 manages devices such as accelerometers, cameras, light sensors or other environmental sensors, gyroscopes, global positioning systems (GPS), or other hardware that can be included in system 800, or sensors 812. The input can be part of direct user interaction, and can provide environmental input to the system to influence its operations (such as filtering for noise, adjusting displays for brightness detection, applying a flash for a camera, or other features).

In one example, system 800 includes power management 850, which manages battery power usage, charging of the battery, and features related to power saving operation. Power management 850 manages power from power source 852, which provides power to the components of system 800.
In one example, power source 852 includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be renewable energy (e.g., photovoltaic power, motion based power). In one example, power source 852 includes only DC power, which can be provided by a DC power source, such as an external AC to DC converter. In one example, power source 852 includes wireless charging hardware to charge via proximity to a charging field. In one example, power source 852 can include an internal battery or fuel cell source.

Memory subsystem 860 includes memory device(s) 862 for storing information in system 800. Memory subsystem 860 can include nonvolatile (state does not change if power to the memory device is interrupted) or volatile (state is indeterminate if power to the memory device is interrupted) memory devices, or a combination. Memory 860 can store application data, user data, music, photos, documents, or other data, as well as system data (whether long-term or temporary) related to the execution of the applications and functions of system 800. In one example, memory subsystem 860 includes memory controller 864, which can also be considered part of the control of system 800, and could potentially be considered part of processor 810. Memory controller 864 includes a scheduler to generate and issue commands to control access to memory device 862.

Connectivity 870 includes hardware devices (e.g., wireless or wired connectors and communication hardware, or a combination of wired and wireless hardware) and software components (e.g., drivers, protocol stacks) to enable system 800 to communicate with external devices. The external devices could be separate devices, such as other computing devices, wireless access points or base stations, as well as peripherals such as headsets, printers, or other devices. In one example, system 800 exchanges data with an external device for storage in memory or for display on a display device. The exchanged data can include data to be stored in memory, or data already stored in memory, to read, write, or edit data.

Connectivity 870 can include multiple different types of connectivity. To generalize, system 800 is illustrated with cellular connectivity 872 and wireless connectivity 874. Cellular connectivity 872 refers generally to cellular network connectivity provided by wireless carriers, such as provided via GSM® (global system for mobile communications) or variations or derivatives, CDMA (code division multiple access) or variations or derivatives, TDM (time division multiplexing) or variations or derivatives, LTE (long term evolution, also referred to as "4G"), 5G, or other cellular service standards. Wireless connectivity 874 refers to wireless connectivity that is not cellular, and can include personal area networks (such as Bluetooth®), local area networks (such as WiFi), or wide area networks (such as WiMax), or other wireless communication, or a combination. Wireless communication refers to the transfer of data through the use of modulated electromagnetic radiation through a non-solid medium.
Wired communication occurs through a solid communication medium.

Peripheral connections 880 include hardware interfaces and connectors, as well as software components (e.g., drivers, protocol stacks) to make peripheral connections. It will be understood that system 800 could both be a peripheral device ("to" 882) to other computing devices, as well as have peripheral devices ("from" 884) connected to it. System 800 commonly has a "docking" connector to connect to other computing devices for purposes such as managing (e.g., downloading, uploading, changing, synchronizing) content on system 800. Additionally, a docking connector can allow system 800 to connect to certain peripherals that allow system 800 to control content output, for example, to audiovisual or other systems.

In addition to a proprietary docking connector or other proprietary connection hardware, system 800 can make peripheral connections 880 via common or standards-based connectors. Common types can include a universal serial bus (USB) connector (which can include any of a number of different hardware interfaces), a DisplayPort including a Mini DisplayPort (MDP), a high definition multimedia interface (HDMI®), or other types.

In general with respect to the descriptions herein, in one example a memory device includes additional drivers and additional subarrays per channel, where the channel as a whole is treated as one channel at the system level, while error check and correction for individual subportions internal to the memory device treats the channel as two subchannels.

In one example, the additional subarrays store additional ECC (error check and correction) data. In one example, the data bus is x4 or x8, and the additional subarrays and additional drivers apply only for the x4 implementation. In one example, the channel is 64 bits over a burst length. In one example, 128 bits are prefetched and only 64 bits are transferred to the I/O (input/output) circuitry. In one example, the memory device includes a dynamic random access memory (DRAM) device. In one example, the DRAM device includes a synchronous DRAM (SDRAM) device compatible with a double data rate (DDR) standard.

In general with respect to the descriptions herein, in one example a system includes a controller and a memory device including additional drivers and additional subarrays per channel, where the channel as a whole is treated as one channel at the system level, while error check and correction for individual subportions internal to the memory device treats the channel as two subchannels.

In one example, the additional subarrays store additional ECC (error check and correction) data. In one example, the data bus is x4 or x8, and the additional subarrays and additional drivers apply only for the x4 implementation. In one example, the channel is 64 bits over a burst length. In one example, 128 bits are prefetched and only 64 bits are transferred to the I/O (input/output) circuitry. In one example, the memory device includes a dynamic random access memory (DRAM) device. In one example, the DRAM device includes a synchronous DRAM (SDRAM) device compatible with a double data rate (DDR) standard. In one example, the system further includes one or more of a host processor device coupled to the memory device, a display communicatively coupled to the host processor, a network interface communicatively coupled to the host processor, or a battery to power the system.
In general with respect to the descriptions herein, in one example a memory device includes: a hardware interface to couple to data signal lines to exchange data with a host; and error check and correction (ECC) hardware to apply ECC internally within the memory device to N data bits as two groups of N/2 bits.

In one example, N equals 64. In one example, the memory device includes hardware to prefetch 128 bits of data, and transfer only 64 bits of the data to I/O (input/output) circuitry of the hardware interface. In one example, the memory device further includes a memory array, where the memory array includes multiple subarrays to provide the N data bits, and where the memory array includes additional subarrays beyond the N data bits to store additional ECC data. In one example, the memory device includes drivers associated with the subarrays, and the memory device includes additional drivers to control access to the additional subarrays. In one example, the hardware interface is to couple to a data bus that is x4 or x8, and the additional subarrays and additional drivers apply only when the hardware interface couples to an x4 data bus. In one example, the N data bits comprise a channel, the ECC hardware treats the channel as two subchannels each of N/2 bits, and the host treats the channel as an N-bit channel for system-level ECC. In one example, the N data bits comprise a channel, the ECC hardware treats the channel as two subchannels each of N/2 bits, and each subchannel is individually correctable. In one example, the memory device includes a synchronous dynamic random access memory (SDRAM) device compatible with a double data rate (DDR) standard.

In general with respect to the descriptions herein, in one example a system includes: multiple memory devices, a memory device including a hardware interface to couple to data signal lines to exchange data with a host, and error check and correction (ECC) hardware to apply ECC internally within the memory device to N data bits as two groups of N/2 bits; and a memory controller coupled to the memory devices, the memory controller to provide system-level ECC for data bits received from the memory devices.

In one example, N equals 64. In one example, the memory device includes hardware to prefetch 128 bits of data, and transfer only 64 bits of the data to I/O (input/output) circuitry of the hardware interface. In one example, the memory device includes a memory array, where the memory array includes multiple subarrays to provide the N data bits, and where the memory array includes additional subarrays beyond the N data bits to store additional ECC data. In one example, the memory device includes drivers associated with the subarrays, and the memory device includes additional drivers to control access to the additional subarrays. In one example, the hardware interface is to couple to a data bus that is x4 or x8, and the additional subarrays and additional drivers apply only when the hardware interface couples to an x4 data bus. In one example, the N data bits comprise a channel, the ECC hardware treats the channel as two subchannels each of N/2 bits, and the memory controller provides system-level ECC for the N data bits. In one example, the N data bits comprise a channel, the ECC hardware treats the channel as two subchannels each of N/2 bits, and each subchannel is individually correctable.
In one example, the memory device includes a synchronous dynamic random access memory (SDRAM) device compatible with a double data rate (DDR) standard. In one example, the system further includes one or more of a host processor device coupled to the memory controller, a display communicatively coupled to the host processor, a network interface communicatively coupled to the host processor, or a battery to power the system.

In general with respect to the descriptions herein, in one example a method includes: receiving a data access command to access N data bits of a memory array of a memory device; and applying error check and correction (ECC) internally within the memory device to the N data bits as two groups of N/2 bits.

In one example, N equals 64. In one example, internally applying ECC to the N data bits includes applying ECC to N data bits to be transmitted in response to a read command, including performing error correction for individual groups of N/2 bits. In one example, internally applying ECC to the N data bits includes applying ECC to N data bits received with a write command, including writing error codes for individual groups of N/2 bits. In one example, 128 bits of data are prefetched, and only 64 bits of the data are transferred to I/O (input/output) circuitry of the memory device. In one example, the memory array includes multiple subarrays to provide the N bits, and the memory array includes additional subarrays beyond the N bits to store additional ECC data. In one example, the memory device includes drivers associated with the subarrays, and the memory device includes additional drivers to control access to the additional subarrays. In one example, the memory device includes a hardware interface to couple to a data bus that is x4 or x8, and the additional subarrays and additional drivers apply only when the hardware interface couples to an x4 data bus. In one example, the N data bits comprise a channel, applying the ECC includes treating the channel as two subchannels each of N/2 bits, and a host coupled to the memory device treats the channel as an N-bit channel for system-level ECC. In one example, the N data bits comprise a channel, the ECC hardware treats the channel as two subchannels each of N/2 bits, and each subchannel is individually correctable. In one example, the memory device includes a synchronous dynamic random access memory (SDRAM) device compatible with a double data rate (DDR) standard.

Flow diagrams as illustrated herein provide examples of sequences of various process actions. A flow diagram can indicate operations to be executed by a software or firmware routine, as well as physical operations. A flow diagram can illustrate an example of the implementation of states of a finite state machine (FSM), which can be implemented in hardware and/or software. Although shown in a particular sequence or order, unless otherwise specified, the order of the actions can be modified. Thus, the illustrated diagrams should be understood only as examples; the process can be performed in a different order, and some actions can be performed in parallel. Additionally, one or more actions can be omitted; thus, not all implementations will perform all actions.

To the extent various operations or functions are described herein, they can be described or defined as software code, instructions, configuration, and/or data.
The content can be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). The software content described herein can be provided via an article of manufacture with the content stored thereon, or via a method of operating a communication interface to send data via the communication interface. A machine-readable storage medium can cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., computing device, electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, etc. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.

The various components described herein can be a means for performing the operations or functions described. Each component described herein includes software, hardware, or a combination of these. The components can be implemented as software modules, hardware modules, special-purpose hardware (e.g., application-specific hardware, application-specific integrated circuits (ASICs), digital signal processors (DSPs), etc.), embedded controllers, hardwired circuitry, and so forth.

Besides what is described herein, various modifications can be made to the disclosed embodiments and implementations of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow.

Further possible claims: [Item 1] A memory device, comprising: a hardware interface to couple to data signal lines for exchanging data with a host; and error checking and correction (ECC) hardware to apply ECC, internal to the memory device, to N data bits as two groups of N/2 bits. [Item 2] The memory device of item 1, wherein N equals 64. [Item 3] The memory device of item 2, including hardware to prefetch 128 bits of data and transfer only 64 bits of the data to I/O (input/output) circuitry of the hardware interface. [Item 4] The memory device of item 1, further including a memory array, the memory array including multiple subarrays to provide the N data bits, and the memory array including additional subarrays beyond the N data bits to store additional ECC data. [Item 5] The memory device of item 4, wherein the memory device includes drivers associated with the subarrays, and the memory device includes additional drivers to control access to the additional subarrays.
[Item 6] The memory device of item 5, wherein the hardware interface is to couple to a data bus that is either x4 or x8, and wherein the additional subarrays and additional drivers are applied only when the hardware interface couples to the x4 data bus. [Item 7] The memory device of item 1, wherein the N data bits make up a channel, wherein the ECC hardware applies ECC to the channel as two subchannels of N/2 data bits each, and wherein each subchannel can be corrected individually. [Item 8] The memory device of item 1, wherein the memory device includes a synchronous dynamic random access memory (SDRAM) device compatible with a double data rate (DDR) standard. [Item 9] A system, comprising: multiple memory devices, each memory device having a hardware interface to couple to data signal lines for exchanging data with a host, and error checking and correction (ECC) hardware to apply ECC, internal to the memory device, to N data bits as two groups of N/2 bits; and a memory controller coupled to the memory devices, the memory controller to provide system-level ECC for data bits received from the memory devices. [Item 10] The system of item 9, wherein N equals 64. [Item 11] The system of item 10, wherein the memory device includes hardware to prefetch 128 bits of data and transfer only 64 bits of the data to I/O (input/output) circuitry of the hardware interface. [Item 12] The system of item 9, wherein the memory device includes a memory array, the memory array including multiple subarrays to provide the N data bits, and the memory array including additional subarrays beyond the N data bits to store additional ECC data. [Item 13] The system of item 12, wherein the memory device includes drivers associated with the subarrays, and the memory device includes additional drivers to control access to the additional subarrays. [Item 14] The system of item 13, wherein the hardware interface is to couple to a data bus that is either x4 or x8, and wherein the additional subarrays and additional drivers are applied only when the hardware interface couples to the x4 data bus. [Item 15] The system of item 9, wherein the N data bits make up a channel, wherein the ECC hardware applies ECC to the channel as two subchannels of N/2 data bits each, and wherein each subchannel can be corrected individually. [Item 16] The system of item 9, wherein the memory device includes a synchronous dynamic random access memory (SDRAM) device compatible with a double data rate (DDR) standard. [Item 17] The system of item 9, further comprising one or more of: a host processor device coupled to the memory controller; a display communicatively coupled to the host processor; a network interface communicatively coupled to the host processor; or a battery to power the system. [Item 18] A method, comprising: receiving a data access command to access N data bits of a memory array of a memory device; and applying error checking and correction (ECC), internal to the memory device, to the N data bits as two groups of N/2 bits. [Item 19] The method of item 18, wherein internally applying ECC to the N data bits includes applying ECC to N data bits to be transmitted in response to a read command, including performing error correction for the individual groups of N/2 bits.
[Item 20] The method of item 18, wherein internally applying ECC to the N data bits includes applying ECC to N data bits received with a write command, including writing error codes for the individual groups of N/2 bits. |
An apparatus configured to decode a block of video data in a coded bitstream includes a memory and a processor in communication with the memory. The memory is configured to store data associated with the block of video data in the coded bitstream. The processor is configured to: determine a transform partition type of the block, the block associated with transform coefficients determined via applying one or more transform functions on a plurality of pixel values associated with the block; determine, based on the transform partition type, an order in which the transform coefficients are to be inputted to an inverse transform function corresponding to the one or more transform functions; obtain output values via inputting the transform coefficients to the inverse transform function in the determined order; and decode the block of video data in the coded bitstream based on the output values. |
1. A method for decoding a block of video data in a coded bitstream, comprising: determining a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; determining, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; obtaining a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and decoding the block of video data in the coded bitstream based at least in part on the plurality of output values.
2. The method of claim 1, wherein determining the order in which the plurality of transform coefficients are to be input to the one or more inverse transform functions comprises rearranging the plurality of transform coefficients based at least in part on the transform partition type.
3. The method of claim 1, wherein determining the order in which the plurality of transform coefficients are to be input to the one or more inverse transform functions comprises rearranging some, but not all, of the plurality of transform coefficients based at least in part on the transform partition type.
4. The method of claim 1, further comprising selectively bypassing one or more arithmetic operation stages of the one or more inverse transform functions based at least in part on the transform partition type associated with the block of video data.
5. The method of claim 1, further comprising selectively bypassing some, but not all, arithmetic operations of a single stage of the one or more inverse transform functions based at least in part on the transform partition type associated with the block of video data, the single stage containing an arithmetic operation for each transform coefficient input to the one or more inverse transform functions.
6. The method of claim 1, wherein the one or more inverse transform functions comprise one or more of a 16-point Hadamard inverse transform function, an 8-point Hadamard inverse transform function, or a 4-point Hadamard inverse transform function.
7. The method of claim 1, further comprising rearranging the plurality of output values of the one or more inverse transform functions based at least in part on the transform partition type.
8. The method of claim 1, wherein the block of video data corresponds to one of: (i) one row of 16 pixels in a coded picture in the bitstream, (ii) two rows of 8 pixels in the coded picture in the bitstream, or (iii) four rows of 4 pixels in the coded picture in the bitstream.
9. The method of claim 1, wherein the one or more inverse transform functions comprise one or more arithmetic operation stages, each arithmetic operation stage comprising one or more of addition operations or subtraction operations.
10. The method of claim 1, wherein the transform partition type is a type other than a 16-point transform, and the one or more inverse transform functions include a 16-point inverse transform function.
11. The method of claim 1, wherein the transform partition type comprises one of: (i) two 8-point transforms, (ii) one 8-point transform and two 4-point transforms, or (iii) four 4-point transforms, and the one or more inverse transform functions include a 16-point inverse transform function.
12. An apparatus for decoding a block of video data in a coded bitstream, comprising: a memory configured to store data associated with the block of video data in the coded bitstream; and a processor in communication with the memory and configured to: determine a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; determine, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; obtain a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and decode the block of video data in the coded bitstream based at least in part on the plurality of output values.
13. The apparatus of claim 12, wherein the processor is further configured to determine the order in which the plurality of transform coefficients are to be input to the one or more inverse transform functions at least in part via rearranging the plurality of transform coefficients based at least in part on the transform partition type.
14. The apparatus of claim 12, wherein the processor is further configured to determine the order in which the plurality of transform coefficients are to be input to the one or more inverse transform functions at least in part via rearranging some, but not all, of the plurality of transform coefficients based at least in part on the transform partition type.
15. The apparatus of claim 12, wherein the processor is further configured to selectively bypass one or more arithmetic operation stages of the one or more inverse transform functions based at least in part on the transform partition type associated with the block of video data.
16. The apparatus of claim 12, wherein the processor is further configured to selectively bypass some, but not all, arithmetic operations of a single stage of the one or more inverse transform functions based at least in part on the transform partition type associated with the block of video data, the single stage containing an arithmetic operation for each transform coefficient input to the one or more inverse transform functions.
17. The apparatus of claim 12, wherein the one or more inverse transform functions comprise one or more of a 16-point Hadamard inverse transform function, an 8-point Hadamard inverse transform function, or a 4-point Hadamard inverse transform function.
18. The apparatus of claim 12, wherein the processor is further configured to rearrange the plurality of output values of the one or more inverse transform functions based at least in part on the transform partition type.
19. The apparatus of claim 12, wherein the block of video data corresponds to one of: (i) one row of 16 pixels in a coded picture in the bitstream, (ii) two rows of 8 pixels in the coded picture in the bitstream, or (iii) four rows of 4 pixels in the coded picture in the bitstream.
20. The apparatus of claim 12, wherein the one or more inverse transform functions comprise one or more arithmetic operation stages, each arithmetic operation stage comprising one or more of addition operations or subtraction operations.
21. The apparatus of claim 12, wherein the transform partition type is a type other than a 16-point transform, and the one or more inverse transform functions include a 16-point inverse transform function.
22. The apparatus of claim 12, wherein the transform partition type comprises one of: (i) two 8-point transforms, (ii) one 8-point transform and two 4-point transforms, or (iii) four 4-point transforms, and the one or more inverse transform functions include a 16-point inverse transform function.
23. A non-transitory computer-readable medium comprising code that, when executed, causes an apparatus to: store data associated with a block of video data in a coded bitstream; determine a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; determine, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; obtain a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and decode the block of video data in the coded bitstream based at least in part on the plurality of output values.
24. The computer-readable medium of claim 23, wherein the code further causes the apparatus to determine the order in which the plurality of transform coefficients are to be input to the one or more inverse transform functions at least in part via rearranging the plurality of transform coefficients based at least in part on the transform partition type.
25. The computer-readable medium of claim 23, wherein the code further causes the apparatus to selectively bypass one or more arithmetic operation stages of the one or more inverse transform functions based at least in part on the transform partition type associated with the block of video data.
26. The computer-readable medium of claim 23, wherein the transform partition type comprises one of: (i) two 8-point transforms, (ii) one 8-point transform and two 4-point transforms, or (iii) four 4-point transforms, and the one or more inverse transform functions include a 16-point inverse transform function.
27. A video coding device configured to decode a block of video data in a coded bitstream, the video coding device comprising: means for storing data associated with the block of video data in the coded bitstream; means for determining a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; means for determining, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; means for obtaining a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and means for decoding the block of video data in the coded bitstream based at least in part on the plurality of output values.
28. The video coding device of claim 27, wherein determining the order in which the plurality of transform coefficients are to be input to the one or more inverse transform functions includes rearranging the plurality of transform coefficients based at least in part on the transform partition type.
29. The video coding device of claim 27, further comprising means for selectively bypassing one or more arithmetic operation stages of the one or more inverse transform functions based at least in part on the transform partition type associated with the block of video data.
30. The video coding device of claim 27, wherein the transform partition type comprises one of: (i) two 8-point transforms, (ii) one 8-point transform and two 4-point transforms, or (iii) four 4-point transforms, and the one or more inverse transform functions include a 16-point inverse transform function. |
Systems and methods for reuse of transform structures for multi-partition transforms

Technical field

The present disclosure relates to the field of video coding and compression and, in particular, to video compression for transmission over display links, i.e., display link video compression.

Background

Digital video capabilities can be incorporated into a wide range of displays, including digital televisions, personal digital assistants (PDAs), laptop computers, desktop monitors, digital cameras, digital recording devices, digital media players, video gaming devices, video game consoles, cellular or satellite radio telephones, video teleconferencing devices, and the like. Display links are used to connect displays to appropriate source devices. The bandwidth requirements of a display link are proportional to the resolution of the display; consequently, high-resolution displays require large-bandwidth display links. Some display links do not have the bandwidth to support high-resolution displays. Video compression can be used to reduce the bandwidth requirements such that lower-bandwidth display links can be used to provide digital video to high-resolution displays.

Others have tried to utilize image compression on the pixel data. However, such schemes are sometimes not visually lossless, or can be difficult and expensive to implement in conventional display devices.

The Video Electronics Standards Association (VESA) has developed Display Stream Compression (DSC) as a standard for display link video compression. A display link video compression technique, such as DSC, should provide, among other things, visually lossless picture quality (i.e., pictures having a level of quality such that users cannot tell that compression is active). A display link video compression technique should also provide a scheme that is simple and inexpensive to implement in real time using conventional hardware.

Summary

The systems, methods, and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.

In one aspect, a method of decoding a block of video data in a coded bitstream includes: determining a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; determining, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; obtaining a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and decoding the block of video data in the coded bitstream based at least in part on the plurality of output values.

In another aspect, an apparatus for decoding a block of video data in a coded bitstream includes a memory and a processor in communication with the memory. The memory is configured to store data associated with the block of video data in the coded bitstream.
The processor is configured to: determine a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; determine, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; obtain a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and decode the block of video data in the coded bitstream based at least in part on the plurality of output values.

In another aspect, a non-transitory computer-readable medium contains code that, when executed, causes an apparatus to: store data associated with a block of video data in a coded bitstream; determine a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; determine, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; obtain a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and decode the block of video data in the coded bitstream based at least in part on the plurality of output values.

In another aspect, a video coding device configured to decode a block of video data in a coded bitstream includes: means for storing data associated with the block of video data in the coded bitstream; means for determining a transform partition type associated with the block, the block being associated with a plurality of transform coefficients determined at least in part via applying one or more transform functions to a plurality of pixel values associated with the block; means for determining, based on the transform partition type, an order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions; means for obtaining a plurality of output values at least in part via inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order; and means for decoding the block of video data in the coded bitstream based at least in part on the plurality of output values.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a block diagram illustrating an example video encoding and decoding system that may utilize techniques in accordance with aspects described in this disclosure.

FIG. 1B is a block diagram illustrating another example video encoding and decoding system that may perform techniques in accordance with aspects described in this disclosure.

FIG. 2A is a block diagram illustrating an example of a video encoder that may implement techniques in accordance with aspects described in this disclosure.

FIG. 2B is a block diagram illustrating an example of a video decoder that may implement techniques in accordance with aspects described in this disclosure.

FIG. 3 is an example of transform partitioning on the encoder side.

FIG. 4 is an example of transform partitioning on the decoder side.

FIGS. 5A-5D illustrate example pixel partitions used in various partition types.
FIGS. 6A-6D illustrate example implementations of various partition types using a single inverse transform structure.

FIG. 7 is a block diagram illustrating a method of reusing a transform structure for multi-partition transforms performed by a decoder in accordance with aspects described in this disclosure.

FIG. 8 is an example of transform partitioning on the decoder side in accordance with aspects described in this disclosure.

Detailed description

In general, this disclosure relates to methods of improving video compression techniques, such as those utilized in display link video compression. More specifically, this disclosure relates to systems and methods for implementing transform functions of multiple lengths using a single transform structure.

While certain embodiments are described herein in the context of the Display Stream Compression (DSC) standard, which is an example of a display link video compression technique, one having ordinary skill in the art will appreciate that the systems and methods disclosed herein may be applicable to any suitable video coding standard. For example, the embodiments disclosed herein may be applicable to one or more of the following standards: International Telecommunication Union (ITU) Telecommunication Standardization Sector (ITU-T) H.261, International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Moving Picture Experts Group-1 (MPEG-1) Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), High Efficiency Video Coding (HEVC), and any extensions of such standards. The techniques described in this disclosure may also become part of standards developed in the future. In other words, the techniques described in this disclosure are applicable to previously developed video coding standards, video coding standards currently under development, and forthcoming video coding standards. Further, the techniques described in this disclosure are applicable to any coding scheme involving transform-based image/video compression.

A video encoder may apply one or more transforms to the pixel values or residual values to be coded in order to achieve additional compression. For example, the encoder may apply one or more transforms to a block of video data (e.g., pixel values or residual values) and obtain a block of transform coefficients (e.g., a block of transform coefficients corresponding to the block of video data). In some implementations, the encoder performs transforms of several different sizes (e.g., four different sets of transforms) and selects the transform that performs best for the particular block or portion of the image or video data (e.g., the performance closest to a desired rate-distortion performance). The encoder may signal a transform selection in the bitstream to indicate the selected transform to the decoder.

In existing decoder hardware implementations, separate inverse transform blocks are used for each transform partition type. For example, if the encoder is configured to select from four different partition types, the corresponding decoder configured to decode the bitstream generated by the encoder also includes four sets of hardware (e.g., registers, adders, subtractors, etc.), each corresponding to one of the four different partition types.
Each hardware set generates a set of output values that are fed, for example, to a multiplexer (MUX), and the decoder selects the appropriate set of output values based on the partition selection signal.

However, using multiple inverse transform blocks to decode the incoming bitstream adversely affects the cost-effectiveness of the decoder, since hardware implementations are particularly sensitive to the cost of chip area and/or implementation on the decoder side. Accordingly, there is a need for an improved method of decoding, in a more cost-effective manner, coded bitstreams transformed using designs with multiple transform partition sizes.

In this disclosure, an improved method of decoding coded bitstreams transformed using designs with multiple transform partition sizes is described. For example, an example implementation of a 16-point transform may include hardware such as adders and/or subtractors. These adders and/or subtractors can be used to perform other transforms, such as 8-point and 4-point transforms, without having to add all of the transform structures necessary to implement the 8-point and 4-point transforms separately and independently from the hardware used to implement the 16-point transform. In other words, by reusing portions of the hardware used to implement the various transforms that an encoder and/or decoder may need to perform, the hardware requirements for implementing these transforms may be reduced.

Video coding standards

A digital image, such as a video image, a TV image, a still image, or an image generated by a video recorder or a computer, may include pixels or samples arranged in horizontal and vertical lines. The number of pixels in a single image is typically in the tens of thousands. Each pixel typically contains luminance and chrominance information. Without compression, the sheer quantity of information to be conveyed from an image encoder to an image decoder would render real-time image transmission impractical. To reduce the amount of information to be transmitted, a number of different compression methods have been developed, such as the JPEG, MPEG, and H.263 standards.

Video coding standards include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual, and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), as well as HEVC, including extensions of such standards.

In addition, a video coding standard, namely DSC, has been developed by VESA. The DSC standard is a video compression standard that can compress video for transmission over display links. As the resolution of displays increases, the bandwidth of the video data required to drive the displays increases correspondingly. Some display links may not have the bandwidth to transmit all of the video data to the display at such resolutions. Accordingly, the DSC standard specifies a compression standard for interoperable, visually lossless compression over display links.

The DSC standard differs from other video coding standards, such as H.264 and HEVC. DSC includes intra-frame compression but does not include inter-frame compression, meaning that the DSC standard cannot use temporal information in coding the video data. In contrast, other video coding standards may employ inter-frame compression in their video coding techniques.

Video coding system

Various aspects of the novel systems, apparatuses, and methods are described more fully hereinafter with reference to the accompanying figures.
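To make the hardware reuse described above concrete, the following minimal software sketch is offered as an illustrative analogue, assuming Hadamard-type butterfly stages (consistent with the Hadamard inverse transforms recited earlier); it is not the implementation mandated by DSC or any other standard. Running all four butterfly stages realizes a single 16-point transform; bypassing the final stage realizes two independent 8-point transforms on the two halves; bypassing the final two stages realizes four 4-point transforms. One structure of adders and subtractors thus serves several partition types.

```c
#include <stdio.h>

/* One shared in-place Walsh-Hadamard butterfly structure over 16 values.
 * `stages` selects how many butterfly stages run: 4 realizes one 16-point
 * transform, 3 realizes two 8-point transforms (one per half), and
 * 2 realizes four 4-point transforms (one per quarter). */
static void shared_wht16(int x[16], int stages) {
    for (int s = 0, len = 1; s < stages; s++, len <<= 1) {
        for (int i = 0; i < 16; i += len << 1) {
            for (int j = i; j < i + len; j++) {
                int a = x[j], b = x[j + len];
                x[j]       = a + b;   /* shared adders ...       */
                x[j + len] = a - b;   /* ... and subtractors     */
            }
        }
    }
}

int main(void) {
    int x[16] = {3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3};
    shared_wht16(x, 3);   /* two 8-point transforms (stage 4 bypassed) */
    shared_wht16(x, 3);   /* the unnormalized transform is its own     */
    for (int i = 0; i < 16; i++)
        x[i] >>= 3;       /* inverse up to a scale of 2^stages = 8     */
    for (int i = 0; i < 16; i++) printf("%d ", x[i]);
    printf("\n");         /* prints the original sample values         */
    return 0;
}
```

Because the unnormalized Walsh-Hadamard butterflies are their own inverse up to a power-of-two scale, the same structure stands in for both the forward and inverse paths in this sketch, mirroring how selectively bypassing arithmetic stages based on the transform partition type lets a single 16-point inverse transform structure serve the other partition types.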
However, the teachings of this disclosure may be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the novel systems, apparatuses, and methods disclosed herein, whether implemented independently of, or combined with, any other aspect of the present disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the present disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the present disclosure set forth herein. It should be understood that any aspect disclosed herein may be embodied by one or more elements of a claim.

Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different wireless technologies, system configurations, networks, and transmission protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.

Several examples are illustrated in the attached figures. Elements indicated by reference numbers in the figures correspond to elements indicated by like reference numbers in the following description. In this disclosure, elements having names that begin with ordinal words (e.g., "first," "second," "third," and so on) do not necessarily imply that the elements have a particular order. Rather, such ordinal words are merely used to refer to different elements of the same or a similar type.

FIG. 1A is a block diagram illustrating an example video coding system 10 that may utilize techniques in accordance with aspects described in this disclosure. As used herein, the term "video coder" or "coder" refers generically to both video encoders and video decoders. In this disclosure, the terms "video coding" or "coding" may refer generically to video encoding and video decoding. In addition to video encoders and video decoders, the aspects described in the present application may be extended to other related devices, such as transcoders (e.g., devices that can decode a bitstream and re-encode another bitstream) and middleboxes (e.g., devices that can modify, transform, and/or otherwise manipulate a bitstream).

As shown in FIG. 1A, the video coding system 10 includes a source device 12 (i.e., "video coding device 12" or "coding device 12") that generates encoded video data to be decoded at a later time by a destination device 14 (i.e., "video coding device 14" or "coding device 14"). In the example of FIG. 1A, the source device 12 and the destination device 14 constitute separate devices.
However, it should be noted that the source device 12 and the destination device 14 may be on, or part of, the same device, as shown in the example of FIG. 1B.

Referring again to FIG. 1A, the source device 12 and the destination device 14 may each comprise any of a wide range of devices, also referred to as video coding devices, including desktop computers, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, so-called "smart" pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or the like. In various embodiments, the source device 12 and the destination device 14 may be equipped for wireless communication (i.e., configured to communicate via wireless communication).

The video coding devices 12, 14 of the video coding system 10 may be configured to communicate via wireless networks and radio technologies, such as wireless wide area network (WWAN) (e.g., cellular) and/or wireless local area network (WLAN) carriers. The terms "network" and "system" are often used interchangeably. Each of the video coding devices 12, 14 may be a user equipment (UE), a wireless device, a terminal, a mobile station, a subscriber unit, or the like.

The WWAN carriers may include, for example, wireless communication networks such as Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), Orthogonal FDMA (OFDMA), Single-Carrier FDMA (SC-FDMA), and other networks. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), CDMA2000, and the like. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. CDMA2000 covers the IS-2000, IS-95, and IS-856 standards. A TDMA network may implement a radio technology such as the Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, and the like. UTRA and E-UTRA are part of the Universal Mobile Telecommunications System (UMTS). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are new releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, and GSM are described in documents from an organization named the "3rd Generation Partnership Project" (3GPP). CDMA2000 and UMB are described in documents from an organization named the "3rd Generation Partnership Project 2" (3GPP2).

The video coding devices 12, 14 of the video coding system 10 may also communicate with each other via a WLAN base station according to one or more standards, such as the IEEE 802.11 standard, including, for example, the following amendments: 802.11a-1999 (commonly called "802.11a"), 802.11b-1999 (commonly called "802.11b"), 802.11g-2003 (commonly called "802.11g"), and so on.

The destination device 14 may receive, via a link 16, the encoded video data to be decoded. The link 16 may comprise any type of medium or device capable of moving the encoded video data from the source device 12 to the destination device 14. In the example of FIG. 1A, the link 16 may comprise a communication medium to enable the source device 12 to transmit the encoded video data to the destination device 14 in real time. The encoded video data may be modulated according to a communication standard (e.g., a wireless communication protocol) and transmitted to the destination device 14.
The communication medium may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network (e.g., a local area network, a wide-area network, or a global network such as the Internet). The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from the source device 12 to the destination device 14.

In the example of FIG. 1A, the source device 12 includes a video source 18, a video encoder 20 (also referred to simply as encoder 20), and an output interface 22. In some cases, the output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. In the source device 12, the video source 18 may include a source such as a video capture device (e.g., a video camera), a video archive containing previously captured video, a video feed interface to receive video from a video content provider, and/or a computer graphics system for generating computer graphics data as the source video, or a combination of such sources. As one example, if the video source 18 is a video camera, the source device 12 and the destination device 14 may form so-called "camera phones" or "video phones," as illustrated in the example of FIG. 1B. However, the techniques described in this disclosure are applicable to video coding in general, and may be applied to wireless and/or wired applications.

The captured, pre-captured, or computer-generated video may be encoded by the video encoder 20. The encoded video data may be transmitted to the destination device 14 via the output interface 22 of the source device 12. The encoded video data may also (or alternatively) be stored onto a storage device 31 for later access by the destination device 14 or other devices, for decoding and/or playback. The video encoder 20 illustrated in FIGS. 1A and 1B may comprise the video encoder 20 illustrated in FIG. 2A or any other video encoder described herein.

In the example of FIG. 1A, the destination device 14 includes an input interface 28, a video decoder 30 (also referred to simply as decoder 30), and a display device 32. In some cases, the input interface 28 may include a receiver and/or a modem. The input interface 28 of the destination device 14 may receive the encoded video data over the link 16 and/or from the storage device 31. The encoded video data communicated over the link 16, or provided on the storage device 31, may include a variety of syntax elements generated by the video encoder 20 for use by a video decoder, such as the video decoder 30, in decoding the video data. Such syntax elements may be included with the encoded video data transmitted on a communication medium, stored on a storage medium, or stored on a file server. The video decoder 30 illustrated in FIGS. 1A and 1B may comprise the video decoder 30 illustrated in FIG. 2B or any other video decoder described herein.

The display device 32 may be integrated with, or external to, the destination device 14. In some examples, the destination device 14 may include an integrated display device and may also be configured to interface with an external display device. In other examples, the destination device 14 may be a display device. In general, the display device 32 displays the decoded video data to a user, and may comprise any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.
In related aspects, FIG. 1B shows an example video coding system 10' wherein the source device 12 and the destination device 14 are on, or part of, a device 11. The device 11 may be a telephone handset, such as a "smart" phone or the like. The device 11 may include a processor/controller device 13 (optionally present) in operative communication with the source device 12 and the destination device 14. The video coding system 10' of FIG. 1B, and components thereof, are otherwise similar to the video coding system 10 of FIG. 1A, and components thereof.

The video encoder 20 and the video decoder 30 may operate according to a video compression standard, such as DSC. Alternatively, the video encoder 20 and the video decoder 30 may operate according to other proprietary or industry standards, such as the ITU-T H.264 standard (alternatively referred to as MPEG-4, Part 10, AVC), HEVC, or extensions of such standards. The techniques of this disclosure, however, are not limited to any particular coding standard. Other examples of video compression standards include MPEG-2 and ITU-T H.263.

Although not shown in the examples of FIGS. 1A and 1B, the video encoder 20 and the video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate multiplexer-demultiplexer (MUX-DEMUX) units, or other hardware and software, to handle the encoding of both audio and video in a common data stream or separate data streams. If applicable, in some examples, the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the user datagram protocol (UDP).

The video encoder 20 and the video decoder 30 each may be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the techniques are implemented partially in software, a device may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of the video encoder 20 and the video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder in a respective device.

Video coding process

As mentioned briefly above, the video encoder 20 encodes video data. The video data may comprise one or more pictures. Each of the pictures is a still image forming part of a video. In some instances, a picture may be referred to as a video "frame." When the video encoder 20 encodes the video data (e.g., video coding layer (VCL) data and/or non-VCL data), the video encoder 20 may generate a bitstream. The bitstream may include a sequence of bits that forms a coded representation of the video data. The bitstream may include coded pictures and associated data. A coded picture is a coded representation of a picture. VCL data may include coded picture data (i.e., information associated with samples of a coded picture), and non-VCL data may include control information (e.g., parameter sets and/or supplemental enhancement information) associated with the one or more coded pictures.

To generate the bitstream, the video encoder 20 may perform encoding operations on each picture in the video data. When the video encoder 20 performs encoding operations on a picture, the video encoder 20 may generate a series of coded pictures and associated data.
The associated data may include a set of coding parameters, such as a quantization parameter (QP). To generate a coded picture, the video encoder 20 may partition a picture into equally-sized video blocks. A video block may be a two-dimensional array of samples. The coding parameters may define a coding option (e.g., a coding mode) for every block of the video data. The coding option may be selected in order to achieve a desired rate-distortion performance.

In some examples, the video encoder 20 may partition a picture into a plurality of slices. Each of the slices may include a spatially distinct region in an image (e.g., a frame) that can be decoded independently, without information from the remaining regions in the image or frame. Each image or video frame may be encoded in a single slice, or each image or video frame may be encoded in several slices. In DSC, the number of bits allocated to encode each slice may be substantially constant. As part of performing an encoding operation on a picture, the video encoder 20 may perform encoding operations on each slice of the picture. When the video encoder 20 performs an encoding operation on a slice, the video encoder 20 may generate encoded data associated with the slice. The encoded data associated with the slice may be referred to as a "coded slice."

DSC video encoder

FIG. 2A is a block diagram illustrating an example of the video encoder 20 that may implement techniques in accordance with aspects described in this disclosure. The video encoder 20 may be configured to perform some or all of the techniques of this disclosure. In some examples, the techniques described in this disclosure may be shared among the various components of the video encoder 20. In some examples, additionally or alternatively, a processor (not shown) may be configured to perform some or all of the techniques described in this disclosure.

For purposes of explanation, this disclosure describes the video encoder 20 in the context of DSC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods.

In the example of FIG. 2A, the video encoder 20 includes a plurality of functional components. The functional components of the video encoder 20 include a color-space converter 105, a buffer 110, a flatness detector 115, a rate controller 120, a predictor, quantizer, and reconstructor component 125, a line buffer 130, an indexed color history 135, an entropy encoder 140, a substream multiplexer 145, and a rate buffer 150. In other examples, the video encoder 20 may include more, fewer, or different functional components.

The color-space converter 105 may convert an input color-space to the color-space used in the coding implementation. For example, in one exemplary embodiment, the color-space of the input video data is the red, green, and blue (RGB) color-space, and the coding is implemented in the luminance Y, chrominance green Cg, and chrominance orange Co (YCgCo) color-space. The color-space conversion may be performed by method(s) including shifts and additions to the video data. It is noted that input video data in other color-spaces may be processed, and conversions to other color-spaces may also be performed.

In related aspects, the video encoder 20 may include the buffer 110, the line buffer 130, and/or the rate buffer 150. For example, the buffer 110 may hold (e.g., store) the color-space-converted video data prior to its use by other portions of the video encoder 20.
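Returning to the shift-and-add color-space conversion described above, the following sketch implements the reversible RGB-to-YCgCo transform in its lossless form (often written YCoCg-R). This is a standard construction offered for illustration; the specific variant and the function names are assumptions, not requirements of the DSC text.

```c
#include <stdio.h>

/* Reversible RGB <-> YCgCo using only shifts and adds (YCoCg-R form).
 * Relies on arithmetic right shift for negative values, as on common
 * compilers; the DSC text only requires shift-and-add conversion. */
static void rgb_to_ycgco(int r, int g, int b, int *y, int *cg, int *co) {
    *co = r - b;
    int t = b + (*co >> 1);
    *cg = g - t;
    *y  = t + (*cg >> 1);
}

static void ycgco_to_rgb(int y, int cg, int co, int *r, int *g, int *b) {
    int t = y - (cg >> 1);
    *g = cg + t;
    *b = t - (co >> 1);
    *r = *b + co;
}

int main(void) {
    int y, cg, co, r, g, b;
    rgb_to_ycgco(200, 50, 100, &y, &cg, &co);
    ycgco_to_rgb(y, cg, co, &r, &g, &b);
    printf("%d %d %d\n", r, g, b);   /* exactly recovers 200 50 100 */
    return 0;
}
```

The round trip is exact, which is one reason this family of transforms suits visually lossless display link compression; it also explains the remark below that the converted data may require more bits (the chrominance components gain one bit of dynamic range).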
In another example, the video data may be stored in the RGB color-space, and the color-space conversion may be performed as needed, since the color-space-converted data may require more bits.

The rate buffer 150 may function as part of the rate control mechanism in the video encoder 20, which will be described in greater detail below in connection with the rate controller 120. The number of bits spent on encoding each block can vary substantially based on the nature of the block. The rate buffer 150 can smooth the rate variations in the compressed video. In some embodiments, a constant bit rate (CBR) buffer model is employed, in which bits stored in a rate buffer (e.g., the rate buffer 150) are removed from the rate buffer at a constant bit rate. In the CBR buffer model, if the video encoder 20 adds too many bits to the bitstream, the rate buffer 150 may overflow. On the other hand, the video encoder 20 may need to add enough bits in order to prevent underflow of the rate buffer 150.

On the video decoder side, bits may be added to the rate buffer 155 of the video decoder 30 (see FIG. 2B, described in further detail below) at a constant bit rate, and the video decoder 30 may remove a variable number of bits for each block. To ensure proper decoding, the rate buffer 155 of the video decoder 30 should not "underflow" or "overflow" during the decoding of the compressed bitstream.

In some embodiments, the buffer fullness (BF) can be defined based on a value BufferCurrentSize representing the number of bits currently in the buffer and a value BufferMaxSize representing the size of the rate buffer 150, i.e., the maximum number of bits that can be stored in the rate buffer 150 at any point in time. The BF may be calculated as:

BF = ((BufferCurrentSize * 100) / BufferMaxSize)

The flatness detector 115 can detect changes from complex (i.e., non-flat) areas in the video data to flat (i.e., simple or uniform) areas in the video data. The terms "complex" and "flat" will be used herein to refer generally to the difficulty for the video encoder 20 to encode the respective regions of the video data. Thus, the term "complex" as used herein generally describes a region of the video data as being complex for the video encoder 20 to encode, and may, for example, include textured video data, high spatial frequency, and/or other features that are complex to encode. The term "flat" as used herein generally describes a region of the video data as being simple for the video encoder 20 to encode, and may, for example, include a smooth gradient in the video data, low spatial frequency, and/or other features that are simple to encode. The transitions between complex and flat regions may be used by the video encoder 20 to reduce quantization artifacts in the encoded video data. Specifically, the rate controller 120 and the predictor, quantizer, and reconstructor component 125 can reduce such quantization artifacts when the transitions from complex to flat regions are identified.

The rate controller 120 determines a set of coding parameters, e.g., a QP. The QP may be adjusted by the rate controller 120 based on the buffer fullness of the rate buffer 150 and the image activity of the video data in order to maximize picture quality for a target bit rate, which ensures that the rate buffer 150 does not overflow or underflow. The rate controller 120 also selects a particular coding option (e.g., a particular mode) for each block of the video data in order to achieve the optimal rate-distortion performance.
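A minimal sketch of the buffer-fullness calculation above, together with the kind of QP adjustment a rate controller might derive from it, is given below. The thresholds and the adjustment rule are illustrative assumptions for exposition, not values taken from the DSC specification.

```c
#include <stdint.h>
#include <stdio.h>

/* Buffer fullness as a percentage, per BF = 100 * current / max. */
static int buffer_fullness(uint32_t current_bits, uint32_t max_bits) {
    return (int)((current_bits * 100ULL) / max_bits);
}

/* Illustrative rate-control reaction: raise QP (coarser quantization)
 * as the buffer fills, lower it as the buffer drains. */
static int adjust_qp(int qp, int bf) {
    if (bf > 85 && qp < 16) qp++;  /* nearing overflow: spend fewer bits */
    if (bf < 15 && qp > 0)  qp--;  /* nearing underflow: spend more bits */
    return qp;
}

int main(void) {
    int qp = 8;
    int bf = buffer_fullness(9000, 10000);   /* buffer is 90% full */
    qp = adjust_qp(qp, bf);
    printf("BF=%d%% QP=%d\n", bf, qp);       /* prints BF=90% QP=9 */
    return 0;
}
```

Keeping BF away from both extremes is exactly the overflow/underflow constraint described for the rate buffers 150 and 155 above.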
The rate controller 120 minimizes the distortion of the reconstructed images such that the rate controller 120 satisfies the bit-rate constraint, i.e., the overall actual coding rate fits within the target bit rate.

The predictor, quantizer, and reconstructor component 125 may perform at least three encoding operations of the video encoder 20. The predictor, quantizer, and reconstructor component 125 may perform prediction in a number of different modes. One example prediction mode is a modified version of median-adaptive prediction. Median-adaptive prediction may be implemented as in the lossless JPEG standard (JPEG-LS). The modified version of median-adaptive prediction that may be performed by the predictor, quantizer, and reconstructor component 125 may allow for the parallel prediction of three consecutive sample values. Another example prediction mode is block prediction. In block prediction, samples are predicted from previously reconstructed pixels in the line above or to the left in the same line. In some embodiments, the video encoder 20 and the video decoder 30 may both perform an identical search on reconstructed pixels to determine the block prediction usage, and thus, no bits need to be sent in the block prediction mode. In other embodiments, the video encoder 20 may perform the search and signal block prediction vectors in the bitstream, such that the video decoder 30 need not perform a separate search. A midpoint prediction mode may also be implemented, in which samples are predicted using the midpoint of the component range. The midpoint prediction mode may enable the bounding of the number of bits required for compressed video even in the worst-case samples. As further discussed below with reference to FIG. 7, the predictor, quantizer, and reconstructor component 125 may be configured to code (e.g., encode or decode) the block of video data (or any other unit of prediction) by performing the method illustrated in FIG. 7.

The predictor, quantizer, and reconstructor component 125 also performs quantization. For example, quantization may be performed via a power-of-2 quantizer that may be implemented using a shifter. It is noted that other quantization techniques may be implemented in lieu of the power-of-2 quantizer. The quantization performed by the predictor, quantizer, and reconstructor component 125 may be based on the QP determined by the rate controller 120. (A software sketch of the median-adaptive predictor and a power-of-2 quantizer appears below.) Finally, the predictor, quantizer, and reconstructor component 125 also performs reconstruction, which includes adding the inverse-quantized residual to the predicted value and ensuring that the result does not fall outside the valid range of sample values.

It is noted that the above-described example approaches to prediction, quantization, and reconstruction performed by the predictor, quantizer, and reconstructor component 125 are merely illustrative, and that other approaches may be implemented. It is also noted that the predictor, quantizer, and reconstructor component 125 may include subcomponents for performing the prediction, the quantization, and/or the reconstruction. It is further noted that the prediction, the quantization, and/or the reconstruction may be performed by several separate encoder components in lieu of the predictor, quantizer, and reconstructor component 125.

The line buffer 130 holds (e.g., stores) the output from the predictor, quantizer, and reconstructor component 125 so that the predictor, quantizer, and reconstructor component 125 and the indexed color history 135 can use the buffered video data.
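The promised sketch follows. The predictor is the classical JPEG-LS median (MED) rule, which the modified median-adaptive prediction above is based on; the rounding offset in the quantizer is an illustrative choice, and none of this reproduces the exact DSC arithmetic.

```c
#include <stdio.h>

/* JPEG-LS style median-adaptive predictor (MED): predict a sample from
 * the left (a), above (b), and above-left (c) reconstructed samples. */
static int med_predict(int a, int b, int c) {
    int mn = a < b ? a : b;
    int mx = a < b ? b : a;
    if (c >= mx) return mn;        /* likely horizontal/vertical edge */
    if (c <= mn) return mx;
    return a + b - c;              /* smooth region: planar estimate  */
}

/* Power-of-2 quantizer/dequantizer implemented with shifts. */
static int quantize(int residual, int qp) {
    int sign = residual < 0 ? -1 : 1;
    int mag  = residual < 0 ? -residual : residual;
    return sign * ((mag + (qp ? (1 << (qp - 1)) : 0)) >> qp);
}

static int dequantize(int level, int qp) { return level << qp; }

int main(void) {
    int pred = med_predict(100, 104, 101);  /* planar case: 103 */
    int res  = 111 - pred;                  /* actual sample is 111 */
    int lvl  = quantize(res, 2);
    printf("pred=%d recon=%d\n", pred, pred + dequantize(lvl, 2));
    return 0;
}
```

The reconstruction step in the sketch (prediction plus dequantized residual) is the same add-and-clamp pattern the component 125 performs, with the clamp omitted for brevity.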
The indexed color history 135 stores recently used pixel values. These recently used pixel values may be referenced directly by the video encoder 20 via a dedicated syntax.

The entropy encoder 140 encodes the prediction residuals and any other data (e.g., indices identified by the predictor, quantizer, and reconstructor component 125) received from the predictor, quantizer, and reconstructor component 125 based on the indexed color history 135 and the flatness transitions identified by the flatness detector 115. In some examples, the entropy encoder 140 may encode three samples per clock per substream encoder. The substream multiplexer 145 may multiplex the bitstream based on a headerless packet multiplexing scheme. This allows the video decoder 30 to run three entropy decoders in parallel, facilitating the decoding of three pixels per clock. The substream multiplexer 145 may optimize the order of the packets so that the video decoder 30 can decode the packets efficiently. It should be noted that different approaches to entropy coding may be implemented, which may facilitate decoding a power-of-2 number of pixels per clock (e.g., 2 pixels/clock or 4 pixels/clock).

DSC video decoder

FIG. 2B is a block diagram illustrating an example of a video decoder 30 that may implement the techniques described in this disclosure. The video decoder 30 may be configured to perform some or all of the techniques of this disclosure. In some instances, the techniques described in this disclosure may be shared among the various components of the video decoder 30. In some examples, a processor (not shown) may additionally or alternatively be configured to perform some or all of the techniques described in this disclosure.

For purposes of explanation, the present disclosure describes the video decoder 30 in the context of DSC decoding. However, the techniques of this disclosure may be applicable to other coding standards or methods.

In the example of FIG. 2B, the video decoder 30 includes a plurality of functional components. The functional components of the video decoder 30 include a rate buffer 155, a substream demultiplexer 160, an entropy decoder 165, a rate controller 170, a predictor, quantizer, and reconstructor component 175, an indexed color history 180, a line buffer 185, and a color-space converter 190. The illustrated components of the video decoder 30 are similar to the corresponding components described above with respect to the video encoder 20 in FIG. 2A. Thus, each of the components of the video decoder 30 may operate in a manner similar to the corresponding component of the video encoder 20 as described above.

Transform decoding

In some embodiments of the invention, a video encoder (e.g., video encoder 20) may apply one or more transforms to pixel values or residual values to achieve additional compression. For example, an encoder (e.g., video encoder 20) may apply one or more transforms to a block of video data (e.g., pixel values or residual values) and obtain a block of transform coefficients (e.g., a transform coefficient block corresponding to the block of video data). As discussed above, after generating the transform coefficient block, the encoder may perform a quantization process on the transform coefficient block, in which the transform coefficients are quantized to potentially reduce the amount of data used to represent them, thereby providing further compression.

Similarly, a video decoder (e.g., video decoder 30) may receive a bitstream generated by an encoder, where the bitstream includes a coded representation of the video data encoded by the encoder.
When the decoder receives the bitstream, the decoder parses the bitstream and extracts syntax elements from it, and pictures of the video data may be reconstructed based on the syntax elements extracted from the bitstream. The process of reconstructing the video data based on the syntax elements may be generally reciprocal to the process performed by the encoder to produce the syntax elements. For example, the decoder inverse quantizes blocks of transform coefficients in the bitstream and performs an inverse transform on the transform coefficient blocks to reconstruct the blocks of coded video data in the bitstream.

In some embodiments of the invention, an encoder (e.g., video encoder 20) performs several transforms of different sizes (e.g., four different transform sets) and selects the one that performs best for a particular block or portion of the image or video data (e.g., the one closest to the desired rate-distortion performance). For example, the encoder may perform (i) a single 16-point transform, (ii) two 8-point transforms, (iii) one 8-point transform and two 4-point transforms, or (iv) four 4-point transforms, where each option operates on the same number of inputs (e.g., pixel data). Thus, each 16-pixel block may be encoded using a transform mode, and the 16 pixels to be transform coded may be further partitioned into smaller block sizes (e.g., partitions of 4 pixels, 8 pixels, or any other size) before entering the transform function. In the example of a 16-pixel block, the 16-pixel block may represent (i) a single row of 16 pixels in the coded picture in the bitstream, (ii) two rows of 8 pixels in the coded picture in the bitstream, (iii) four rows of 4 pixels in the coded picture in the bitstream, or (iv) any other grouping of sixteen pixels in the coded picture in the bitstream. FIGS. 5A-5D illustrate how the pixel data can be divided into partitions when more than one transform is to be performed on the pixel data.

After performing the various sets of transforms, the encoder can analyze the distortion and bit rate associated with each option and select one of the options based on the desired performance. The encoder may indicate the selected option to the decoder by signaling a flag or syntax element in the coded bitstream.

Partition format

In some embodiments, an encoder (e.g., video encoder 20) partitions the pixels in a picture or frame to be coded into smaller partitions (e.g., 16-pixel blocks) for performing transform-based image compression. For example, the partition formats (also referred to herein as transform partition types) used in a given coding scheme may consist of: (i) one 16-pixel block, (ii) two 8-pixel blocks, (iii) a mixture of one 8-pixel block and two 4-pixel blocks, and (iv) four 4-pixel blocks, as illustrated in FIG. 3. In the example of FIG. 3, a block 302 containing 16 pixels is input to transform blocks 304, 306, 308, and 310, which correspond to different transform combinations. The encoder (e.g., video encoder 20) then calculates the distortion cost 312 corresponding to each of the transforms associated with transform blocks 304, 306, 308, and 310. In FIG. 3, each of the transform blocks 304, 306, 308, and 310 represents a different transform partition type. For example, transform block 304 corresponds to a single 16-point transform, transform block 306 corresponds to two 8-point transforms, transform block 308 corresponds to a mixture of one 8-point transform and two 4-point transforms, and transform block 310 corresponds to four 4-point transforms.
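For concreteness, the four transform partition types of FIG. 3 and an encoder-side selection over them can be sketched as follows. The cost model (distortion plus a lambda-weighted bit cost) is an assumption for illustration; the disclosure itself only states that distortion and bitstream costs are both considered.

```c
#include <stdint.h>

/* The four transform partition types for a 16-pixel block (FIG. 3). */
typedef enum {
    PART_16    = 0,  /* one 16-point transform                 */
    PART_8_8   = 1,  /* two 8-point transforms                 */
    PART_8_4_4 = 2,  /* one 8-point + two 4-point transforms   */
    PART_4x4   = 3   /* four 4-point transforms                */
} PartitionType;

/* Hypothetical selection logic: pick the partition with the lowest combined
 * distortion/bit cost; the result is signaled as the partition selection
 * flag or syntax element. */
static PartitionType select_partition(const uint64_t distortion[4],
                                      const uint64_t bits[4],
                                      uint64_t lambda) {
    PartitionType best = PART_16;
    uint64_t best_cost = distortion[0] + lambda * bits[0];
    for (int p = 1; p < 4; p++) {
        uint64_t c = distortion[p] + lambda * bits[p];
        if (c < best_cost) { best_cost = c; best = (PartitionType)p; }
    }
    return best;
}
```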
In addition, the transform coefficients corresponding to each of the transform blocks 304, 306, 308, and 310 are coded into a bitstream at the bitstream encoding block 314, and the bitstream cost 316 corresponding to each is determined. Based on the distortion cost 312 and the bitstream cost 316 corresponding to each of the transform blocks 304, 306, 308, and 310, the selection logic 318 of the encoder selects the transform partition type of one of the transform blocks 304, 306, 308, or 310, which is indicated by a partition selection flag or syntax element 320. Thus, in some embodiments, the selection logic 318 selects the transform partition type that produces the lowest encoded pixel distortion at the lowest bitstream cost. For example, when coding a given picture, the encoder may determine that one portion of the picture (e.g., a 16-pixel block within the picture) is optimally coded using two 8-pixel blocks (e.g., 8-point transforms), and that another portion of the picture is optimally coded using four 4-pixel blocks (e.g., 4-point transforms). Based on the value of the partition selection flag or syntax element 320, the multiplexer 322 outputs the bitstream 324 for transmission to the decoder.

On the decoder side, as illustrated in FIG. 4, a decoder (e.g., video decoder 30) or a component thereof (e.g., bitstream decoding 404) uses the transform partition information contained in the input bitstream 402 (e.g., the transform partition type indicated by the partition selection flag or syntax element 320) to select one or more inverse transforms (e.g., the inverses of the transforms associated with the transform block 304, 306, 308, or 310 selected by the encoder 20) to use when decoding the compressed pixel data. For example, the decoder extracts the transform coefficients and the partition selection signal 414 from the input bitstream 402. The transform coefficients are passed to all four inverse transform blocks 406, 408, 410, and 412, and the partition selection signal 414 is used to select the desired inverse transform.

In some existing hardware implementations of the decoder, a separate inverse transform block is used for each partition type. For example, if the encoder is configured to select from four different partition types as illustrated in FIG. 3, a corresponding decoder configured to decode a bitstream generated by the encoder (e.g., input bitstream 402) also contains four sets of hardware (e.g., registers, adders, subtractors, etc. that are not shared among each other), each corresponding to one of the transforms 406, 408, 410, and 412 as illustrated in FIG. 4. Each inverse transform produces a set of output values that are fed to the multiplexer 416, and the decoder selects among them based on the partition selection signal 414 (or another flag or syntax element indicating the partition type used to encode a given block) to obtain the pixel values of the 16-pixel block 418.

The hardware implementation shown in FIG. 4 would require seven independent inverse transform blocks to decode the four partition structures (e.g., one 16-pixel block; two 8-pixel blocks; one 8-pixel block and two 4-pixel blocks; and four 4-pixel blocks).
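A minimal software model of this naive per-partition-type dispatch is sketched below; the placeholder inverse_n() (an unnormalized Hadamard, chosen purely for illustration) stands in for the independent inverse transform blocks 406-412, and in hardware each call site would be a separate instance.

```c
#include <string.h>

/* Placeholder n-point inverse transform standing in for blocks 406-412. */
static void inverse_n(const int *in, int *out, int n) {
    memcpy(out, in, (size_t)n * sizeof(int));
    for (int span = n / 2; span >= 1; span /= 2)
        for (int i = 0; i < n; i += 2 * span)
            for (int j = i; j < i + span; j++) {
                int a = out[j], b = out[j + span];
                out[j]        = a + b;
                out[j + span] = a - b;
            }
}

/* Naive dispatch mirroring FIG. 4: every partition type owns its own
 * independent inverse transform instance(s). */
static void naive_inverse(const int coeff[16], int pixels[16], int part_sel) {
    switch (part_sel) {
    case 0: inverse_n(coeff, pixels, 16); break;
    case 1: inverse_n(coeff, pixels, 8);
            inverse_n(coeff + 8, pixels + 8, 8); break;
    case 2: inverse_n(coeff, pixels, 8);
            inverse_n(coeff + 8, pixels + 8, 4);
            inverse_n(coeff + 12, pixels + 12, 4); break;
    default:
        for (int i = 0; i < 16; i += 4)
            inverse_n(coeff + i, pixels + i, 4);
    }
}
```

Counting hardware instances, this arrangement needs one 16-point, two 8-point, and four 4-point blocks, i.e., the seven independent inverse transform blocks noted above.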
To reduce the implementation cost on the decoder side (e.g., the chip area used to implement the decoder), the inverse transform function may instead be implemented by reorganizing and reusing some of the arithmetic operations among the four different transform types (e.g., 16, 8+8, 8+4+4, and 4+4+4+4).

Hardware implementation plan

As discussed above, prior methods utilize independent transform functions (e.g., seven independent transform functions in the example of FIG. 4) to decode a transform-coded bitstream that includes multiple partition sizes. However, decoding an incoming transform-coded bitstream using multiple independent inverse transform blocks adversely affects the cost-effectiveness of the decoder, as hardware implementations are particularly sensitive to chip area and/or implementation cost on the decoder side. Therefore, an improved method is needed for decoding a transform-coded bitstream that involves multiple transform partition sizes in a more cost-effective manner.

For example, an example implementation of a 16-point transform may include adders and subtractors. These adders and subtractors may all be needed to perform the 16-point transform (or inverse transform), but the same adders and subtractors (or other hardware of the 16-point transform) may also be used to perform other transforms, such as 8-point and 4-point transforms, without having to add the full transform structures necessary to implement those 8-point and 4-point transforms separately and independently from the hardware used to implement the 16-point transform. In other words, by reusing parts of the hardware used to implement the various transforms that the encoder and/or decoder may need to perform, the hardware requirements for implementing these transforms may be reduced.

Selective bypass, re-routing, or reordering

In some embodiments of the invention, other types of transforms are implemented from the 16-point transform by selectively rerouting or bypassing certain portions of the 16-point transform and/or rerouting or reordering inputs, outputs, or other intermediate nodes in the 16-point transform. For example, one or more multiplexers may be added to the 16-point transform such that one portion of the hardware is bypassed for the 4-point transforms and another portion of the hardware is bypassed for the 8-point transforms. While adding these multiplexers adds cost/chip area, the added cost and/or chip area of these multiplexers is still far less than that of fully implementing a separate hardware transform for each transform partition type.

Reusing the hardware structure

Because the partition type used for encoding is explicitly signaled (e.g., via the partition selection signal 414 of FIG. 4), and because, on the decoder side, only a single partition type of inverse transform needs to be performed for each transform block of 16 input coefficients, the decoder implementation cost for the partitioned transforms can be reduced by reusing and sharing portions of the hardware of the largest transform type. For example, in some implementations, a 16-point inverse Hadamard transform is used to produce all four transform partition types (e.g., 16, 8+8, 8+4+4, and 4+4+4+4). In some embodiments, the inverse transforms other than the 16-point transform are implemented without using any extra adders or subtractors.
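The sharing idea can be illustrated with a software sketch of a 16-point unnormalized Hadamard pipeline whose wide butterfly stages are skipped per partition type; because skipping the span-8 (and span-4) levels leaves independent 8-point (and 4-point) Hadamard transforms in place, the same adders and subtractors serve every type. The exact wiring, reordering, and normalization of FIGS. 6A-6D are omitted here, so this illustrates the bypass mechanism, not a bit-exact decoder (the Hadamard transform is its own inverse up to scaling).

```c
/* One level of add/subtract butterflies over v[lo..hi), reusing the same
 * adders and subtractors at every level. */
static void butterfly_stage(int v[16], int lo, int hi, int span) {
    for (int i = lo; i < hi; i += 2 * span)
        for (int j = i; j < i + span; j++) {
            int a = v[j], b = v[j + span];
            v[j]        = a + b;   /* shared adder      */
            v[j + span] = a - b;   /* shared subtractor */
        }
}

/* part_sel: 0 = [16], 1 = [8,8], 2 = [8,4,4], 3 = [4,4,4,4].
 * Skipping the wide stages leaves independent smaller transforms in place,
 * mirroring the stage-bypass idea of FIGS. 6A-6D. */
static void shared_inverse(int v[16], int part_sel) {
    if (part_sel == 0)
        butterfly_stage(v, 0, 16, 8);  /* only the full 16-point needs span 8  */
    if (part_sel <= 1)
        butterfly_stage(v, 0, 16, 4);  /* span 4 everywhere for [16] and [8,8] */
    else if (part_sel == 2)
        butterfly_stage(v, 0, 8, 4);   /* span 4 only under the 8-point part   */
    butterfly_stage(v, 0, 16, 2);      /* spans 2 and 1 run for every type     */
    butterfly_stage(v, 0, 16, 1);
}
```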
The implementation cost and/or chip area of the decoder can thereby be reduced.

In some embodiments of the invention, each transform partition type is implemented by reusing the arithmetic elements of the largest transform type. This allows all the required types of transforms to be implemented while maintaining a low implementation area/cost, especially on the area/cost-critical decoder side. Although certain aspects of the invention are described with respect to the decoder side, the techniques described in this disclosure are also applicable to the encoder side (e.g., by reusing and sharing the arithmetic functions of the largest transform type to implement the other types of transforms).

For each transform partition type, the full 16-point inverse Hadamard transform is reconfigured to perform the following inverse transforms using common arithmetic operations: (i) one 16-point inverse transform, (ii) two 8-point inverse transforms, (iii) one 8-point inverse transform and two 4-point inverse transforms, and (iv) four 4-point inverse transforms. Example implementations of these transforms are illustrated in FIGS. 6A-6D, respectively.

For each partition type, an input-to-input' stage and an output'-to-output stage are used to reorder the input and output data for the corresponding transform mode. In addition, the inverse transform block is reconfigured to provide the four partition types described above by bypassing some of the internal hardware stages of the full 16-point inverse transform function.

Example implementation: 8 + 8

For the transform partition type [8,8], as illustrated in FIG. 6B, the input data is placed in the 16-point input hold buffer as two concatenated 8-point samples. The input data is reordered and placed in the intermediate input' stage (e.g., which may be a register or buffer that holds the transform coefficient values). The arithmetic between stages a and b is bypassed (with some reordering), and the final output'-to-output stage is configured to reorder the final output back into two concatenated 8-point data structures.

Example implementation: 8 + 4 + 4

For the mixed transform partition type [8,4,4], as illustrated in FIG. 6C, the input data is placed in the input buffer with the 8-point sample data in the first eight positions, followed by the two 4-point data. As with the other partition types, the input-to-input' stage is used to reorder the data, stages a to b include bypass and reordering, and in the stage from c to output', only the eight values belonging to the 4-point inverse transforms are bypassed. The final output' data is reordered to produce the [8,4,4] data structure.

Example implementation: 4 + 4 + 4 + 4

For the partition type [4,4,4,4], as illustrated in FIG. 6D, the input data is placed in the input hold buffer as four concatenated 4-point samples. The input data is reordered and placed in the intermediate input' stage. The arithmetic between stages a and b and between stages c and output' is bypassed, and the output'-to-output stage is configured to reorder the output' data back into four 4-point data structures.

An example flow chart for reusing a transform hardware structure

Referring to FIG. 7, an example procedure for reusing a transform structure for multi-partition transforms will be described. The steps illustrated in FIG. 7 may be performed by a video decoder (e.g., video decoder 30 in FIG. 2B) or a component thereof.
For convenience, the method 700 is described as being performed by a decoder, which may be the video decoder 30 or another component. Although the method 700 is described in the context of a video decoder, the techniques described herein, such as reusing a transform structure for multi-partition transforms, may extend to video encoders.

The method 700 begins at block 701. At block 705, the decoder determines a transform partition type associated with a block of video data in the coded bitstream. The block is associated with a plurality of transform coefficients determined at least in part by applying one or more transform functions to a plurality of pixel values associated with the block. In some embodiments, the transform partition type associated with the block of video data indicates the transform (e.g., one or more transform functions) that was performed to obtain the plurality of transform coefficients. For example, the transform partition type may indicate that a single 16-point transform was performed on a video block of 16 pixels. In another example, the transform partition type may indicate that two 8-point transforms were performed on the first and second sets of 8 pixels in the block. In yet another example, the transform partition type may indicate that a single 8-point transform was performed on 8 of the 16 pixels in the block and two 4-point transforms were performed on the corresponding sets of 4 pixels among the remaining 8 pixels in the block. In yet another example, the transform partition type may indicate that four 4-point transforms were performed on the first, second, third, and fourth sets of 4 pixels in the block. In some embodiments, the transform partition type may be signaled as a flag or syntax element in the bitstream. For example, the values "00", "01", "10", and "11" may respectively indicate that the transforms performed for a video data block containing 16 values are (i) a single 16-point transform, (ii) two 8-point transforms, (iii) one 8-point transform and two 4-point transforms, and (iv) four 4-point transforms.

At block 710, the decoder determines, based on the transform partition type, the order in which the plurality of transform coefficients are to be input to one or more inverse transform functions corresponding to the one or more transform functions. The one or more inverse transform functions may each include one or more hardware stages comprising adders, subtractors, and/or multiplexers. In some embodiments, determining the order may include rearranging the transform coefficients based on the transform partition type (e.g., from the order in which the transform coefficients appear in the bitstream to a different order). In some embodiments, only a subset of the transform coefficients, but not all of them, are rearranged based on the transform partition type.

In one embodiment, the order in which the transform coefficients are to be input to the one or more inverse transform functions is the same as the order in which the transform coefficients are signaled or received in the bitstream. For example, as shown in FIG. 6A, based on a determination that the transform partition type corresponds to a single 16-point transform, the decoder may determine that the order in which the transform coefficients will be input to the one or more inverse transform functions (e.g., "input'") will be the same as the order in which the transform coefficients are signaled or received in the bitstream (e.g., "input").
In another example, based on a determination that the transform partition type corresponds to two 8-point transforms, the decoder may determine the order in which the transform coefficients will be input to the one or more inverse transform functions (e.g., "input'") by rearranging the transform coefficients from the order in which they are signaled or received in the bitstream (e.g., "input"), as shown in FIG. 6B. In this example, the first four coefficients remain unchanged, the next four coefficients are reversed and placed at the end of the 16-coefficient block, and the last eight coefficients are each moved up by four positions in the 16-coefficient block.

In yet another example, based on a determination that the transform partition type corresponds to one 8-point transform and two 4-point transforms, the decoder may determine the order in which the transform coefficients will be input to the one or more inverse transform functions (e.g., "input'") by rearranging the transform coefficients from the order in which they are signaled or received in the bitstream (e.g., "input"), as shown in FIG. 6C. In this example, the coefficients 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15 of the 16-coefficient block ("input") are rearranged into the order 0, 1, 2, 3, 8, 9, 12, 13, 11, 10, 15, 14, 7, 6, 5, and 4 ("input'"). In yet another example, based on a determination that the transform partition type corresponds to four 4-point transforms, the decoder may determine the order in which the transform coefficients will be input to the one or more inverse transform functions (e.g., "input'") by rearranging the transform coefficients from the order in which they are signaled or received in the bitstream (e.g., "input"), as shown in FIG. 6D. In this example, the coefficients 0 through 15 of the 16-coefficient block ("input") are rearranged into the corresponding order shown in FIG. 6D ("input'").

At block 715, the decoder obtains a plurality of output values at least in part by inputting the plurality of transform coefficients to the one or more inverse transform functions in the determined order. In some embodiments, the one or more inverse transform functions comprise one or more stages of arithmetic and/or rearrangement operations. For example, as shown in FIG. 6A, the stage between "input'" and "a" includes, for each transform coefficient in the 16-coefficient block, an addition (indicated by two solid lines) or a subtraction (indicated by a solid line and a dashed line). As shown in FIG. 6A, the additional stages between "a" and "b", between "b" and "c", and between "c" and "output'" each include a plurality of arithmetic operations (e.g., 16 separate operations each). Based on the transform partition type, some of the stages can be bypassed, as shown in FIG. 6B. For example, although the two 8-point inverse transforms reuse some or all of the arithmetic operations of the 16-point inverse transform between "input'" and "a", between "b" and "c", and between "c" and "output'" (e.g., as shown in FIG. 6A), the stage between "a" and "b" bypasses the arithmetic operations and rearranges the variables in a given order (e.g., based on the transform partition type). In some embodiments, one or more stages may be bypassed for a portion, but not all, of the 16 coefficients/variables used in the inverse transform. For example, as shown in FIG. 6C, the arithmetic operations of the 16-point inverse transform (e.g., as shown in FIG. 6A) are performed for the first four variables (e0, e1, f1, and f0) and the last four variables (e2, e3, f3, and f2), but are bypassed for the middle eight variables.
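Expressed in code, each such reordering is just a fixed permutation applied between the bitstream order ("input") and the datapath order ("input'"). The sketch below uses the [8,4,4] permutation quoted above; whether the table maps source-to-destination or destination-to-source in the figures is an assumption, and here out[k] = in[perm[k]].

```c
/* The [8,4,4] input reordering quoted above, expressed as a lookup table. */
static const int perm_8_4_4[16] =
    { 0, 1, 2, 3, 8, 9, 12, 13, 11, 10, 15, 14, 7, 6, 5, 4 };

/* Apply a reordering stage: "input" -> "input'" (or "output'" -> "output"). */
static void apply_reorder(const int in[16], int out[16], const int perm[16]) {
    for (int k = 0; k < 16; k++)
        out[k] = in[perm[k]];
}
```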
The output values generated by inputting the transform coefficients to the one or more inverse transform functions in the determined order may be further rearranged based on the transform partition type. As illustrated in FIGS. 6A-6D, the output values may be rearranged differently (e.g., from "output'" to "output") based on the transform partition type.

At block 720, the decoder decodes the block of video data in the coded bitstream based at least in part on the plurality of output values. For example, the output values may be the original pixel values. In another example, the output values may be residual values, and motion compensation may further need to be performed to obtain the corresponding pixel values. The method 700 ends at block 725.

In the method 700, one or more of the blocks shown in FIG. 7 may be removed (e.g., not performed), and/or the order in which the blocks are performed may be changed. In some embodiments, additional blocks may be added to the method 700. For example, the decoder may further selectively bypass one, some, or all stages of the inverse transforms. In some embodiments, a single stage of a transform function contains an arithmetic operation for each transform coefficient input to the transform function. In another example, the output values may be rearranged before being used to decode the block of video data. Therefore, the embodiments of the present disclosure are not limited to or by the example shown in FIG. 7, and other variations may be implemented without departing from the spirit of the present disclosure.

FIG. 8 illustrates an example of transform partitioning on the decoder side according to aspects described in this disclosure. After receiving the bitstream 802, the decoder decodes the transform coefficients and the partition selection signal 806 at block 804. The transform coefficients are input to a transform 808 that is configured to perform the various inverse transforms. The transform 808 performs the appropriate inverse transform based on the received partition selection signal 806 and outputs the 16-pixel block 810.

Other considerations

Any of a variety of different technologies and techniques may be used to represent the information and signals disclosed herein. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The various illustrative logical blocks and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof. The techniques may be implemented in any of a variety of devices, such as general purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses, including application in wireless communication device handsets and other devices. Any features described as devices or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized, at least in part, by a computer-readable data storage medium comprising program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as random access memory (RAM) (e.g., synchronous dynamic random access memory (SDRAM)), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, magnetic or optical data storage media, and the like. Additionally or alternatively, the techniques may be realized at least in part by a computer-readable communication medium, such as a propagated signal or wave, that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.

The program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term "processor", as used herein, may refer to any of the foregoing structures, any combination of the foregoing structures, or any other structure or apparatus suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software or hardware configured for encoding and decoding, or incorporated in a combined video encoder-decoder (codec). Moreover, the techniques could be fully implemented in one or more circuits or logic elements.

The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including wireless handsets, integrated circuits (ICs), or sets of ICs (e.g., chipsets).
Various components or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit, together with suitable software and/or firmware, or provided by a collection of interoperating hardware units, including one or more processors as described above.

Although the foregoing has been described in connection with various embodiments, features or elements from one embodiment may be combined with other embodiments without departing from the teachings of the invention. However, the combinations of features between the respective embodiments are not necessarily limited thereto. Various embodiments of the present invention have been described. These and other embodiments are within the scope of the following claims. |
Disclosed embodiments relate to a prefetcher for delinquent irregular loads. In one example, a processor includes a cache memory, fetch and decode circuitry to fetch and decode instructions from a memory, and execution circuitry including a binary translator (BT) to respond to the decoded instructions by storing a plurality of decoded instructions in a BT cache, identifying a delinquent irregular load (DIRRL) among the plurality of decoded instructions, determining whether the DIRRL is prefetchable, and, if so, generating a custom prefetcher to cause the processor to prefetch a region of instructions leading up to the prefetchable DIRRL. |
1. A processor comprising:
a cache memory;
fetch and decode circuitry for fetching and decoding instructions from memory; and
a binary translator (BT) for responding to the decoded instructions by:
storing a plurality of the decoded instructions in a BT cache;
identifying a delayed irregular load (DIRRL) among the stored instructions;
determining whether the DIRRL is prefetchable; and
if so, generating a custom prefetcher to cause the processor to prefetch a region of instructions leading up to the prefetchable DIRRL.
2. The processor of claim 1, wherein the DIRRL is a delayed load experiencing cache misses on more than a first threshold number of consecutive dynamic instances.
3. The processor of claim 2, wherein the DIRRL is an irregular load having at least a second threshold number of unique address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of the consecutive dynamic instances.
4. The processor of claim 3, wherein the execution circuit computes a post-slice between two consecutive dynamic instances of the DIRRL, and determines that the DIRRL is prefetchable when the post-slice comprises a loop composed entirely of non-memory operations or regular memory operations.
5. The processor of claim 4, wherein the custom prefetcher, when performing the prefetch, causes the processor to perform one or more critical loads among the post-slice before performing non-critical loads.
6. The processor of any one of claims 4-5, wherein the custom prefetcher causes the processor to prefetch a plurality of irregular loads, the plurality of irregular loads containing more instructions than are included in the post-slice.
7. The processor of any one of claims 1, 4, and 5, wherein the custom prefetcher comprises one or more prefetch hints among the plurality of instructions stored in the memory.
8. The processor of any one of claims 1, 4, and 5, wherein the custom prefetcher comprises a hardware prefetcher using the execution circuit.
9. The processor of any one of claims 1, 4, and 5, wherein the processor further comprises an execution circuit, and wherein the BT is separate from the execution circuit.
10. The processor of any one of claims 1, 4, and 5, wherein the processor further comprises an execution circuit, and wherein the BT is incorporated into the execution circuit.
11. A method performed by a processor, the processor comprising:
a cache memory;
fetch and decode circuitry for fetching and decoding instructions from memory; and
an execution circuit including a binary translator (BT) for responding to the decoded instructions by:
storing a plurality of the decoded instructions in a BT cache;
identifying a delayed irregular load (DIRRL) among the stored instructions;
determining whether the DIRRL is prefetchable; and
if so, generating a custom prefetcher to cause the processor to prefetch a region of instructions leading up to the prefetchable DIRRL.
12. The method of claim 11, wherein the DIRRL is a delayed load experiencing cache misses on more than a first threshold number of consecutive dynamic instances.
13. The method of claim 12, wherein the DIRRL is an irregular load having at least a second threshold number of unique address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of the consecutive dynamic instances.
14. The method of claim 13, wherein the execution circuit computes
a post-slice between two consecutive dynamic instances of the DIRRL, and determines that the DIRRL is prefetchable when the post-slice comprises a loop composed entirely of non-memory operations or regular memory operations.
15. The method of claim 14, wherein the custom prefetcher causes the processor to focus on one or more critical loads among the post-slice by enqueuing only the one or more critical loads while leaving the others unenqueued.
16. The method of any one of claims 14-15, wherein the custom prefetcher causes the processor to prefetch a plurality of irregular loads, the plurality of irregular loads containing more instructions than are included in the post-slice.
17. The method of any one of claims 14-15, wherein the custom prefetcher comprises one or more prefetch hints among the plurality of instructions stored in the memory.
18. The method of any one of claims 11, 14, and 15, wherein the custom prefetcher comprises a hardware prefetcher using the execution circuit.
19. The method of any one of claims 11, 14, and 15, wherein the custom prefetcher comprises one or more prefetch hint instructions to be executed using an existing instruction execution pipeline of the processor.
20. The method of any one of claims 11, 14, and 15, wherein the custom prefetcher comprises a hardware prefetcher using an existing execution cluster of the processor.
21. A machine-readable medium comprising code that, when executed, causes a machine to perform the method of any one of claims 11-20. |
Prefetcher for delayed irregular loads

Technical field

The field of the invention relates generally to computer processor architecture, and more specifically to prefetchers for delayed irregular loads.

Background

As out-of-order cores become wider and deeper, microarchitectural performance tends to become limited by two bottlenecks: cache misses and branch mispredictions. Data prefetching can improve the performance of many applications. Through a combination of hardware and software, prefetching data before it is actually needed can reduce the latency of memory accesses.

The impact of cache misses can be mitigated in a number of ways, including: 1) hiding miss latencies by using out-of-order execution; 2) customizing the cache replacement policy to better match application needs; and 3) prefetching memory locations before the actual demand occurs.

Load instructions can be categorized into several classes, including: a) constant loads, whose virtual address remains constant across multiple dynamic instances; b) strided loads, whose consecutive virtual addresses mostly form an arithmetic progression; and c) irregular loads, which are neither constant nor strided.

Moreover, as described herein, a load that frequently misses in the cache (i.e., more than a threshold number of times, such as 100, 1,000, 10,000, etc.) is referred to as a delayed load.

Prefetching delayed irregular loads remains an open challenge.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like reference numerals indicate similar elements, and in which:

FIG. 1A is a block diagram illustrating a processing component for executing instructions according to some embodiments;

FIG. 1B is a block diagram illustrating a processing component for executing instructions according to some embodiments;

FIG. 2 is a block diagram of a system for generating an application-specific custom prefetcher according to some embodiments;

FIG. 3A is a block flow diagram of operations performed by a processor to generate an application-specific custom prefetcher, according to some embodiments;

FIG. 3B is a block flow diagram of operations performed by a processor to generate an application-specific custom prefetcher, according to some embodiments;

FIG. 4A is a code listing illustrating post-slices according to some embodiments;

FIG. 4B is a custom hardware prefetcher generated for the code listing of FIG. 4A according to some embodiments;

FIG. 4C is a custom software prefetcher generated for the code listing of FIG. 4A according to some embodiments;

FIG. 5A is a code listing of a region of instructions leading up to a delayed irregular load according to some embodiments;

FIG. 5B is a block flow diagram illustrating the instruction flow of the code listing in FIG. 5A according to some embodiments;

FIG. 6A is a code listing of a region of instructions leading up to a delayed irregular load according to some embodiments;

FIG. 6B is a block flow diagram illustrating the instruction flow of the code listing in FIG. 6A according to some embodiments;

FIG. 6C is another, more detailed block flow diagram illustrating the instruction flow of the code listing in FIG. 6A according to some embodiments;

FIG. 7A illustrates an exemplary application-specific custom software prefetcher according to some embodiments;

FIG. 7B illustrates an exemplary application-specific custom hardware prefetcher corresponding to the custom software prefetcher of FIG.
7A, according to some embodiments;

FIGS. 8A-8B are block diagrams illustrating a generic vector-friendly instruction format and instruction templates thereof according to some embodiments of the present invention;

FIG. 8A is a block diagram illustrating a generic vector-friendly instruction format and class A instruction templates according to some embodiments of the present invention;

FIG. 8B is a block diagram illustrating a generic vector-friendly instruction format and class B instruction templates according to some embodiments of the present invention;

FIG. 9A is a block diagram illustrating an exemplary specific vector-friendly instruction format according to some embodiments of the present invention;

FIG. 9B is a block diagram illustrating the fields of the specific vector-friendly instruction format that make up the full opcode field according to one embodiment;

FIG. 9C is a block diagram illustrating the fields of the specific vector-friendly instruction format that make up the register index field according to one embodiment;

FIG. 9D is a block diagram illustrating the fields of the specific vector-friendly instruction format that make up the augmentation operation field according to one embodiment;

FIG. 10 is a block diagram of a register architecture according to one embodiment;

FIG. 11A is a block diagram illustrating an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to some embodiments;

FIG. 11B is a block diagram illustrating an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to some embodiments;

FIGS. 12A-B illustrate block diagrams of a more specific exemplary in-order core architecture, which core would be one of several logic blocks in a chip (including other cores of the same type and/or different types);

FIG. 12A is a block diagram of a single processor core, along with its connection to an on-die interconnect network and its local subset of the level 2 (L2) cache, according to some embodiments;

FIG. 12B is an expanded view of part of the processor core in FIG. 12A according to some embodiments;

FIG. 13 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics, according to some embodiments;

FIGS. 14-17 are block diagrams of exemplary computer architectures;

FIG. 14 illustrates a block diagram of a system according to some embodiments;

FIG. 15 is a block diagram of a first more specific exemplary system according to some embodiments;

FIG. 16 is a block diagram of a second more specific exemplary system according to some embodiments;

FIG. 17 is a block diagram of a system on chip (SoC) according to some embodiments; and

FIG. 18 is a block diagram comparing the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to some embodiments.

Detailed description

In the following description, numerous specific details are set forth. It is understood, however, that some embodiments may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.

References to "one embodiment", "an embodiment", "example embodiment", etc.,
in the description indicate that the described embodiment may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is within the knowledge of those skilled in the art to effect such a feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.

The disclosed embodiments describe improved systems and methods designed to generate and apply a custom prefetcher specific to each delayed irregular load (DIRRL) — sometimes referred to as a hard-to-prefetch (HTP) load — using profiling and analysis performed by, for example, a runtime binary translator (BT). According to some embodiments, the BT analyzes loops in the backward slice (also known as the "post-slice") of the instructions leading up to a DIRRL to determine whether the DIRRL is prefetchable. If so, the BT either generates code containing prefetch hint instructions or configures a custom hardware prefetcher to prefetch one or more loads in the code region containing the DIRRL.

Unlike some prior methods, the disclosed embodiments avoid relying on a large amount of on-chip storage to record address patterns and attempt to predict future addresses. Apart from requiring an excessively large amount of on-chip memory, the difficulty of implementing such an approach in real hardware can be seen in its absence from commercially shipping processors.

The disclosed embodiments also avoid a resource-intensive, computation-based prefetch method that uses a separate helper thread to execute instructions from the program ahead of time to prefetch delayed loads. With such helper threads, it is difficult to ensure that the helper thread does not run too far ahead of the main thread, which would end up polluting the cache.

The disclosed embodiments improve the processor architecture and its prefetch performance in several ways. One advantage of the disclosed embodiments is the possibility of highly accurate prefetching with low overhead, because the generated prefetcher is part of the main thread itself, and no alternate thread context or large memory is needed. Moreover, since the prefetcher code (or custom hardware) is generated to stay a constant number of iterations ahead of the main computation, no additional effort is required to match the rates of the main thread and the prefetcher. Furthermore, cache and memory bandwidth interference is kept to a minimum, since prefetches are inserted only for the delayed irregular load instruction pointers (IPs).

In describing the disclosed embodiments below, a number of terms are defined herein and used as part of the description. As used herein, "delayed" loads are those load instructions whose number of first-level cache misses is greater than a threshold (e.g., 1K, 10K, etc.). As further used herein, the "address increment" of a load instruction is defined as the numerical difference between the virtual addresses of its successive dynamic instances. Furthermore, in some embodiments, "irregular" loads are those load instructions with at least ten unique address increments, where the ten most popular unique increments still cover less than 90% of all increments.
This definition distinguishes, in the context of the disclosed embodiments, between regular patterns (such as multidimensional arrays and other occasionally irregular, but mostly strided, loads) and irregular loads.

As described herein, and as illustrated with respect to FIG. 2, some disclosed embodiments consist of three parts: 1) a profiler, 2) an optimizer, and 3) a prefetcher.

Profiler

According to some embodiments, the profiler identifies delayed irregular loads. In some embodiments, the profiler is a combination of both hardware and binary translator (BT) software. In such embodiments, the hardware tracks data cache misses for each load instruction in flight to identify delayed loads. In some embodiments, the BT software runs a detailed address-delta profile on the identified delayed loads to classify them as regular or irregular loads.

When the disclosed embodiments are incorporated into a processor that already has a stride-detection prefetcher, the address increment information already available to the processor can also be passed to the BT software for analysis. Incorporating the disclosed embodiments into a processor can therefore improve the processor's prefetch performance without adding much cost, if any.

In some embodiments, the disclosed profiler operates online (while the profiled thread is running), and in other embodiments it operates offline (at a time other than the actual run time of the thread, such as by analyzing source code in advance).

Optimizer

Some disclosed embodiments further include an optimizer that analyzes the executing code to compute a post-slice for a delayed irregular load. As used herein, a post-slice (also known as a backward slice) of a delayed irregular load is the set of instructions in a program that execute before the delayed irregular load instruction and contribute, directly or indirectly, to the operands of the delayed irregular load instruction. Based on the address increments of the instructions in the post-slice (received from the profiler), the optimizer then identifies "prefetchable" loads as those whose post-slice consists entirely of non-memory operations or regular memory operations. The optimizer then generates a custom prefetcher for the region of code containing the prefetchable load.

Custom prefetcher

The custom prefetcher generated by the optimizer can be software (generated code with prefetch hint instructions; see, for example, FIG. 7A) or hardware (custom hardware that captures the data flow of the address calculation; see, for example, FIG. 7B).

It should be understood that, for simplicity, 1) the profiler, 2) the optimizer, and 3) the prefetcher are described herein as separate components. In practice, in some embodiments, all three of 1) the profiler, 2) the optimizer, and 3) the prefetcher are combined in what is broadly referred to as an "execution circuit" and portions thereof. The same goes for the binary translator described herein. In some embodiments, the binary translator is incorporated in the "execution circuit", while in other embodiments the BT is separate from and external to the execution circuit.

FIG. 1A is a block diagram illustrating a processing component for executing instructions according to some embodiments. As illustrated, the storage device 101 stores the instruction(s) 103 to be executed.
As described further below, in some embodiments, the computing system 100 is a SIMD processor that simultaneously processes multiple elements of packed data vectors, such as matrices.

In operation, the instruction(s) 103 are fetched from the storage device 101 by the fetch circuit 105. Each fetched instruction 107 is decoded by the decoding circuit 109. The instruction format(s) are illustrated and described with respect to FIGS. 8A-B and 9A-D. The decoding circuit 109 decodes each fetched instruction 107 into one or more operations. In some embodiments, this decoding includes generating a plurality of micro-operations to be performed by an execution circuit, such as the execution circuit 117. The decoding circuit 109 also decodes instruction suffixes and prefixes (if used). The execution circuit 117 is further described and illustrated below with respect to FIGS. 2-3, 11A-B, and 12A-B.

In some embodiments, the register renaming, register allocation, and/or scheduling circuit 113 provides functionality for one or more of the following: 1) renaming logical operand values to physical operand values (e.g., using a register alias table in some embodiments); 2) assigning status bits and flags to the decoded instructions; and 3) scheduling the decoded instructions 111 out of an instruction pool for execution on the execution circuit 117 (e.g., using a reservation station in some embodiments). Insofar as renaming, allocation, and/or scheduling may occur at different times, or not at all, the register renaming/register allocation and/or scheduling circuit 113 is optional, as indicated by its dashed border.

Registers (a register file) and/or memory 115 store data as operands of the decoded instructions 111 to be operated on by the execution circuit 117. In some embodiments, as shown, the execution circuit 117 includes a binary translator 118 that contains a BT cache 119 and is further illustrated and described with reference to FIGS. 2-3. The binary translator 118 may be incorporated into the execution circuit 117 (as shown) or may be external to the execution circuit 117 (as shown in FIG. 1B); it is therefore optional, as indicated by its dashed border, and can instead be implemented in software, or as a combination of hardware and software.

In some embodiments, the register file and/or memory 115 includes a cache hierarchy comprising L1, L2, and L3 (or LLC) caches. In some embodiments, the cache is unified; other embodiments have separate data and instruction caches. Exemplary register types include write mask registers, packed data registers, general purpose registers, and floating point registers, as further described and illustrated at least with respect to FIG. 10 below.

In some embodiments, the write-back circuit 120 commits the results of executing the decoded instructions 111. The execution circuit 117 and the system 100 are further illustrated and described with respect to FIGS. 2-3, 11A-B, and 12A-B.

FIG. 1B is a block diagram illustrating a processing component for executing instructions according to some embodiments. As illustrated, the storage device 151 stores the instruction(s) 153 to be executed. As described further below, in some embodiments, the computing system 150 is a SIMD processor that simultaneously processes multiple elements of packed data vectors, such as matrices.

In operation, the instruction(s) 153 are fetched from the storage device 151 by the fetch circuit 155. Each fetched instruction 157 is decoded by the decoding circuit 159.
The instruction format(s) are illustrated and described with respect to FIGS. 8A-B and 9A-D. The decoding circuit 159 decodes each fetched instruction 157 into one or more operations. In some embodiments, this decoding includes generating a plurality of micro-operations to be performed by an execution circuit, such as the execution circuit 167. The decoding circuit 159 also decodes instruction suffixes and prefixes (if used). The execution circuit 167 is further described and illustrated below with respect to FIGS. 2-3, 16, and 17.

In some embodiments, the register renaming, register allocation, and/or scheduling circuit 163 provides functionality for one or more of the following: 1) renaming logical operand values to physical operand values (e.g., using a register alias table in some embodiments); 2) assigning status bits and flags to the decoded instructions; and 3) scheduling the decoded instructions 161 out of an instruction pool for execution on the execution circuit 167 (e.g., using a reservation station in some embodiments). Insofar as renaming, allocation, and/or scheduling may occur at different times, or not at all, the register renaming/register allocation and/or scheduling circuit 163 is optional, as indicated by its dashed border.

Registers (a register file) and/or memory 165 store data as operands of the decoded instructions 161 to be operated on by the execution circuit 167. Also shown is a binary translator 168, which contains a BT cache 169 and is further illustrated and described with reference to FIGS. 2-3. The binary translator 168 may be incorporated into the execution circuit 167 (as shown in FIG. 1A) or may be external to the execution circuit 167 (as shown); it is therefore optional, as indicated by its dashed border, and can instead be implemented in software, or as a combination of hardware and software.

In some embodiments, the register file and/or memory 165 includes a cache hierarchy comprising L1, L2, and L3 (or LLC) caches. In some embodiments, the cache is unified; other embodiments have separate data and instruction caches. Exemplary register types include write mask registers, packed data registers, general purpose registers, and floating point registers, as further described and illustrated at least with respect to FIG. 15 below.

In some embodiments, the write-back circuit 170 commits the results of executing the decoded instructions 161. The execution circuit 167 and the system 150 are further illustrated and described with respect to FIGS. 2-3, 16, and 17.
In some embodiments, the memory is an L1 instruction cache. In other embodiments, the memory is an L2 or higher cache, and in still other embodiments, the memory is main memory. At 304, the processor is to decode the fetched instruction using a decoding circuit, such as decoding circuit 109 (FIG. 1). At 306, the processor is to use a binary translator within the execution circuitry to respond to the decoded instructions by performing operations 308-314. Specifically, at 308, the processor stores the decoded instruction stream in a BT cache. In some embodiments, the BT cache is separate from the memory 115 shown in FIG. 1. At 310, the processor is to track cache misses of load instructions to identify delayed loads. At 312, the processor is to profile the address increments of consecutive instances of the delayed loads to identify delayed irregular loads. At 314, the processor is to determine whether the DIRRL is prefetchable by analyzing the post-slice between successive dynamic instances of the DIRRL and, if so, to generate a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL. What is meant by "post-slice" as used herein is further illustrated and described with respect to FIGS. 4, 5A, and 6A.

FIG. 3B is a flow diagram of operations performed by a processor to generate an application-specific custom prefetcher, according to some embodiments. The processor executes process 350. As shown, at 352, the processor is to fetch instructions from memory using a fetch circuit, such as fetch circuit 105 (FIG. 1). In some embodiments, the memory is an L1 instruction cache. In other embodiments, the memory is an L2 or higher cache, and in still other embodiments, the memory is main memory. At 354, the processor is to decode the fetched instruction using a decoding circuit, such as decoding circuit 109 (FIG. 1). At 356, the processor is to use a binary translator to respond to the decoded instructions by performing operations 358-364. Specifically, at 358, the processor stores the decoded instruction stream in a BT cache. In some embodiments, the BT cache is separate from the memory 115 shown in FIG. 1. At 360, the processor is to track cache misses of load instructions to identify delayed loads. At 362, the processor is to profile the address increments of consecutive instances of the delayed loads to identify delayed irregular loads (DIRRL). At 364, the processor is to determine whether the DIRRL is prefetchable by analyzing the post-slice between successive dynamic instances of the DIRRL and, if so, to generate a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL. What is meant by "post-slice" as used herein is further illustrated and described with respect to FIGS. 4, 5A, and 6A.
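To make operations 310-314 and 360-364 more concrete, the following is a minimal C sketch of one way the profiling step could classify a load as a DIRRL. The thresholds, window size, and load_profile structure are illustrative assumptions, not part of the disclosed design:

/* Hedged sketch: classify a load as a delayed irregular load (DIRRL).
 * All names, thresholds, and the fixed window are illustrative. */
#include <stdbool.h>
#include <stdint.h>

#define MISS_THRESHOLD 16  /* first threshold: consecutive missing instances  */
#define DELTA_THRESHOLD 4  /* second threshold: distinct address increments   */
#define WINDOW          8  /* third threshold: instances the deltas must span */

struct load_profile {
    uint64_t last_addr;       /* address of the previous dynamic instance     */
    uint32_t miss_run;        /* consecutive instances that missed the cache  */
    int64_t  deltas[WINDOW];  /* address increments of recent instances       */
    uint32_t n;               /* instances recorded in the current window     */
};

/* Called by the profiler for each dynamic instance of a tracked load. */
static bool is_dirrl(struct load_profile *p, uint64_t addr, bool missed)
{
    p->miss_run = missed ? p->miss_run + 1 : 0;
    if (p->n < WINDOW)
        p->deltas[p->n++] = (int64_t)(addr - p->last_addr);
    p->last_addr = addr;
    if (p->n < WINDOW || p->miss_run < MISS_THRESHOLD)
        return false;                 /* not (yet) a delayed load */

    /* Count distinct deltas: a regular (strided) load has very few. */
    uint32_t distinct = 0;
    for (uint32_t i = 0; i < p->n; i++) {
        bool seen = false;
        for (uint32_t j = 0; j < i; j++)
            if (p->deltas[j] == p->deltas[i]) { seen = true; break; }
        if (!seen)
            distinct++;
    }
    return distinct >= DELTA_THRESHOLD;   /* delayed AND irregular */
}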
FIG. 4A is a code listing illustrating a post-slice according to some embodiments. As shown, code listing 400 defines an exemplary function foo(). For ease of discussion, the code listing is shown in relatively easy-to-understand C programming language syntax. Some embodiments (e.g., performing offline, ahead-of-time analysis of code segments) can generate application-specific custom prefetchers by analyzing code segments written in a high-level programming language such as C. However, some embodiments use a hardware binary translator to generate application-specific custom prefetchers dynamically and online, and analyze instructions in assembly code format. For example, code segments having an assembly instruction format are illustrated and described with respect to FIGS. 5A, 6A, and 7A. In some embodiments, the code to be analyzed includes macro-operations generated by a decoding circuit, such as the decoding circuit 109 (FIG. 1).

As shown, the instruction at line 0160 is the target instruction 402, and a "post-slice" leading to the target instruction is calculated. As used herein, a "post-slice" is the collection of all instructions that contribute, directly or indirectly, to the calculation performed by a target instruction. In some embodiments, the instructions to be included in the post-slice can be identified by working backwards from the target instruction 402 to identify all contributing instructions that make up the post-slice 404. For example, the instruction at line 0140 directly contributes to the target instruction 402 because it sets an operand of that instruction. Working backwards from the instruction at 0140, the instructions at 0110, 0090, and 0070 are included in the post-slice 404 because they indirectly contribute to the calculation of the target instruction 402. In some embodiments, as here, the target instruction 402 is part of a loop, and the post-slice is extended backwards but stops at the beginning of the current iteration.

It is worth noting that some of the instructions in the code listing 400 do not contribute, directly or indirectly, to the calculation of the target instruction 402 and are therefore not included in the post-slice 404. For example, the instructions at lines 0080, 0100, 0120, and 0150 are not included in the post-slice. The instructions at lines 0130 and 0170, even though they affect the operand 'c' used in the target instruction 402, are not included in the post-slice because the value of 'c' set by those instructions is overwritten before the target instruction 402 is reached.
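Code listing 400 itself appears only in the drawings; the following hypothetical C fragment maps invented statements onto the line numbers discussed above to illustrate how the in-slice and out-of-slice instructions relate. The mapping, variable names, and omitted lines ([0080], [0120]) are illustrative only and do not reproduce the actual listing:

/* Hypothetical loop in the spirit of code listing 400. */
int foo(const int *A, const int *B, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {  /* [0070] in slice: produces i        */
        int t = i * 2;             /* [0090] in slice: feeds the index   */
        sum += 1;                  /* [0100] not in slice                */
        int idx = B[t];            /* [0110] in slice: feeds 'c'         */
        int c = 0;                 /* [0130] not in slice: overwritten   */
        c = idx + 1;               /* [0140] in slice: sets the operand  */
        sum += 2;                  /* [0150] not in slice                */
        int v = A[c];              /* [0160] target instruction 402      */
        c = v;                     /* [0170] not in slice: this 'c' is   */
        sum += c;                  /*        overwritten at [0140] before */
    }                              /*        the target is reached again  */
    return sum;
}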
FIG. 4B is a custom hardware prefetcher generated for the code listing of FIG. 4A according to some embodiments. As shown, the custom hardware prefetcher 420 includes a first-in-first-out (FIFO) buffer 421 having pointers for a head 422 and a tail 424, in which the instructions at lines [0090], [0110], and [0140] of the code listing 400 (FIG. 4A) have been enqueued. Also shown are a custom hardware prefetch control circuit 426, an arithmetic/logic unit (ALU) 428, and a memory load unit (MLU) 430.

For simplicity, and to illustrate the operation of the disclosed embodiments, the instructions enqueued in the FIFO 421 are shown in the format of a high-level programming language such as Basic, C, Fortran, or C++. However, in some embodiments, those instructions would instead be stored as decoded micro-operations or macro-operations generated by a decoding circuit, such as the decoding circuit 109 (FIG. 1A) or 159 (FIG. 1B).

In operation, the custom hardware prefetch control circuit 426 is to enqueue in the FIFO 421 one or more instructions of the instruction region leading to the target instruction 402 (FIG. 4A), and then cause the processor to execute the resulting arithmetic operations (if any, using the ALU 428) and memory loads (if any, using the MLU 430).

In other embodiments, different instructions from the code listing 400 are selected for inclusion in the FIFO 421. For example, if one of the instructions is identified as a "critical load," as described below, the control circuit 426 may cause the processor to focus on that instruction by enqueuing only that instruction and not enqueuing the other instructions. In some embodiments, the entire post-slice 404 (FIG. 4A) is added to the FIFO 421 and executed by the processor.

The FIFO 421, custom hardware prefetch control circuit 426, ALU 428, and MLU 430 are all optional, as indicated by their dashed borders: they can use hardware resources already included in the processor, they may be implemented in firmware or software, or they may not be included at all. For example, the FIFO 421 may be implemented in memory already available to the processor. Some embodiments implement the FIFO 421 using registers in the processor's register file. Some embodiments implement the FIFO 421 using several dedicated registers. Some embodiments use a memory organization different from a FIFO 421, such as random access memory. The ALU 428 may comprise one or more dedicated ALUs to perform the arithmetic operations. In some embodiments, the ALU 428 uses one or more existing processor execution units 1162 within the execution cluster 1160, as illustrated and described with respect to FIGS. 11A-B.

FIG. 4C is a custom software prefetcher generated for the code listing of FIG. 4A according to some embodiments. As shown, the custom software prefetcher 440 includes a first-in-first-out (FIFO) buffer 441 having pointers for a head 442 and a tail 444, in which the instructions at lines [0090], [0110], and [0140] of the code listing 400 (FIG. 4A) have been enqueued. The enqueued instructions in the FIFO 441 are intended to serve as prefetch hints. A custom software prefetch control circuit 446 is also shown.

For simplicity, and to illustrate the operation of the disclosed embodiments, the instructions enqueued in the FIFO 441 are shown in the format of a high-level programming language such as Basic, C, Fortran, or C++. However, in some embodiments those instructions would instead be stored as decoded micro-operations or macro-operations generated by a decoding circuit, such as the decoding circuit 109 (FIG. 1A) or 159 (FIG. 1B).

In operation, the custom software prefetch control circuit 446 is to cause one or more instructions of the instruction region leading to the target instruction 402 (FIG. 4A) to be enqueued in the FIFO 441, where they then serve as prefetch hints to be executed by the processor.

In other embodiments, different instructions from the code listing 400 are selected for inclusion in the FIFO 441. For example, if one of the instructions is identified as a "critical load," as described below, the control circuit 446 may cause the processor to focus on that instruction by enqueuing only that instruction and not enqueuing the other instructions. In some embodiments, the control circuit 446 causes the processor to focus on one or more critical loads by performing the critical loads before performing non-critical loads when prefetching. In some embodiments, the entire post-slice 404 (FIG. 4A) is added to the FIFO 441 and executed by the processor.

The FIFO 441 and custom software prefetch control circuit 446 are optional, as indicated by their dashed borders: they can use resources already included in the processor, or they may not be included at all. For example, the FIFO 441 may be implemented in memory already available to the processor. As another example, one or more hints enqueued in the FIFO 441 may instead be stored as instructions in memory. Some embodiments implement the FIFO 441 using registers in the processor's register file.
Some embodiments implement the FIFO 441 using several dedicated registers. Some embodiments use a memory organization different from a FIFO 441, such as random access memory. In some embodiments, the control circuit 446 causes the processor to respond to the enqueued hints using its existing execution pipeline, as illustrated and described with respect to FIGS. 11A-B.

Identifying the post-slice of an exemplary assembly code listing, Traceback 1

FIG. 5A is a code listing of instructions to be profiled by a profiler and then optimized by an optimizer, according to some embodiments. As shown, each instruction in the assembly code listing Traceback 1 500 includes an address, opcode, and operands, and comments indicating its instruction type. Traceback 1 500 is sometimes called a "hot region"; here it is a simple 17-instruction loop, in which the 17th instruction loops back to the 1st instruction and the two exit branches out of the loop go through a rarely used loop end (0xef1 and 0xef7). Traceback 1 500 has two irregular loads (0xeea and 0xf05) and two stack stores (0xef3 and 0xf00), and the remaining loads are constant-address stack loads.

Also illustrated are arcs defining the post-slice of Traceback 1 500. Beginning with the last irregular load in the loop, at 0xf05, arcs A and B identify dependencies on 0xf03 and 0xefb, respectively. The use of dotted lines here simply allows the seven arcs to be differentiated more easily. Continuing from 0xf03, arcs C and D identify dependencies on 0xeea and 0xefd, respectively. Continuing from 0xefb, arc E identifies a dependency on 0xee2. Finally, continuing from 0xeea, arcs F and G identify dependencies on 0xee7 and 0xee4, respectively.

FIG. 5B illustrates the post-slice of the instruction stream of FIG. 5A as a flow diagram. As shown, the seven arcs labeled A to G identify the same seven dependencies among the same eight post-slice instructions of Traceback 1 500, represented here by eight flow diagram nodes. Specifically, the eight flow diagram nodes labeled 522, 524, 526, 528, 530, 532, 534, and 536 correspond, respectively, to the eight instructions at Traceback 1 500 addresses 0xee4, 0xee7, 0xee2, 0xeea, 0xefd, 0xefb, 0xf03, and 0xf05.
In operation, according to some embodiments, as further described below with respect to FIGS. 5A-B and 6A-B, a processor having a binary translator (BT) that includes a BT cache stores the instruction stream of Traceback 1 500 in the BT cache. Using a profiler, the binary translator identifies delayed irregular loads (DIRRLs). Then, as described below, using the optimizer, the BT determines whether a DIRRL is prefetchable and, if so, generates a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL. The generated custom prefetcher can be implemented in software and/or hardware.

Post-slice analysis and prefetcher generation for exemplary Traceback 2

FIG. 6A is a code listing of assembly instructions to be profiled by a profiler and then optimized by an optimizer, according to some embodiments. As shown, each instruction in the assembly code listing Traceback 2 600 includes an address, opcode, and operands, and some have comments indicating their instruction type. Traceback 2 600 is also sometimes called a "hot region"; here it is also a loop (with 48 instructions), but with a more complex control flow (shown in FIGS. 6B and 6C). It has two stride loads (0x765 and 0x770) and four irregular loads (0x7cb, 0x7dc, 0x7ea, 0x7fb), but no stores. It also has three branches with a high misprediction rate on a common branch prediction circuit.

Also illustrated are arcs defining the post-slice of Traceback 2 600. Beginning with the last irregular load in the loop, the delayed irregular load at 0x7fb, arcs A, B, D, E, F, R, and S identify a dependency chain back through the instructions at 0x7f4, 0x7f1, 0x7ee, 0x7bf, 0x765, 0x75e, 0x75b, and 0x7e6. The use of dashed lines here simply allows the arcs to be differentiated more easily. Beginning with the second-to-last irregular load, at 0x7ea, arcs G and J identify dependencies on 0x7cf and 0x7bf, respectively; arcs H, K, L, and M identify dependencies on 0x7dc, 0x7d5, 0x7d2, and 0x7bf, respectively; and arcs I, N, O, P, and Q identify dependencies on 0x7cb, 0x7c8, 0x7c5, 0x770, and 0x75e, respectively.

For ease of illustration and discussion, the code listing Traceback 2 600 has been divided into eight (8) regions, labeled A 602, B 604, C 606, D 608, E 610, F 612, G 614, and H 616, each of which ends with a branch instruction. The illustrated eight regions are further described and illustrated in FIG. 6B, which contains a flow diagram node for each region.

FIG. 6B is a control flow diagram illustrating the regions of Traceback 2 600 as defined in FIG. 6A. As shown, the flow diagram 620 of the Traceback 2 regions includes nodes corresponding to the eight regions defined in FIG. 6A. Specifically, the eight nodes 622, 624, 626, 628, 630, 632, 634, and 636 are labeled A to H and are defined by the same instructions as the corresponding regions in FIG. 6A.

FIG. 6C illustrates the post-slice of the instruction stream of Traceback 2 of FIG. 6A as a flow diagram. As shown, the Traceback 2 post-slice 640 flow diagram contains 18 nodes corresponding to 18 post-slice instructions, 642, 644, 646, 648, 650, 652, 654, 656, 658, 660, 662, 664, 666, 668, 670, 672, 674, and 676, and the dependencies among the labeled instructions are 19 paths labeled A to S. The paths among the illustrated nodes match the arcs in the Traceback 2 instructions of FIG. 6A.

As illustrated and described with respect to FIGS. 5B, 6B, and 6C, the post-slices of Traceback 1 and Traceback 2 capture the data flow between successive iterations of the irregular loads. Forward edges (from a lower instruction address to a higher instruction address) indicate data flow within an iteration, and backward edges (from a higher instruction address to a lower instruction address) indicate data flow from a previous iteration of the loop.

In FIGS. 5B and 6C, nodes representing regular and constant loads are marked with a "#" symbol, and irregular loads are marked with a "*". It can be seen that the number of instructions in the post-slice of an irregular load is significantly smaller than the size of the loop (8 < 17 in Traceback 1, and 18 < 48 in Traceback 2). The disclosed embodiments therefore advantageously can prefetch all relevant dependencies of the target irregular load without having to prefetch all data accessed by the program.

Another advantage of the disclosed embodiments is that these post-slices capture the critical recurrence relationships between successive loop iterations of an irregular load. A cycle describes a situation in which a calculation performed by a later instruction depends on the output of an earlier instruction and produces a new value on which that earlier instruction in turn depends when it is subsequently executed.
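A minimal C sketch of the two kinds of cycles, under the assumption that the loop bodies and types shown are merely illustrative: in the first loop, the only recurrence (the loop index update) involves no memory operation, while in the second, the next address itself comes from an irregular load:

/* Hedged sketch: a recurrence with no memory operation (prefetchable)
 * versus a pointer-chasing recurrence (not prefetchable). */
#include <stddef.h>

struct node { struct node *next; int payload; };

static long sum_strided(const int *a, const int *idx, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)   /* cycle: i -> i + 1, a non-memory op  */
        s += a[idx[i]];           /* irregular load fed by a stride load */
    return s;
}

static long sum_chain(const struct node *p)
{
    long s = 0;
    for (; p != NULL; p = p->next)  /* cycle contains an irregular load  */
        s += p->payload;            /* next address depends on this load */
    return s;
}

The first loop can be run ahead of by prefetching, as discussed next; the second corresponds to the pointer-chasing situation described below for Traceback 1.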
In FIG. 5B, for example, there are two cycles: (0xee7, 0xeea) and (0xee2, 0xefb). Of these, the latter is a trivial cycle consisting only of register moves and can be ignored. Similarly, there are three cycles in the region from Traceback 2: (0x7e6), (0x765, 0x7bf), and (0x770, 0x7c5). These cycles capture the essential recurrence relationship between the virtual addresses of successive dynamic instances of an irregular load. Note that these cycles have significantly fewer instructions than the post-slice itself (4 versus 8 in Traceback 1, and 8 versus 18 in Traceback 2).

The optimizer determines whether a load is "prefetchable"

The optimizer determines whether a delayed irregular load can be prefetched by analyzing the post-slice of the instruction. "Prefetchable" loads are those whose post-slices have cycles consisting entirely of non-memory operations or regular memory operations. If it determines that a delayed irregular load is prefetchable, the optimizer generates a custom prefetcher for the code region containing the prefetchable load.

In some embodiments, all cycles in the region from Traceback 2 are composed of non-memory operations or regular memory operations. Since the post-slices of 0x765 and 0x770 contain only a single cycle (0x7e6) with a single register increment, it is statically evident that they are both stride loads. Thus, the cycles (0x765, 0x7bf) and (0x770, 0x7c5) do not have any irregular memory operations. Therefore, as long as the loop executes long enough, these cycles can be "run ahead" (by prefetching the stride loads) multiple iterations before the main computation. On the other hand, the non-trivial cycle in Traceback 1, (0xee7, 0xeea), has a constant-address load (0xee7), but the other load (0xeea) is irregular. It is therefore not possible to "run ahead" of this cycle just by prefetching 0xee7. In fact, 0xeea is a "pointer-chasing" load, and its latency to memory cannot be reduced except by moving the entire computation closer to memory. From the reasoning above, the region in Traceback 2 is "prefetchable," while the region in Traceback 1 is not.

As described above, the optimizer performs data flow analysis on regions with irregular loads. It generates a data flow graph for the integer data flow of the address calculations and enumerates all the cycles in the graph. If no cycle has any irregular memory operations, the optimizer determines the region to be prefetchable and generates a custom prefetcher for it.

Another advantageous aspect of the disclosed embodiments stems from the fact that a prevailing pattern relating irregular loads to stride loads is indirection; that is, the value of the stride load is used as the address of the irregular load, with an optional linear transformation (K1 * address + K2, where K1 and K2 are constants). This occurs in indirect memory access patterns, such as A[B[i]], where B is a contiguous array of indices. The techniques applied in the disclosed embodiments not only determine such scenarios to be prefetchable and generate custom prefetchers for them, but also apply to the more general case in which the transformation can be an arbitrary function (not necessarily linear), i.e., A[f(B[i])], where f is an arbitrary function. This access pattern is common in hash tables, for example, where f is the hash function of interest.
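As an illustration of the kind of custom software prefetcher that could be emitted for the A[B[i]] pattern just described, the following hedged C sketch uses the GCC/Clang __builtin_prefetch intrinsic; the LOOKAHEAD constant stands in for the heuristically calculated look-ahead distance discussed next:

/* Hedged sketch: prefetching the indirect pattern A[B[i]].
 * Because B is a stride load, its future value (and thus the future
 * address of the irregular load) can be computed ahead of time. */
#define LOOKAHEAD 2   /* illustrative look-ahead distance */

static long sum_indirect(const int *A, const int *B, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++) {
        if (i + LOOKAHEAD < n) {
            /* read prefetch, high temporal locality */
            __builtin_prefetch(&A[B[i + LOOKAHEAD]], 0, 3);
        }
        s += A[B[i]];
    }
    return s;
}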
The optimizer generates the custom prefetcher

According to the disclosed embodiments, the next step after identifying prefetchable loads is to generate a custom prefetcher for them. In some embodiments, the software profiler applies a heuristic to define a custom prefetcher, either in software or in hardware, that prefetches a calculated number of loop iterations ahead, where the calculation involves estimating how long the instructions in the loop will take to execute and then prefetching enough loop iterations ahead to establish a "look-ahead" that stays far enough in front of the code instructions to hide the latency incurred by cache misses.

Further, in some embodiments, the software profiler identifies one or more "critical loads" in the loop that are expected to require a relatively high number of cycles to execute, and then generates a custom prefetcher for those critical load(s). Critical loads may include loads that experience frequent cache misses. Critical loads may include loads that are coupled with complex arithmetic operations. In some embodiments, the custom prefetcher focuses the processor on the critical loads, if any. To focus on critical loads, the custom prefetcher causes the processor to execute those critical loads before non-critical operations.

In some embodiments, the operations performed in the post-slice and selected for inclusion in the custom prefetcher are, in addition to register moves, loads and arithmetic and/or logical operations, which in the case of a hardware prefetcher are all implemented using several dedicated address generation units and ALUs. The selected arithmetic and/or logical operations (if any) include one or more of addition, subtraction, increment, decrement, multiplication, division, AND, OR, XOR, negation, and shift. In some embodiments, the selected arithmetic operations comprise complex operations, such as a square root. In some embodiments, the selected arithmetic operations comprise trigonometric operations.

FIG. 7A illustrates an exemplary application-specific custom software prefetcher according to some embodiments. Illustrated is a custom software prefetcher generated for Traceback 2 using the prefetch hint instruction 'prefetcht0'. Prefetching is achieved by inserting a software prefetch segment 700 after the instruction at address 0x770 and maintaining a look-ahead of two iterations ahead of the main loop. The disclosed embodiment assumes that %bn is a register reserved for the use of the BT, and that the mask at the Traceback 2 instruction "0x75e: andl $0x1fff, %r13d" does not cause wraparound. Therefore, in some embodiments, a one-time check of the wraparound condition is inserted before the BT-generated code, prior to entering the loop with the custom prefetcher. In some embodiments, for the rare cases when the wraparound condition is true, a separate version of the loop without the custom prefetcher is used. Also, the software prefetch segment 700 has no intervening stores between successive iterations of the loop. If intervening stores existed, the BT engine's support for speculative loads and alias checking would be used.

In some embodiments, all loads in the custom software prefetch segment 700 are made speculative to ensure that there is no change to the memory ordering of the application.

FIG. 7B illustrates an exemplary application-specific custom hardware prefetcher corresponding to the custom software prefetcher of FIG. 7A, according to some embodiments.
The hardware prefetcher 720 is a hardware alternative to the prefetcher for Traceback 2 and is implemented in custom hardware tightly coupled with the CPU's stride load prefetchers (stepper 1 722 and stepper 2 724 in FIG. 7B). The inputs to the stepper blocks are the stride load instructions (at addresses 0x765 and 0x770) whose addresses are to be tracked. The "value" blocks 726 and 728 access the cache and the data translation lookaside buffer (DTLB), while the "+" operations 730 and 732 and the "&" operations 734 and 736 are addition operations and bitwise AND operations, respectively. The "address" block 738 is an address generation unit that calculates a virtual address 742 based on a value 740 and base-index-scale inputs. For clarity, FIG. 7B shows a scenario in which the prefetcher maintains one iteration of look-ahead before the main computation. However, the look-ahead can be increased by configuring the steppers to run correspondingly further ahead and by using the ALUs for multiple iterations of look-ahead. It is to be noted that, in some embodiments, this hardware is enabled when entering the loop and disabled when exiting from it.

Additional examples

Example 1 provides an exemplary processor including: a cache memory; a fetch and decode circuit to fetch and decode instructions from memory; and an execution circuit including a binary translator (BT) to respond to the decoded instructions by: storing the decoded instruction stream in a BT cache, identifying a delayed irregular load (DIRRL) in the stream, determining whether the DIRRL is prefetchable, and, if so, generating a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL.

Example 2 contains the essence of the exemplary processor of Example 1, wherein the DIRRL is a delayed load experiencing cache misses on more than a first threshold number of consecutive dynamic instances.

Example 3 contains the essence of the exemplary processor of Example 2, wherein the DIRRL is an irregular load having at least a second threshold number of distinct address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of consecutive dynamic instances.

Example 4 contains the essence of the exemplary processor of Example 3, wherein the execution circuit calculates a post-slice between two consecutive dynamic instances of the DIRRL, and determines that the DIRRL is prefetchable when the post-slice contains cycles consisting entirely of non-memory operations or regular memory operations.

Example 5 contains the essence of the exemplary processor of Example 4, wherein the custom prefetcher causes the processor to prefetch a single critical load in the post-slice.

Example 6 contains the essence of the exemplary processor of Example 4, wherein the custom prefetcher causes the processor to prefetch a plurality of irregular loads, the plurality of irregular loads containing fewer instructions than are contained in the post-slice.

Example 7 contains the essence of the exemplary processor of Example 1, wherein the custom prefetcher contains one or more prefetch hints among the instruction stream stored in the memory.

Example 8 contains the essence of the exemplary processor of Example 1, wherein the custom prefetcher includes a hardware prefetcher using the execution circuit.

Example 9 contains the essence of the exemplary processor of Example 1, wherein the BT is separate from the execution circuit.

Example 10 contains the essence of the exemplary processor of Example 1, wherein the BT is incorporated into the execution circuit.
Example 11 provides an exemplary method performed by a processor, the method comprising: fetching and decoding instructions from memory using a fetch and decode circuit; and responding to the decoded instructions with an execution circuit using a binary translator to: store the decoded instruction stream in a BT cache memory; track cache misses of load instructions to identify delayed loads; profile the address increments of consecutive instances of the delayed loads to identify delayed irregular loads (DIRRL); and determine whether the DIRRL is prefetchable by analyzing the post-slice between consecutive dynamic instances of the DIRRL and, if so, generate a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL.

Example 12 contains the essence of the exemplary method of Example 11, wherein the DIRRL is a delayed load whose consecutive instances experience more than a first threshold number of cache misses.

Example 13 contains the essence of the exemplary method of Example 12, wherein the DIRRL is further an irregular load having at least a second threshold number of distinct address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of consecutive dynamic instances.

Example 14 contains the essence of the exemplary method of Example 11, wherein the DIRRL is determined to be prefetchable when the post-slice contains cycles consisting entirely of non-memory operations or regular memory operations.

Example 15 contains the essence of the exemplary method of Example 11, wherein the custom prefetcher includes one or more prefetch hints stored among a stream of instructions in the memory.

Example 16 contains the essence of the exemplary method of Example 11, wherein the custom prefetcher includes a custom hardware prefetcher using the execution circuit.

Example 17 contains the essence of the exemplary method of Example 11, wherein the custom prefetcher causes the processor to prefetch a single critical load in the post-slice.

Example 18 contains the essence of the exemplary method of Example 11, wherein the custom prefetcher causes the processor to prefetch a plurality of irregular loads, the plurality of irregular loads containing fewer instructions than are contained in the post-slice.

Example 19 contains the essence of the exemplary method of Example 11, wherein the BT is separate from the execution circuit.

Example 20 contains the essence of the exemplary method of Example 11, wherein the BT is incorporated into the execution circuit.

Example 21 provides an exemplary processor including: a cache memory; a fetch and decode circuit to fetch and decode instructions from memory; and a binary translator (BT) to respond to the decoded instructions by: storing a plurality of the decoded instructions in a BT cache, identifying a delayed irregular load (DIRRL) among the stored instructions, determining whether the DIRRL is prefetchable, and, if so, generating a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL.

Example 22 contains the essence of the exemplary processor of Example 21, wherein the DIRRL is a delayed load experiencing cache misses on more than a first threshold number of consecutive dynamic instances.

Example 23 contains the essence of the exemplary processor of Example 22, wherein the DIRRL is an irregular load having at least a second threshold number of distinct address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of consecutive dynamic instances.
Example 24 contains the essence of the exemplary processor of Example 23, wherein the execution circuit calculates a post-slice between two consecutive dynamic instances of the DIRRL, and determines that the DIRRL is prefetchable when the post-slice contains cycles consisting entirely of non-memory operations or regular memory operations.

Example 25 contains the essence of the exemplary processor of Example 24, wherein the custom prefetcher causes the processor to prefetch one or more critical loads among the post-slice.

Example 26 contains the essence of the exemplary processor of Example 24, wherein the custom prefetcher causes the processor to prefetch a plurality of irregular loads, the plurality of irregular loads containing fewer instructions than are contained in the post-slice.

Example 27 contains the essence of the exemplary processor of Example 21, wherein the custom prefetcher includes one or more prefetch hints among the plurality of instructions stored in the memory.

Example 28 contains the essence of the exemplary processor of Example 21, wherein the custom prefetcher includes a hardware prefetcher using the execution circuit.

Example 29 contains the essence of the exemplary processor of Example 21, wherein the processor further includes an execution circuit, and wherein the BT is separate from the execution circuit.

Example 30 contains the essence of the exemplary processor of Example 21, wherein the processor further includes an execution circuit, and wherein the BT is incorporated into the execution circuit.

Example 31 provides an exemplary non-transitory computer-readable medium containing instructions that, when executed by a computing device, cause the computing device to respond by: fetching and decoding instructions from memory using a fetch and decode circuit; and using a binary translator (BT) to respond to the decoded instructions to: store a plurality of the decoded instructions in a BT cache; track cache misses of load instructions to identify delayed loads; profile the address increments of consecutive instances of the delayed loads to identify delayed irregular loads (DIRRL); and determine whether the DIRRL is prefetchable by analyzing the post-slice between consecutive dynamic instances of the DIRRL and, if so, generate a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL.

Example 32 contains the essence of the exemplary computer-readable medium of Example 31, wherein the DIRRL is a delayed load whose consecutive instances experience more than a first threshold number of cache misses.

Example 33 contains the essence of the exemplary computer-readable medium of Example 32, wherein the DIRRL has at least a second threshold number of distinct address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of consecutive dynamic instances.

Example 34 contains the essence of the exemplary computer-readable medium of Example 31, wherein the DIRRL is determined to be prefetchable when the post-slice contains cycles consisting entirely of non-memory operations or regular memory operations.

Example 35 contains the essence of the exemplary computer-readable medium of Example 31, wherein the custom prefetcher includes one or more prefetch hints stored among the plurality of instructions in the memory.
The invention also provides the following technical solutions:

Technical Solution 1. A processor, including: a cache memory; a fetch and decode circuit to fetch and decode instructions from memory; and an execution circuit including a binary translator (BT) to respond to the decoded instructions by: storing a plurality of the decoded instructions in a BT cache; identifying a delayed irregular load (DIRRL) among the stored instructions; determining whether the DIRRL is prefetchable; and, if so, generating a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL.

Technical Solution 2. The processor of Technical Solution 1, wherein the DIRRL is a delayed load experiencing cache misses on more than a first threshold number of consecutive dynamic instances.

Technical Solution 3. The processor of Technical Solution 2, wherein the DIRRL is an irregular load having at least a second threshold number of distinct address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of consecutive dynamic instances.

Technical Solution 4. The processor of Technical Solution 3, wherein the execution circuit calculates a post-slice between two consecutive dynamic instances of the DIRRL, and determines that the DIRRL is prefetchable when the post-slice contains cycles consisting entirely of non-memory operations or regular memory operations.

Technical Solution 5. The processor of Technical Solution 4, wherein the custom prefetcher focuses the processor on one or more critical loads among the post-slice by enqueuing only the one or more critical loads and not enqueuing the others.

Technical Solution 6. The processor of Technical Solution 4, wherein the custom prefetcher causes the processor to prefetch a plurality of irregular loads, the plurality of irregular loads containing fewer instructions than are contained in the post-slice.

Technical Solution 7. The processor of Technical Solution 1, wherein the custom prefetcher includes one or more prefetch hints among the plurality of instructions stored in the memory.

Technical Solution 8. The processor of Technical Solution 1, wherein the custom prefetcher includes a hardware prefetcher using the execution circuit.

Technical Solution 9. The processor of Technical Solution 1, wherein the custom prefetcher includes one or more prefetch hint instructions to be executed using an existing instruction execution pipeline of the processor.

Technical Solution 10. The processor of Technical Solution 1, wherein the custom prefetcher includes a hardware prefetcher using an existing execution cluster of the processor.

Technical Solution 11. A processor, comprising: a cache memory; a fetch and decode circuit to fetch and decode instructions from memory; and a binary translator (BT) to respond to the decoded instructions by: storing a plurality of the decoded instructions in a BT cache; identifying a delayed irregular load (DIRRL) among the stored instructions; determining whether the DIRRL is prefetchable; and, if so, generating a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL.

Technical Solution 12. The processor of Technical Solution 11, wherein the DIRRL is a delayed load experiencing cache misses on more than a first threshold number of consecutive dynamic instances.
Technical Solution 13. The processor of Technical Solution 12, wherein the DIRRL is an irregular load having at least a second threshold number of distinct address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of consecutive dynamic instances.

Technical Solution 14. The processor of Technical Solution 13, wherein the execution circuit calculates a post-slice between two consecutive dynamic instances of the DIRRL, and determines that the DIRRL is prefetchable when the post-slice contains cycles consisting entirely of non-memory operations or regular memory operations.

Technical Solution 15. The processor of Technical Solution 14, wherein the custom prefetcher, when performing the prefetch, causes the processor to execute one or more critical loads among the post-slice before performing non-critical loads.

Technical Solution 16. The processor of Technical Solution 14, wherein the custom prefetcher causes the processor to prefetch a plurality of irregular loads, the plurality of irregular loads containing fewer instructions than are contained in the post-slice.

Technical Solution 17. The processor of Technical Solution 11, wherein the custom prefetcher includes one or more prefetch hints among the plurality of instructions stored in the memory.

Technical Solution 18. The processor of Technical Solution 11, wherein the custom prefetcher includes a hardware prefetcher using the execution circuit.

Technical Solution 19. The processor of Technical Solution 11, wherein the processor further includes an execution circuit, and wherein the BT is separate from the execution circuit.

Technical Solution 20. The processor of Technical Solution 11, wherein the processor further includes an execution circuit, and wherein the BT is incorporated into the execution circuit.

Technical Solution 21. A non-transitory computer-readable medium containing instructions that, when executed by a computing device, cause the computing device to respond by: fetching and decoding instructions from memory using a fetch and decode circuit; and using a binary translator (BT) to respond to the decoded instructions to: store a plurality of the decoded instructions in a BT cache memory; track cache misses of load instructions to identify delayed loads; profile the address increments of consecutive instances of the delayed loads to identify delayed irregular loads (DIRRL); and determine whether the DIRRL is prefetchable by analyzing the post-slice between consecutive dynamic instances of the DIRRL and, if so, generate a custom prefetcher to direct the processor's prefetching into the instruction region leading to the prefetchable DIRRL.

Technical Solution 22. The computer-readable medium of Technical Solution 21, wherein the DIRRL is a delayed load whose consecutive instances experience more than a first threshold number of cache misses.

Technical Solution 23. The computer-readable medium of Technical Solution 22, wherein the DIRRL has at least a second threshold number of distinct address increments among its consecutive dynamic instances, and wherein the second threshold number of address increments covers fewer than a third threshold number of consecutive dynamic instances.

Technical Solution 24. The computer-readable medium of Technical Solution 21, wherein the DIRRL is determined to be prefetchable when the post-slice contains cycles consisting entirely of non-memory operations or regular memory operations.
Technical Solution 25. The computer-readable medium of Technical Solution 21, wherein the custom prefetcher includes one or more prefetch hints stored among the plurality of instructions in the memory.

Instruction Set

An instruction set may contain one or more instruction formats. A given instruction format may define, among other things, various fields (number of bits, location of bits) to specify the operation to be performed (opcode) and the operand(s) on which that operation is to be performed, and/or other data field(s) (e.g., masks). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because fewer fields are included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands. For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released (see, e.g., Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; see also Intel® Advanced Vector Extensions Programming Reference, October 2014).
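As a purely illustrative sketch of the idea that an instruction format defines fields at particular bit positions, the following C macros decode a hypothetical 32-bit instruction word; the field widths, positions, and opcode value are invented and do not describe any real encoding:

/* Hedged sketch: field extraction from an invented 32-bit encoding. */
#include <stdint.h>

#define OPCODE(insn) (((insn) >> 24) & 0xffu)  /* operation to perform   */
#define DST(insn)    (((insn) >> 19) & 0x1fu)  /* destination register   */
#define SRC1(insn)   (((insn) >> 14) & 0x1fu)  /* first source register  */
#define SRC2(insn)   (((insn) >>  9) & 0x1fu)  /* second source register */

/* An occurrence of a hypothetical ADD: the opcode field selects ADD,
 * and the operand fields select the specific registers. */
static inline int is_add(uint32_t insn) { return OPCODE(insn) == 0x01u; }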
Exemplary instruction formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic vector friendly instruction format

A vector friendly instruction format is an instruction format suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

FIGS. 8A and 8B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to some embodiments of the invention. FIG. 8A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to some embodiments of the invention; and FIG. 8B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to some embodiments of the invention. Specifically, class A and class B instruction templates are defined for the generic vector friendly instruction format 800, both of which include no memory access 805 instruction templates and memory access 820 instruction templates. The term generic, in the context of the vector friendly instruction format, refers to the instruction format not being tied to any specific instruction set.

While embodiments will be described in which the vector friendly instruction format supports the following: a 64-byte vector operand length (or size) with 32-bit (4-byte) or 64-bit (8-byte) data element widths (or sizes) (and thus, a 64-byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64-byte vector operand length (or size) with 16-bit (2-byte) or 8-bit (1-byte) data element widths (or sizes); a 32-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); and a 16-byte vector operand length (or size) with 32-bit (4-byte), 64-bit (8-byte), 16-bit (2-byte), or 8-bit (1-byte) data element widths (or sizes); alternative embodiments may support more, fewer, or different data element widths (e.g., 128-bit (16-byte) data element widths) and more, fewer, and/or different vector operand sizes (e.g., 256-byte vector operands).

The class A instruction templates in FIG. 8A include: 1) within the no memory access 805 instruction templates, a no memory access, full round control type operation 810 instruction template and a no memory access, data transform type operation 815 instruction template are shown; and 2) within the memory access 820 instruction templates, a memory access, temporal 825 instruction template and a memory access, non-temporal 830 instruction template are shown. The class B instruction templates in FIG. 8B include: 1) within the no memory access 805 instruction templates, a no memory access, write mask control, partial round control type operation 812 instruction template and a no memory access, write mask control, VSIZE type operation 817 instruction template are shown; and 2) within the memory access 820 instruction templates, a memory access, write mask control 827 instruction template is shown.

The generic vector friendly instruction format 800 includes the following fields, listed below in the order illustrated in FIGS. 8A-8B.

Format field 840 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus identifies occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 842 - its content distinguishes different base operations.

Register index field 844 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three source registers and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., they may support up to two sources, where one of these sources also acts as the destination; may support up to three sources, where one of these sources also acts as the destination; or may support up to two sources and one destination).

Modifier field 846 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 805 instruction templates and memory access 820 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destination are registers). While in one embodiment this field also selects among three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 850 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In some embodiments, this field is divided into a category field 868, an alpha field 852, and a beta field 854. The augmentation operation field 850 allows common groups of operations to be performed in a single instruction rather than in 2, 3, or 4 instructions.

Scale field 860 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale * index + base).

Displacement field 862A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale * index + base + displacement).

Displacement factor field 862B (note that the juxtaposition of the displacement field 862A directly over the displacement factor field 862B indicates that one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale * index + base + scaled displacement). Redundant low-order bits are ignored, and hence the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 874 (described later herein) and the data manipulation field 854C. The displacement field 862A and the displacement factor field 862B are optional in the sense that they are not used for the no memory access 805 instruction templates and/or different embodiments may implement only one or neither of the two.
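A hedged C sketch of the address generation just described, with the displacement factor field's 8-bit value scaled by the access size N at use; the function and parameter names are illustrative:

/* Hedged sketch: 2^scale * index + base + (disp8 * N). */
#include <stdint.h>

static uint64_t effective_address(uint64_t base, uint64_t index,
                                  unsigned scale,  /* 0..3 => x1,x2,x4,x8 */
                                  int8_t disp8, unsigned n_bytes)
{
    /* The redundant low-order bits of the full displacement are not
     * encoded; hardware multiplies disp8 by the access size N. */
    int64_t displacement = (int64_t)disp8 * (int64_t)n_bytes;
    return base + (index << scale) + (uint64_t)displacement;
}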
Data element width field 864 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments, for all instructions; in other embodiments, for only some of the instructions). This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 870 - its content controls, on a per-data-element-position basis, whether the data element position in the destination vector operand reflects the result of the base operation and the augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in another embodiment, the old value of each element of the destination where the corresponding mask bit has a 0 is preserved. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements being modified be consecutive. Thus, the write mask field 870 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments of the invention are described in which the write mask field 870's content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field 870's content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field 870's content to directly specify the masking to be performed.

Immediate field 872 - its content allows for the specification of an immediate value. This field is optional in the sense that it is not present in implementations of the generic vector friendly format that do not support immediates and is not present in instructions that do not use an immediate.

Category field 868 - its content distinguishes between different classes of instructions. With reference to FIGS. 8A-B, the contents of this field select between class A and class B instructions. In FIGS. 8A-B, rounded-corner squares are used to indicate a specific value present in a field (e.g., class A 868A and class B 868B for the category field 868 in FIGS. 8A-B, respectively).

Class A instruction templates

In the case of the non-memory access 805 instruction templates of class A, the alpha field 852 is interpreted as an RS field 852A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 852A.1 and data transform 852A.2 are respectively specified for the no memory access, round type operation 810 and the no memory access, data transform type operation 815 instruction templates), while the beta field 854 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.

No Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access full round control type operation 810 instruction template, the beta field 854 is interpreted as a round control field 854A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 854A includes a suppress all floating-point exceptions (SAE) field 856 and a round operation control field 858, alternative embodiments may encode both of these concepts into the same field or have only one or the other of these concepts/fields (e.g., may have only the round operation control field 858).

SAE field 856 - its content distinguishes whether or not to disable exception event reporting; when the SAE field 856's content indicates that suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler.

Round operation control field 858 - its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 858 allows the rounding mode to be changed on a per-instruction basis. In some embodiments in which the processor includes a control register for specifying rounding modes, the round operation control field 850's content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

In the no memory access data transform type operation 815 instruction template, the beta field 854 is interpreted as a data transform field 854B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of the memory access 820 instruction templates of class A, the alpha field 852 is interpreted as an eviction hint field 852B, whose content distinguishes which of the eviction hints is to be used (in FIG. 8A, temporal 852B.1 and non-temporal 852B.2 are respectively specified for the memory access, temporal 825 instruction template and the memory access, non-temporal 830 instruction template), while the beta field 854 is interpreted as a data manipulation field 854C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 820 instruction templates include the scale field 860 and, optionally, the displacement field 862A or the displacement scale field 862B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the elements actually transferred dictated by the contents of the vector mask selected as the write mask.

Memory Access Instruction Templates - Temporal

Temporal data is data likely to be reused soon enough to benefit from caching.
No memory access instruction templates - data transform type operation

In the no memory access data transform type operation 815 instruction template, the β field 854 is interpreted as a data transform field 854B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of the memory access 820 instruction templates of class A, the alpha field 852 is interpreted as an eviction hint field 852B, whose content distinguishes which one of the eviction hints is to be used (in FIG. 8A, temporal 852B.1 and non-temporal 852B.2 are respectively specified for the memory access, temporal 825 instruction template and the memory access, non-temporal 830 instruction template), while the β field 854 is interpreted as a data manipulation field 854C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement scale field 862B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data-element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory access instruction templates - temporal

Temporal data is data likely to be reused soon enough to benefit from caching. However, this is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory access instruction templates - non-temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the first-level cache and should be given priority for eviction. This, however, is a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Class B instruction templates

In the case of the instruction templates of class B, the alpha field 852 is interpreted as a write mask control (Z) field 852C, whose content distinguishes whether the write masking controlled by the write mask field 870 should be a merging or a zeroing.

In the case of the non-memory access 805 instruction templates of class B, part of the β field 854 is interpreted as an RL field 857A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 857A.1 and vector length (VSIZE) 857A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 812 instruction template and the no memory access, write mask control, VSIZE type operation 817 instruction template), while the rest of the β field 854 distinguishes which of the operations of the specified type is to be performed. In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.

In the no memory access, write mask control, partial round control type operation 812 instruction template, the rest of the β field 854 is interpreted as a round operation field 859A, and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating-point exception handler).

Round operation control field 859A - just as with the round operation control field 858, its content distinguishes which one of a group of rounding operations to perform (e.g., round-up, round-down, round-towards-zero, and round-to-nearest). Thus, the round operation control field 859A allows the rounding mode to be changed on a per instruction basis. In some embodiments where the processor includes a control register for specifying rounding modes, the round operation control field 850's content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 817 instruction template, the rest of the β field 854 is interpreted as a vector length field 859B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 bit).

In the case of the memory access 820 instruction templates of class B, part of the β field 854 is interpreted as a broadcast field 857B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the β field 854 is interpreted as the vector length field 859B. The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement scale field 862B.
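To make the vector length selection concrete, here is a minimal sketch of how many data elements each selectable length covers; the helper function is hypothetical, and the 32-bit element width is an assumption for the example (the text above does not fix it).

```python
# A minimal sketch: number of data elements covered by each selectable
# vector length, assuming (hypothetically) 32-bit data elements.
def elements_covered(vector_length_bits: int, element_width_bits: int = 32) -> int:
    return vector_length_bits // element_width_bits

for vl_bits in (128, 256, 512):
    print(f"{vl_bits}-bit vector -> {elements_covered(vl_bits)} x 32-bit elements")
# 128 -> 4, 256 -> 8, 512 -> 16
```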
With regard to the generic vector friendly instruction format 800, a full opcode field 874 is shown, including the format field 840, the base operation field 842, and the data element width field 864. While one embodiment is shown where the full opcode field 874 includes all of these fields, in embodiments that do not support all of them, the full opcode field 874 includes less than all of these fields. The full opcode field 874 provides the operation code (opcode).

The augmentation operation field 850, the data element width field 864, and the write mask field 870 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format.

The combination of the write mask field and the data element width field creates typed instructions, in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes, but not all templates and instructions from both classes, is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out-of-order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor that is currently executing the code.

Exemplary specific vector friendly instruction format

FIG. 9A is a block diagram illustrating an exemplary specific vector friendly instruction format according to some embodiments of the present invention. FIG. 9A shows a specific vector friendly instruction format 900 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 900 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX).
This format remains consistent with the prefix encoding field, true opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from FIG. 8 into which the fields from FIG. 9A map are illustrated.

It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 900 in the context of the generic vector friendly instruction format 800 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 900 except where claimed. For example, the generic vector friendly instruction format 800 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 900 is shown as having fields of specific sizes. By way of specific example, while the data element width field 864 is illustrated as a one-bit field in the specific vector friendly instruction format 900, the invention is not so limited (that is, the generic vector friendly instruction format 800 contemplates other sizes of the data element width field 864).

The specific vector friendly instruction format 900 includes the following fields listed below in the order illustrated in FIG. 9A.

EVEX prefix (bytes 0-3) 902 - is encoded in a four-byte form.

Format field 840 (EVEX byte 0, bits [7:0]) - the first byte (EVEX byte 0) is the format field 840, and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in some embodiments).

The second through fourth bytes (EVEX bytes 1-3) include a number of bit fields providing specific capability.

REX field 905 (EVEX byte 1, bits [7-5]) consists of an EVEX.R bit field (EVEX byte 1, bit [7] - R), an EVEX.X bit field (EVEX byte 1, bit [6] - X), and an EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B and ZMM15 is encoded as 0000B. The other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' 910A - this is the first part of the REX' field 910 and is the EVEX.R' bit field (EVEX byte 1, bit [4] - R') that is used to encode either the upper 16 or the lower 16 of the extended 32 register set. In some embodiments, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose true opcode byte is 62, but which does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 915 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 864 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W.
EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 920 (EVEX byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form, and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form, for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, in which case the field is reserved and should contain 1111b. Thus, the EVEX.vvvv field 920 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 868 class field (EVEX byte 2, bit [2] - U) - if EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 925 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field, and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX formats of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. Alternative embodiments may redesign the PLA to support the 2-bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 852 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

β field 854 (EVEX byte 3, bits [6:4] - SSS; also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' 910B - this is the remainder of the REX' field 910 and is the EVEX.V' bit field (EVEX byte 3, bit [3] - V') that may be used to encode either the upper 16 or the lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 870 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In some embodiments, the specific value EVEX.kkk = 000 has a special behavior implying that no write mask is used for the particular instruction (this may be implemented in a variety of ways, including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).
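Pulling the preceding field descriptions together, the following Python sketch extracts the named EVEX fields from a four-byte prefix at the bit positions given in this document. The function and the sample bytes are invented for the example; this is only an illustration of the layout as described above, not a complete or validated instruction decoder.

```python
# A minimal sketch that pulls the main EVEX fields out of a 4-byte prefix,
# following the bit positions stated in the text. Fields marked "inverted"
# are stored in 1s complement form per the description above.
def decode_evex(prefix: bytes) -> dict:
    assert len(prefix) == 4 and prefix[0] == 0x62  # format field 840
    b1, b2, b3 = prefix[1], prefix[2], prefix[3]
    return {
        "R":    (b1 >> 7) & 1,     # EVEX.R (inverted)
        "X":    (b1 >> 6) & 1,     # EVEX.X (inverted)
        "B":    (b1 >> 5) & 1,     # EVEX.B (inverted)
        "R'":   (b1 >> 4) & 1,     # REX' 910A (inverted)
        "mmmm": b1 & 0x0F,         # opcode map field 915
        "W":    (b2 >> 7) & 1,     # data element width field 864
        "vvvv": (b2 >> 3) & 0x0F,  # EVEX.vvvv 920 (inverted)
        "U":    (b2 >> 2) & 1,     # class field 868
        "pp":   b2 & 0x03,         # prefix encoding field 925
        "EH":   (b3 >> 7) & 1,     # alpha field 852
        "SSS":  (b3 >> 4) & 0x07,  # beta field 854
        "V'":   (b3 >> 3) & 1,     # REX' 910B (inverted)
        "kkk":  b3 & 0x07,         # write mask field 870
    }

# Arbitrary sample bytes, purely to exercise the field extraction:
print(decode_evex(bytes([0x62, 0xF1, 0x7C, 0x48])))
```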
True opcode field 930 (byte 4) - this is also known as the opcode byte. A part of the opcode is specified in this field.

MOD R/M field 940 (byte 5) includes a MOD field 942, a Reg field 944, and an R/M field 946. As previously described, the MOD field 942's content distinguishes between memory access and non-memory access operations. The role of the Reg field 944 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of the R/M field 946 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) byte (byte 6) - as previously described, the scale field 860's content is used for memory address generation. SIB.xxx 954 and SIB.bbb 956 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 862A (bytes 7-10) - when the MOD field 942 contains 10, bytes 7-10 are the displacement field 862A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 862B (byte 7) - when the MOD field 942 contains 01, byte 7 is the displacement factor field 862B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64-byte cache lines, disp8 uses 8 bits that can be set to only four really useful values, -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 862B is a reinterpretation of disp8; when using the displacement factor field 862B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8*N. This reduces the average instruction length (a single byte used for the displacement, but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 862B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 862B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules), with the only exception that disp8 is overloaded to disp8*N. In other words, there are no changes in the encoding rules or encoding lengths, but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset). The immediate field 872 operates as previously described.
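A small worked example of the disp8*N reinterpretation described above (the helper name is hypothetical): the stored byte is sign-extended and multiplied by the memory access size N, so a single displacement byte covers a much larger range at a coarser granularity.

```python
# A minimal sketch of the disp8*N compressed displacement: the stored 8-bit
# value is sign-extended, then scaled by the memory operand access size N.
def disp8xN(stored_byte: int, n: int) -> int:
    # Sign-extend the 8-bit stored value.
    disp8 = stored_byte - 256 if stored_byte >= 128 else stored_byte
    return disp8 * n

# With N = 64 (e.g., a full 64-byte vector access), one byte now spans
# -128*64 .. 127*64 instead of -128 .. 127:
print(disp8xN(0x01, 64))  # 64
print(disp8xN(0xFF, 64))  # -64
print(disp8xN(0x80, 64))  # -8192
```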
Full opcode field

FIG. 9B is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the full opcode field 874 according to some embodiments. Specifically, the full opcode field 874 includes the format field 840, the base operation field 842, and the data element width (W) field 864. The base operation field 842 includes the prefix encoding field 925, the opcode map field 915, and the true opcode field 930.

Register index field

FIG. 9C is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the register index field 844 according to some embodiments. Specifically, the register index field 844 includes the REX field 905, the REX' field 910, the MODR/M.reg field 944, the MODR/M.r/m field 946, the VVVV field 920, the xxx field 954, and the bbb field 956.

Augmentation operation field

FIG. 9D is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the augmentation operation field 850 according to some embodiments. When the class (U) field 868 contains 0, it signifies EVEX.U0 (class A 868A); when it contains 1, it signifies EVEX.U1 (class B 868B). When U = 0 and the MOD field 942 contains 11 (signifying a no memory access operation), the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 852A. When the rs field 852A contains a 1 (round 852A.1), the β field 854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the round control field 854A. The round control field 854A includes a 1-bit SAE field 856 and a 2-bit round operation field 858. When the rs field 852A contains a 0 (data transform 852A.2), the β field 854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a 3-bit data transform field 854B. When U = 0 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 852B, and the β field 854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as a 3-bit data manipulation field 854C.

When U = 1, the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 852C. When U = 1 and the MOD field 942 contains 11 (signifying a no memory access operation), part of the β field 854 (EVEX byte 3, bit [4] - S0) is interpreted as the RL field 857A; when it contains a 1 (round 857A.1), the rest of the β field 854 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the round operation field 859A, while when the RL field 857A contains a 0 (VSIZE 857A.2), the rest of the β field 854 (EVEX byte 3, bits [6-5] - S2-1) is interpreted as the vector length field 859B (EVEX byte 3, bits [6-5] - L1-0). When U = 1 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the β field 854 (EVEX byte 3, bits [6:4] - SSS) is interpreted as the vector length field 859B (EVEX byte 3, bits [6-5] - L1-0) and the broadcast field 857B (EVEX byte 3, bit [4] - B).

Exemplary register architecture

FIG. 10 is a block diagram of a register architecture 1000 according to some embodiments. In the embodiment illustrated, there are 32 vector registers 1010 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 900 operates on this overlaid register file, as illustrated in the table below:

Adjustable vector length | Class | Registers
Instruction templates that do not include the vector length field 859B | A (U = 0) and B (U = 1) | zmm registers (the vector length is 64 bytes)
Instruction templates that do include the vector length field 859B | B (U = 1) | zmm, ymm, or xmm registers (the vector length is 64, 32, or 16 bytes), depending on the vector length field 859B

In other words, the vector length field 859B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 859B operate on the maximum vector length.
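The register overlay just described can be illustrated with a short sketch that models a 512-bit zmm register as a Python integer whose low 256 and low 128 bits correspond to the ymm and xmm views; this is purely illustrative, and the sample value is arbitrary.

```python
# A minimal sketch of the register overlay: the ymm and xmm "views" of a
# zmm register are simply its low 256 and low 128 bits.
YMM_BITS, XMM_BITS = 256, 128

zmm = (1 << 300) | (1 << 200) | 0xDEADBEEF  # some 512-bit value

ymm = zmm & ((1 << YMM_BITS) - 1)  # bit 300 is above the ymm view, bit 200 is kept
xmm = zmm & ((1 << XMM_BITS) - 1)  # only the low 128 bits remain

print(hex(xmm))  # 0xdeadbeef
print(hex(ymm))  # bit 200 plus 0xdeadbeef; bit 300 is gone
```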
In addition, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 900 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed, depending on the embodiment.

Write mask registers 1015 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternative embodiment, the write mask registers 1015 are 16 bits in size. As previously described, in some embodiments the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xffff, effectively disabling write masking for that instruction.

General purpose registers 1025 - in the embodiment illustrated, there are sixteen 64-bit general purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1045, on which is aliased the MMX packed integer flat register file 1050 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating point operations on 32/64/80-bit floating point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments may use wider or narrower registers. Additionally, alternative embodiments may use more, fewer, or different register files and registers.

Exemplary core architectures, processors, and computer architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or science (throughput). Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as the CPU; 3) the coprocessor on the same die as the CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality.
An exemplary core architecture is described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary core architectures

In-order and out-of-order core block diagram

FIG. 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to some embodiments of the invention. FIG. 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to some embodiments of the invention. The solid lined boxes in FIGS. 11A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 11A, a processor pipeline 1100 includes a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also known as a dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124.

FIG. 11B shows a processor core 1190 including a front end unit 1130 coupled to an execution engine unit 1150, and both are coupled to a memory unit 1170. The core 1190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1190 may be a special purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, which is coupled to an instruction translation lookaside buffer (TLB) 1136, which is coupled to an instruction fetch unit 1138, which is coupled to a decode unit 1140. The decode unit 1140 (or decoder) may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), and the like. In one embodiment, the core 1190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in the decode unit 1140 or otherwise within the front end unit 1130). The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

The execution engine unit 1150 includes the rename/allocator unit 1152 coupled to a retirement unit 1154 and a set of one or more scheduler unit(s) 1156. The scheduler unit(s) 1156 represents any number of different schedulers, including reservation stations, a central instruction window, and the like.
The scheduler unit(s) 1156 is coupled to the physical register file(s) unit(s) 1158. Each of the physical register file(s) units 1158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), and so on. In one embodiment, the physical register file(s) unit 1158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1158 is overlapped by the retirement unit 1154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffer(s) and retirement register file(s); using future file(s), history buffer(s), and retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1154 and the physical register file(s) unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 includes a set of one or more execution units 1162 and a set of one or more memory access units 1164. The execution units 1162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 1156, physical register file(s) unit(s) 1158, and execution cluster(s) 1160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1164 is coupled to the memory unit 1170, which includes a data TLB unit 1172 coupled to a data cache unit 1174, which is coupled to a level 2 (L2) cache unit 1176. In one exemplary embodiment, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170. The instruction cache unit 1134 is further coupled to the level 2 (L2) cache unit 1176 in the memory unit 1170.
The L2 cache unit 1176 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1100 as follows: 1) the instruction fetch 1138 performs the fetch and length decode stages 1102 and 1104; 2) the decode unit 1140 performs the decode stage 1106; 3) the rename/allocator unit 1152 performs the allocation stage 1108 and the renaming stage 1110; 4) the scheduler unit(s) 1156 performs the schedule stage 1112; 5) the physical register file(s) unit(s) 1158 and the memory unit 1170 perform the register read/memory read stage 1114, and the execution cluster 1160 performs the execute stage 1116; 6) the memory unit 1170 and the physical register file(s) unit(s) 1158 perform the write back/memory write stage 1118; 7) various units may be involved in the exception handling stage 1122; and 8) the retirement unit 1154 and the physical register file(s) unit(s) 1158 perform the commit stage 1124.

The core 1190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways, including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyper-Threading Technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes a separate instruction cache unit 1134, a separate data cache unit 1174, and a shared L2 cache unit 1176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific exemplary in-order core architecture

FIGS. 12A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG.
12A is a block diagram of a single processor core, along with its connection to an on-die interconnect network 1202 and with its local subset of the level 2 (L2) cache 1204, according to some embodiments of the invention. In one embodiment, an instruction decoder 1200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1206 allows low-latency accesses to cache memory by the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 1208 and a vector unit 1210 use separate register sets (respectively, scalar registers 1212 and vector registers 1214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1206, alternative embodiments of the invention may use a different approach (e.g., use a single register set, or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1204 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1204. Data read by a processor core is stored in its L2 cache subset 1204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data. The ring network is bi-directional to allow agents such as processor cores, L2 caches, and other logic blocks to communicate with each other within the chip. Each ring data path is 1012 bits wide per direction.

FIG. 12B is an expanded view of part of the processor core in FIG. 12A according to some embodiments of the invention. FIG. 12B includes an L1 data cache 1206A, part of the L1 cache 1206, as well as more detail regarding the vector unit 1210 and the vector registers 1214. Specifically, the vector unit 1210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1228), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with a swizzle unit 1220, numeric conversion with numeric convert units 1222A-B, and replication with a replication unit 1224 on the memory input. Write mask registers 1226 allow predicating resulting vector writes.

FIG. 13 is a block diagram of a processor 1300 according to some embodiments of the present invention. The processor 1300 may have more than one core, may have an integrated memory controller, and may have integrated graphics. The solid lined boxes in FIG.
13 illustrate a processor 1300 with a single core 1302A, a system agent 1310, and a set of one or more bus controller units 1316, while the optional addition of the dashed lined boxes illustrates an alternative processor 1300 with multiple cores 1302A-N, a set of one or more integrated memory controller unit(s) 1314 in the system agent unit 1310, and special purpose logic 1308.

Thus, different implementations of the processor 1300 may include: 1) a CPU with the special purpose logic 1308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, or a combination of the two); 2) a coprocessor with the cores 1302A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1302A-N being a large number of general purpose in-order cores. Thus, the processor 1300 may be a general-purpose processor, a coprocessor, or a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), an embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1300 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1306, and external memory (not shown) coupled to the set of integrated memory controller units 1314. The set of shared cache units 1306 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1312 interconnects the integrated graphics logic 1308 (the integrated graphics logic 1308 is an example of special purpose logic and is also referred to herein as such), the set of shared cache units 1306, and the system agent unit 1310/integrated memory controller unit(s) 1314, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 1306 and the cores 1302A-N.

In some embodiments, one or more of the cores 1302A-N are capable of multithreading. The system agent 1310 includes those components coordinating and operating the cores 1302A-N. The system agent unit 1310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or may include the logic and components needed for regulating the power state of the cores 1302A-N and the integrated graphics logic 1308. The display unit is for driving one or more externally connected displays.

The cores 1302A-N may be homogeneous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary computer architectures

FIGS. 14-17 are block diagrams of exemplary computer architectures.
Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 14, shown is a block diagram of a system 1400 in accordance with one embodiment of the present invention. The system 1400 may include one or more processors 1410, 1415, which are coupled to a controller hub 1420. In one embodiment, the controller hub 1420 includes a graphics memory controller hub (GMCH) 1490 and an input/output hub (IOH) 1450 (which may be on separate chips); the GMCH 1490 includes memory and graphics controllers to which are coupled memory 1440 and a coprocessor 1445; the IOH 1450 couples input/output (I/O) devices 1460 to the GMCH 1490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1440 and the coprocessor 1445 are coupled directly to the processor 1410, and the controller hub 1420 is in a single chip with the IOH 1450.

The optional nature of additional processors 1415 is denoted in FIG. 14 with broken lines. Each processor 1410, 1415 may include one or more of the processing cores described herein and may be some version of the processor 1300.

The memory 1440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1420 communicates with the processor(s) 1410, 1415 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1495.

In one embodiment, the coprocessor 1445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, the controller hub 1420 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1410, 1415 in terms of a spectrum of metrics of merit, including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1445. Accordingly, the processor 1410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to the coprocessor 1445. The coprocessor(s) 1445 accepts and executes the received coprocessor instructions.

Referring now to FIG. 15, shown is a block diagram of a first more specific exemplary system 1500 in accordance with an embodiment of the present invention. As shown in FIG.
15, the multiprocessor system 1500 is a point-to-point interconnect system and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. Each of the processors 1570 and 1580 may be some version of the processor 1300. In some embodiments, the processors 1570 and 1580 are respectively the processors 1410 and 1415, while the coprocessor 1538 is the coprocessor 1445. In another embodiment, the processors 1570 and 1580 are respectively the processor 1410 and the coprocessor 1445.

The processors 1570 and 1580 are shown including integrated memory controller (IMC) units 1572 and 1582, respectively. The processor 1570 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1576 and 1578; similarly, the second processor 1580 includes P-P interfaces 1586 and 1588. The processors 1570, 1580 may exchange information via a point-to-point (P-P) interface 1550 using P-P interface circuits 1578, 1588. As shown in FIG. 15, the IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of main memory locally attached to the respective processors.

The processors 1570, 1580 may each exchange information with a chipset 1590 via individual P-P interfaces 1552, 1554 using point-to-point interface circuits 1576, 1594, 1586, 1598. The chipset 1590 may optionally exchange information with the coprocessor 1538 via a high-performance interface 1592. In one embodiment, the coprocessor 1538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

The chipset 1590 may be coupled to a first bus 1516 via an interface 1596. In one embodiment, the first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 15, various I/O devices 1514 may be coupled to the first bus 1516, along with a bus bridge 1518 that couples the first bus 1516 to a second bus 1520. In one embodiment, one or more additional processors 1515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processors, are coupled to the first bus 1516. In one embodiment, the second bus 1520 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1520, including, for example, a keyboard and/or mouse 1522, communication devices 1527, and a storage unit 1528, such as a disk drive or other mass storage device, which may include instructions/code and data 1530 in one embodiment. Further, an audio I/O 1524 may be coupled to the second bus 1520. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 15, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG.
16, shown is a block diagram of a second more specific exemplary system 1600 in accordance with an embodiment of the present invention. Like elements in FIGS. 15 and 16 bear like reference numerals, and certain aspects of FIG. 15 have been omitted from FIG. 16 in order to avoid obscuring other aspects of FIG. 16.

FIG. 16 illustrates that the processors 1570, 1580 may include integrated memory and I/O control logic ("CL") 1572 and 1582, respectively. Thus, the CL 1572, 1582 include integrated memory controller units and include I/O control logic. FIG. 16 illustrates that not only are the memories 1532, 1534 coupled to the CL 1572, 1582, but also that I/O devices 1614 are coupled to the control logic 1572, 1582. Legacy I/O devices 1615 are coupled to the chipset 1590.

Referring now to FIG. 17, shown is a block diagram of an SoC 1700 in accordance with an embodiment of the present invention. Like elements in FIG. 13 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG. 17, an interconnect unit(s) 1702 is coupled to: an application processor 1710 that includes a set of one or more cores 1302A-N, which include cache units 1304A-N, and shared cache unit(s) 1306; a system agent unit 1310; a bus controller unit(s) 1316; an integrated memory controller unit(s) 1314; a set of one or more coprocessors 1720 that may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1730; a direct memory access (DMA) unit 1732; and a display unit 1740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1720 includes a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as the code 1530 illustrated in FIG. 15, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language.
In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores", may be stored on a tangible, machine-readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk, including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs) and static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines the structures, circuits, apparatuses, processors, and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation or dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to some embodiments of the present invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 18 shows a program in a high level language 1802 that may be compiled using an x86 compiler 1804 to generate x86 binary code 1806 that may be natively executed by a processor with at least one x86 instruction set core 1816.
The processor with at least one x86 instruction set core 1816 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1804 represents a compiler that is operable to generate x86 binary code 1806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1816. Similarly, FIG. 18 shows that the program in the high level language 1802 may be compiled using an alternative instruction set compiler 1808 to generate alternative instruction set binary code 1810 that may be natively executed by a processor without at least one x86 instruction set core 1814 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). An instruction converter 1812 is used to convert the x86 binary code 1806 into code that may be natively executed by the processor without an x86 instruction set core 1814. This converted code is not likely to be the same as the alternative instruction set binary code 1810, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation, or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1806.